
Training ML algorithms from data

A typical preprocessed dataset is formally defined as follows:

$$\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$$

where y is the desired output corresponding to the input vector x. The motivation of ML, then, is to use the data to find linear and non-linear transformations of x, through highly complex tensor (vector) multiplications and additions, or simply to find ways to measure similarities or distances among data points, with the ultimate purpose of predicting y given x.
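As a concrete illustration (a purely hypothetical toy dataset, not one taken from this book), such a dataset is often held as an array of input vectors and an array of desired outputs:

```python
import numpy as np

# Hypothetical toy dataset D = {(x_i, y_i)}:
# each row of X is an input vector x_i, and y holds the desired output y_i.
X = np.array([[5.1, 3.5],
              [4.9, 3.0],
              [6.2, 2.9]])  # N = 3 samples, 2 features each
y = np.array([0, 0, 1])     # desired output for each input vector
```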

A common way of thinking about this is that we want to approximate some unknown function over x:

Where w is an unknown vector that facilitates the transformation of x along with b. This formulation is very basic, linear, and is simply an illustration of what a simple learning model would look like. In this simple case, the ML algorithms revolve around finding the best w and b that yields the closest (if not perfect) approximation to y, the desired output. Very simple algorithms such as the perceptron (Rosenblatt, F. 1958) try different values for w and b using past mistakes in the choices of w and b to make the next selection in proportion to the mistakes made. 
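As a rough sketch of this idea (an illustrative NumPy implementation, not the book's code; the function name train_perceptron and the toy data are assumptions), the perceptron adjusts w and b only when it makes a mistake, and by an amount proportional to that mistake:

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """Fit w and b on inputs X (n_samples x n_features) and labels y in {-1, +1}."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # Predicted label: sign of the linear transformation w.x + b
            y_hat = 1.0 if np.dot(w, x_i) + b >= 0 else -1.0
            if y_hat != y_i:
                # Rosenblatt's rule: update in proportion to the mistake
                w += lr * y_i * x_i
                b += lr * y_i
    return w, b

# Toy usage: a linearly separable AND-like problem
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., -1., -1., 1.])
w, b = train_perceptron(X, y)
print(w, b)  # learned parameters that separate the two classes
```

For linearly separable data like this toy example, the perceptron convergence theorem guarantees that such updates eventually stop, yielding a w and b that classify every training point correctly.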

Intuitively, a combination of perceptron-like models that all look at the same input turned out to be better than a single one. Later, people realized that stacking these combinations in layers was the next logical step, leading to multilayer perceptrons; the problem was that the learning process for such systems was still poorly understood in the 1970s. Because these multilayered systems are analogous to networks of neurons in the brain, we call them neural networks today. With further discoveries in ML, new specific kinds of neural networks and algorithms were created, collectively known as deep learning.
