
Training neural networks 

We have seen how neural networks map inputs to outputs determined by their weights. Once the architecture of the neural network has been defined (a feed-forward network, the number of hidden layers, the number of neurons per layer, and the activation function), we need to set the weights, which in turn define the internal state of each neuron in the network. First, we'll see how to do that for a one-layer network using an optimization algorithm called gradient descent, and then we'll extend it to a deep feed-forward network with the help of backpropagation.
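To make the role of the weights concrete, here is a minimal sketch of a one-layer network's forward pass: a weighted sum of the inputs plus a bias, passed through a sigmoid activation. The function names and the example values are illustrative, not taken from the text.

```python
import math

def sigmoid(x):
    # Logistic activation: squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def one_layer_forward(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(s)

# The same inputs with different weights give different outputs:
# training is the process of choosing the weights.
print(one_layer_forward([1.0, 0.5], [0.4, -0.2], 0.1))
```

Everything before the activation is linear in the weights; training adjusts those weights so the output matches the desired target.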

The general concept we need to understand is the following:

Every neural network is an approximation of a function, so the network will not equal the desired function exactly but will differ from it by some value called the error. During training, the aim is to minimize this error. Since the error depends on the weights of the network, we want to minimize it with respect to the weights. The error function is a function of many weights and, therefore, a function of many variables. Mathematically, the graph of this function is a hypersurface, and to find a minimum on this surface, we pick a point and then follow a curve downhill in the direction of the minimum.
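The idea of following the error surface downhill can be sketched with a single weight, where the "surface" is just a parabola. This toy example (the values and the squared-error form are assumptions for illustration) repeatedly steps against the gradient of the error:

```python
# Gradient descent on a one-parameter squared error:
# error(w) = (target - w * x)^2, which is minimized at w = target / x.
x, target = 2.0, 6.0   # one hypothetical training example
w = 0.0                # initial weight
lr = 0.05              # learning rate (step size)

for _ in range(200):
    error_grad = -2 * x * (target - w * x)   # d(error)/dw
    w -= lr * error_grad                     # step against the gradient

print(round(w, 4))  # converges toward 3.0, where the error is zero
```

With many weights, the same update is applied to each one using the partial derivative of the error with respect to that weight; backpropagation is simply an efficient way to compute all of those partial derivatives in a deep network.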

We should note that a neural network and its training procedure are two separate things. This means we could adjust the weights of the network in some way other than gradient descent and backpropagation, but this combination is the most popular and efficient approach and is, for all practical purposes, the only one currently used in practice.