
  • Python Deep Learning
  • Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca

Training neural networks 

We have seen how neural networks can map inputs onto determined outputs, depending on fixed weights. Once the architecture of the neural network has been defined (a feedforward network, the number of hidden layers, the number of neurons per layer, and the activation functions), we'll need to set the weights, which, in turn, will determine the internal state of each neuron in the network. First, we'll see how to do that for a one-layer network using an optimization algorithm called gradient descent, and then we'll extend it to a deep feedforward network with the help of backpropagation.

The general concept we need to understand is the following:

Every neural network is an approximation of a function, so it will not exactly equal the desired function, but will instead differ from it by some value called the error. During training, the aim is to minimize this error. Since the error depends on the weights of the network, we want to minimize it with respect to the weights. The error function is a function of many weights and, therefore, a function of many variables. Mathematically, the graph of this function defines a hypersurface over the weight space, and to find a minimum on this surface, we pick a point and then follow a curve downhill, in the direction of steepest descent.
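To make this concrete, here is a minimal sketch of gradient descent for a one-layer network, written in plain NumPy. The choice of a sigmoid activation, a mean squared error, and the logical AND function as training data are illustrative assumptions, not the book's exact setup; the `train` function and its parameters are hypothetical names:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def train(X, y, learning_rate=0.5, epochs=5000):
    """Gradient descent on a one-layer network with a sigmoid output
    and mean squared error. X: (n_samples, n_features), y: (n_samples,)."""
    weights = np.zeros(X.shape[1])
    for _ in range(epochs):
        output = sigmoid(X @ weights)       # forward pass
        error = output - y                  # difference from the target
        # Gradient of the error with respect to the weights (chain rule):
        # dE/dw = (output - y) * sigmoid'(z) * x
        gradient = X.T @ (error * output * (1 - output)) / len(y)
        weights -= learning_rate * gradient  # step against the gradient
    return weights

# Usage example: learn the logical AND function
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])  # last column acts as a bias input
y = np.array([0, 0, 0, 1])
w = train(X, y)
print(np.round(sigmoid(X @ w)))  # expected: [0. 0. 0. 1.]

Each iteration computes how the error changes as each weight changes, then nudges every weight a small step in the opposite direction; repeating this walks the weights downhill on the error surface.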

We should note that a neural network and its training algorithm are two separate things. This means we could adjust the weights of the network in some way other than gradient descent and backpropagation, but this combination is the most popular and efficient way to do so and is, for all practical purposes, the only one currently used.