
Training neural networks with backpropagation

Calculating the activation of a neuron, the forward part, or what we call feed-forward propagation, is quite straightforward to process. The complexity we encounter now is propagating the errors back through the network. When we train the network, we start at the output layer and determine the total error, just as we did with a single perceptron, but now we need to sum up the errors across all output neurons. Then we use this value to backpropagate the error through the network, updating each weight based on its contribution to the total error. Understanding the contribution of a single weight in a network with thousands or millions of weights would be quite complicated, were it not for the help of differentiation and the chain rule. Before we get to the complicated math, we first need to discuss the Cost function and how we calculate errors in the next section.
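To make the idea concrete before the full math, here is a minimal sketch of the forward pass and a single backpropagation update for one sigmoid neuron. The weight, bias, learning rate, and squared-error cost used here are illustrative assumptions, not the book's later network code; the point is only to show the chain rule decomposing the error's gradient into factors we can compute at each step:

```python
import math

def sigmoid(z):
    # Activation function: squashes any input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, b, x):
    # Feed-forward pass for a single neuron: weighted input, then activation
    return sigmoid(w * x + b)

def train_step(w, b, x, target, lr=0.5):
    # Forward pass
    y = forward(w, b, x)
    # Cost: squared error E = 0.5 * (y - target)^2
    # Chain rule: dE/dw = dE/dy * dy/dz * dz/dw
    #           = (y - target) * y * (1 - y) * x
    delta = (y - target) * y * (1.0 - y)
    # Update each parameter in proportion to its contribution to the error
    w -= lr * delta * x
    b -= lr * delta
    return w, b

# Illustrative training loop: push the neuron's output toward a target of 1.0
w, b = 0.5, 0.0
x, target = 1.0, 1.0
for _ in range(500):
    w, b = train_step(w, b, x, target)

print(forward(w, b, x))  # output moves close to the 1.0 target
```

In a multi-layer network the same chain-rule product simply grows longer, with each hidden layer contributing another factor, which is why frameworks automate this bookkeeping for us.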

While the math of backpropagation is complicated and may be intimidating, at some point you will want or need to understand it well. For the purposes of this book, however, you can skip this section and revisit it as needed. All the networks we develop in later chapters will handle backpropagation for us automatically. Of course, you can't run away from the math entirely; it is everywhere in deep learning.