
Understanding backpropagation

When a feedforward neural network is used to accept an input x and produce an output ŷ, information flows forward through the network. The input x provides the initial information that then propagates up through the hidden units at each layer and finally produces ŷ. This is called forward propagation. During training, forward propagation continues onward until it produces a scalar cost J(θ). The backpropagation algorithm, often called backprop, allows the information from the cost to then flow backward through the network in order to compute the gradient.
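To make forward propagation concrete, here is a minimal sketch of a one-hidden-layer network in NumPy. The parameter names (W1, b1, W2, b2), the tanh activation, and the squared-error cost are illustrative assumptions, not anything prescribed by the text:

```python
import numpy as np

def forward(x, params):
    """Forward propagation: information flows from the input x,
    through the hidden layer, up to the prediction y_hat."""
    z1 = params["W1"] @ x + params["b1"]      # hidden pre-activation
    h1 = np.tanh(z1)                          # hidden activation
    y_hat = params["W2"] @ h1 + params["b2"]  # network output
    return y_hat, (x, z1, h1)                 # cache intermediates for backprop

def cost(y_hat, y):
    """Scalar cost J(theta): half squared error between prediction and target."""
    return 0.5 * np.sum((y_hat - y) ** 2)
```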

Computing an analytical expression for the gradient is straightforward, but numerically evaluating such an expression can be computationally expensive. The backpropagation algorithm does so using a simple and inexpensive procedure.
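To see why the procedure is inexpensive, consider the backward pass for the hypothetical network sketched above: one application of the chain rule yields the gradient of J with respect to every parameter in a single sweep, whereas a finite-difference estimate would need a separate forward pass per parameter:

```python
def backward(y_hat, y, cache, params):
    """Backpropagation: cost information flows backward through the
    network, yielding dJ/dtheta for every parameter in one sweep."""
    x, z1, h1 = cache
    dy = y_hat - y                        # dJ/dy_hat for squared-error cost
    dW2 = np.outer(dy, h1)                # gradient w.r.t. output weights
    db2 = dy                              # gradient w.r.t. output bias
    dh1 = params["W2"].T @ dy             # chain rule back into the hidden layer
    dz1 = dh1 * (1.0 - np.tanh(z1) ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(dz1, x)                # gradient w.r.t. hidden weights
    db1 = dz1                             # gradient w.r.t. hidden bias
    return {"W1": dW1, "b1": db1, "W2": dW2, "b2": db2}
```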

Backpropagation refers only to the method for computing the gradient; another algorithm, such as stochastic gradient descent, is then used to perform learning with that gradient.
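A sketch of how the two roles fit together, continuing the hypothetical network above: backprop supplies the gradient, and stochastic gradient descent uses it to update the parameters. The learning rate of 0.01, the layer sizes, and the random initialization are arbitrary illustrative choices:

```python
def sgd_step(params, x, y, lr=0.01):
    """One learning step: backprop computes the gradient,
    gradient descent applies it."""
    y_hat, cache = forward(x, params)
    grads = backward(y_hat, y, cache, params)
    for name in params:
        params[name] -= lr * grads[name]  # descend along the gradient
    return cost(y_hat, y)

rng = np.random.default_rng(0)
params = {"W1": rng.normal(size=(4, 3)) * 0.1, "b1": np.zeros(4),
          "W2": rng.normal(size=(1, 4)) * 0.1, "b2": np.zeros(1)}
x, y = rng.normal(size=3), np.array([1.0])
for _ in range(100):
    J = sgd_step(params, x, y)  # the cost J shrinks as training proceeds
```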