
Understanding backpropagation

When a feedforward neural network is used to accept an input x and produce an output ŷ, information flows forward through the network. The input x provides the initial information that then propagates up to the hidden units at each layer and produces ŷ. This is called forward propagation. During training, forward propagation continues onward until it produces a scalar cost J(θ). The backpropagation algorithm, often called backprop, allows the information from the cost to then flow backward through the network in order to compute the gradient.
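
As a concrete illustration, the following NumPy sketch runs forward propagation through a one-hidden-layer network, from the input x up through the hidden units to ŷ and onward to a scalar cost J(θ). The layer sizes, the tanh activation, and the squared-error cost are illustrative assumptions, not details fixed by the text:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters theta = (W1, b1, W2, b2):
# input dim 3, hidden dim 4, output dim 1 (assumed sizes).
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)

def forward(x, y):
    # Forward propagation: x flows up through the hidden units
    # to y_hat, then onward to the scalar cost J(theta).
    h = np.tanh(W1 @ x + b1)             # hidden units
    y_hat = W2 @ h + b2                  # network output y_hat
    J = 0.5 * np.sum((y_hat - y) ** 2)   # scalar cost (assumed squared error)
    return h, y_hat, J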

Computing an analytical expression for the gradient is straightforward, but numerically evaluating such an expression can be computationally expensive. The backpropagation algorithm performs this evaluation using a simple and inexpensive procedure.
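
Continuing the sketch above, backpropagation applies the chain rule operation by operation, flowing information from the cost backward through the same computations to obtain the gradient of J(θ) with respect to every parameter (again under the illustrative architecture and cost assumed earlier):

def backward(x, y):
    # Backpropagation: reuse the values from the forward pass, then
    # propagate dJ/d(.) backward through each operation via the chain rule.
    h, y_hat, J = forward(x, y)
    dy_hat = y_hat - y              # dJ/dy_hat for the squared-error cost
    dW2 = np.outer(dy_hat, h)       # dJ/dW2
    db2 = dy_hat                    # dJ/db2
    dh = W2.T @ dy_hat              # gradient flowing back into the hidden units
    da = dh * (1 - h ** 2)          # through tanh: d tanh(a)/da = 1 - tanh(a)^2
    dW1 = np.outer(da, x)           # dJ/dW1
    db1 = da                        # dJ/db1
    return J, (dW1, db1, dW2, db2)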

Backpropagation refers only to the method for computing the gradient, while another algorithm, such as stochastic gradient descent, uses that gradient to perform the actual learning.
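
To make that division of labor explicit, a stochastic gradient descent step would consume the gradient that backprop produced; the update rule and learning rate below are standard textbook choices, not something prescribed by this section:

def sgd_step(x, y, lr=0.1):
    # One SGD update: backprop supplies the gradient; SGD performs the learning.
    global W1, b1, W2, b2            # update the illustrative parameters in place
    J, (dW1, db1, dW2, db2) = backward(x, y)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return J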