
Backpropagation

The goal of the training algorithm is to find the weights and biases of the network that minimize a certain loss function, which depends on the prediction output and the true labels or values. To accomplish this, the gradients of the loss function with respect to the weights and biases are computed at the output, and the resulting errors are propagated backward through the network. These propagated errors are, in turn, used to compute the gradients at each of the intermediate layers, all the way back to the input layer. This technique of computing gradients is called backpropagation. During each iteration of the process, the current error in the output prediction is propagated backward through the network to compute the gradients with respect to each layer's weights and biases.
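To make this concrete, here is a minimal sketch of one backpropagation pass through a small two-layer network. The sigmoid hidden layer, mean squared error loss, and all names and shapes (W1, b1, W2, b2, and so on) are illustrative assumptions for this sketch, not details taken from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target value each (assumed shapes).
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Randomly initialized weights and biases for a 3 -> 5 -> 1 network.
W1 = rng.normal(size=(3, 5)) * 0.1
b1 = np.zeros((1, 5))
W2 = rng.normal(size=(5, 1)) * 0.1
b2 = np.zeros((1, 1))

# Forward pass: compute activations layer by layer.
z1 = X @ W1 + b1          # pre-activations of the hidden layer
a1 = sigmoid(z1)          # hidden activations
y_hat = a1 @ W2 + b2      # linear output (prediction)

loss = np.mean((y_hat - y) ** 2)

# Backward pass: start from the error at the output and propagate it
# backward, computing the gradients of each layer's weights and biases.
n = X.shape[0]
d_yhat = 2.0 * (y_hat - y) / n            # dLoss/dy_hat at the output
dW2 = a1.T @ d_yhat                       # dLoss/dW2
db2 = d_yhat.sum(axis=0, keepdims=True)   # dLoss/db2
d_a1 = d_yhat @ W2.T                      # error propagated to the hidden layer
d_z1 = d_a1 * a1 * (1.0 - a1)             # through the sigmoid derivative
dW1 = X.T @ d_z1                          # dLoss/dW1
db1 = d_z1.sum(axis=0, keepdims=True)     # dLoss/db1
```

The gradients dW1, db1, dW2, and db2 produced by this backward pass are exactly what the gradient descent update, described next, consumes to adjust the parameters.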

This approach is depicted in the following diagram:

The training algorithm, called gradient descent, utilizes backpropagation to update the weights and biases. That algorithm will be explained next.
