
Backpropagation

The goal of the training algorithm is to find the weights and biases of the network that minimize a certain loss function, which depends on the prediction output and the true labels or values. To accomplish this, the gradients of the loss function with respect to the weights and biases are computed at the output, and the errors are propagated backward, layer by layer, toward the input layer. These propagated errors are, in turn, used to compute the gradients of all of the intermediate layers. This technique of computing gradients is called backpropagation. During each iteration of the process, the current error in the output prediction is propagated backward through the network to compute the gradients with respect to each layer's weights and biases.
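The backward pass described above can be sketched for a tiny one-hidden-layer network with sigmoid activations and a squared-error loss. This is a minimal illustration, not code from the text; all names, shapes, and the random inputs are assumptions chosen for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))       # a single input sample (illustrative)
y = np.array([[1.0]])             # its true label (illustrative)

W1, b1 = rng.normal(size=(4, 3)), np.zeros((4, 1))   # hidden layer
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))   # output layer

# Forward pass: keep the intermediate activations, since the
# backward pass reuses them to compute the gradients.
a1 = sigmoid(W1 @ x + b1)
y_hat = sigmoid(W2 @ a1 + b2)
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Backward pass: the error at the output is propagated backward,
# yielding the gradients for each layer's weights and biases.
delta2 = (y_hat - y) * y_hat * (1 - y_hat)   # error at the output layer
dW2, db2 = delta2 @ a1.T, delta2
delta1 = (W2.T @ delta2) * a1 * (1 - a1)     # error propagated to the hidden layer
dW1, db1 = delta1 @ x.T, delta1
```

The key point is that `delta1` is obtained from `delta2`: each layer's error is computed from the error of the layer after it, which is exactly the backward propagation the text describes.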

This approach is depicted in the following diagram:

The training algorithm, called gradient descent, utilizes backpropagation to update the weights and biases. That algorithm is explained next.
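As a brief preview, once backpropagation has produced the gradients, the update itself is a step in the direction opposite the gradient. A minimal sketch, with purely illustrative weight, gradient, and learning-rate values:

```python
import numpy as np

learning_rate = 0.1                # illustrative step size
W = np.array([[0.5, -0.2]])        # illustrative weights
dW = np.array([[0.1, 0.3]])        # gradient from backpropagation (illustrative)

# Move the weights opposite the gradient to reduce the loss.
W_new = W - learning_rate * dW
```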
