
Backpropagation

The goal of the training algorithm is to find the weights and biases of the network that minimize a chosen loss function, which depends on the predicted outputs and the true labels or values. To accomplish this, the gradients of the loss function with respect to the weights and biases are computed at the output layer, and the resulting errors are propagated backward through the network. These propagated errors are, in turn, used to compute the gradients for each of the intermediate layers, all the way back to the input layer. This technique of computing gradients is called backpropagation. During each iteration of training, the current error in the output prediction is propagated backward through the network to compute the gradients with respect to each layer's weights and biases.
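To make this concrete, the following is a minimal NumPy sketch of backpropagation for a network with a single sigmoid hidden layer, a linear output, and a mean squared error loss. All of the names, shapes, and data here are illustrative assumptions, not the book's own code; the point is only to show the forward pass followed by the backward propagation of the error from the output layer to the input layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data and shapes (all values here are hypothetical).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))       # 4 samples, 3 input features
y = rng.normal(size=(4, 1))       # true values

W1 = rng.normal(size=(3, 5))      # input -> hidden weights
b1 = np.zeros((1, 5))             # hidden biases
W2 = rng.normal(size=(5, 1))      # hidden -> output weights
b2 = np.zeros((1, 1))             # output bias

# Forward pass: compute the prediction and the loss.
z1 = x @ W1 + b1
a1 = sigmoid(z1)
z2 = a1 @ W2 + b2
y_hat = z2                        # linear output
loss = np.mean((y_hat - y) ** 2)  # mean squared error

# Backward pass: propagate the output error toward the input,
# computing the gradient of the loss at each layer along the way.
n = x.shape[0]
dz2 = 2.0 * (y_hat - y) / n                  # dL/dz2 at the output
dW2 = a1.T @ dz2                             # dL/dW2
db2 = dz2.sum(axis=0, keepdims=True)         # dL/db2
da1 = dz2 @ W2.T                             # error propagated to hidden layer
dz1 = da1 * a1 * (1.0 - a1)                  # chain rule through the sigmoid
dW1 = x.T @ dz1                              # dL/dW1
db1 = dz1.sum(axis=0, keepdims=True)         # dL/db1
```

The gradients dW1, db1, dW2, and db2 are exactly what an optimizer such as gradient descent consumes to update the parameters, as described next.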

This approach is depicted in the following diagram:

The training algorithm, called gradient descent, utilizes backpropagation to update the weights and biases. That algorithm is explained next.
