Backpropagation
The goal of the training algorithm is to find the weights and biases of the network that minimize a certain loss function, which depends on the prediction output and the true labels or values. To accomplish this, the gradients of the loss function with respect to the weights and biases are computed at the output layer, and the resulting errors are propagated backward, layer by layer, toward the input. These propagated errors are, in turn, used to compute the gradients for all of the intermediate layers. This technique of computing gradients is called backpropagation. During each iteration of the process, the current error in the output prediction is propagated backward through the network to compute the gradients with respect to each layer's weights and biases.
This approach is depicted in the following diagram:

[Diagram: the output error propagated backward through the network's layers]
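To make this concrete, the following is a minimal NumPy sketch of one backpropagation pass through a network with a single hidden layer. The layer sizes, the sigmoid activation, and the squared-error loss are illustrative assumptions for this sketch, not a setup taken from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target value each (illustrative).
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Weights and biases for the input->hidden and hidden->output layers.
W1, b1 = rng.normal(size=(3, 5)), np.zeros((1, 5))
W2, b2 = rng.normal(size=(5, 1)), np.zeros((1, 1))

# Forward pass: compute the prediction and the loss.
h = sigmoid(X @ W1 + b1)             # hidden-layer activations
y_hat = h @ W2 + b2                  # linear output layer
loss = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass: propagate the output error toward the input,
# computing the gradient of the loss w.r.t. each layer's parameters.
d_out = (y_hat - y) / len(X)         # dL/d(y_hat) at the output
dW2 = h.T @ d_out                    # gradient for output weights
db2 = d_out.sum(axis=0, keepdims=True)
d_h = (d_out @ W2.T) * h * (1 - h)   # error at hidden layer (sigmoid derivative)
dW1 = X.T @ d_h                      # gradient for hidden weights
db1 = d_h.sum(axis=0, keepdims=True)
```

Here the output error `d_out` is pushed back through `W2` to obtain the hidden-layer error `d_h`, mirroring the backward flow described above; a gradient descent step would then subtract a small multiple of each gradient from the corresponding parameter, which is the update rule discussed next.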
The training algorithm, called gradient descent, utilizes backpropagation to update the weights and biases. That algorithm will be explained next.