- Hands-On Natural Language Processing with Python
- Rajesh Arumugam Rajalingappaa Shanmugamani
Backpropagation
The goal of the training algorithm is to find the weights and biases of the network that minimize a loss function, which depends on the predicted output and the true labels or values. To accomplish this, the gradients of the loss function with respect to the weights and biases are computed at the output layer, and the resulting errors are propagated backward, layer by layer, toward the input. These propagated errors are, in turn, used to compute the gradients of all of the intermediate layers. This technique of computing gradients is called backpropagation. During each iteration of training, the current error in the output prediction is propagated backward through the network to compute the gradients with respect to each layer's weights and biases.
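The chain of computations described above can be sketched for a small one-hidden-layer network. This is an illustrative example, not code from the book: the network shape, the `tanh` activation, and the mean-squared-error loss are all our own assumptions, chosen to keep the forward and backward passes short.

```python
import numpy as np

# Illustrative sketch of backpropagation (names and shapes are assumptions,
# not taken from the book): a one-hidden-layer network with a tanh
# activation, trained against a mean-squared-error loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))      # 4 samples, 3 input features
y = rng.normal(size=(4, 1))     # true target values

W1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)   # input -> hidden
W2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)   # hidden -> output

# Forward pass: compute the prediction and the loss.
z1 = X @ W1 + b1
h = np.tanh(z1)                  # hidden-layer activation
pred = h @ W2 + b2               # output prediction
loss = np.mean((pred - y) ** 2)  # loss depends on prediction and true values

# Backward pass: start from the error at the output and propagate it
# back through each layer to obtain the gradients.
n = X.shape[0]
d_pred = 2 * (pred - y) / n      # dLoss/dPred at the output
dW2 = h.T @ d_pred               # gradient w.r.t. output-layer weights
db2 = d_pred.sum(axis=0)         # gradient w.r.t. output-layer biases
d_h = d_pred @ W2.T              # error propagated back to the hidden layer
d_z1 = d_h * (1 - h ** 2)        # through the tanh derivative
dW1 = X.T @ d_z1                 # gradient w.r.t. hidden-layer weights
db1 = d_z1.sum(axis=0)           # gradient w.r.t. hidden-layer biases
```

The gradients `dW1`, `db1`, `dW2`, and `db2` are exactly what a training algorithm such as gradient descent consumes when updating the parameters.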
This approach is depicted in the following diagram:

The training algorithm, called gradient descent, utilizes backpropagation to update the weights and biases. That algorithm will be explained next.