- Hands-On Natural Language Processing with Python
- Rajesh Arumugam Rajalingappaa Shanmugamani
Backpropagation
The goal of the training algorithm is to find the weights and biases of the network that minimize a certain loss function, which depends on the prediction output and the true labels or values. To accomplish this, the gradients of the loss function with respect to the weights and biases are computed at the output layer, and the resulting errors are propagated backward through the network. These propagated errors are, in turn, used to compute the gradients of all of the intermediate layers, up to the input layer. This technique of computing gradients is called backpropagation. During each iteration of the process, the current error in the output prediction is propagated backward through the network, yielding gradients with respect to each layer's weights and biases.
This approach is depicted in the following diagram:

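The forward-then-backward computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's own code: the two-layer architecture, the sigmoid activation, and the mean-squared-error loss are all assumptions chosen to keep the example small.

```python
import numpy as np

# A minimal sketch of backpropagation for a tiny two-layer network.
# Layer sizes, the sigmoid activation, and the MSE loss are assumptions.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples, 3 features, 1 target value each
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Weights and biases for the hidden and output layers
W1, b1 = rng.normal(size=(3, 5)), np.zeros((1, 5))
W2, b2 = rng.normal(size=(5, 1)), np.zeros((1, 1))

# Forward pass: compute the prediction and the loss
h = sigmoid(X @ W1 + b1)          # hidden activations
y_hat = h @ W2 + b2               # linear output
loss = np.mean((y_hat - y) ** 2)  # mean-squared-error loss

# Backward pass: propagate the output error toward the input,
# computing gradients layer by layer via the chain rule
d_yhat = 2 * (y_hat - y) / len(X)        # dL/dy_hat
dW2 = h.T @ d_yhat                       # gradient at the output layer
db2 = d_yhat.sum(axis=0, keepdims=True)
d_h = d_yhat @ W2.T                      # error propagated backward
d_z1 = d_h * h * (1 - h)                 # through the sigmoid derivative
dW1 = X.T @ d_z1                         # gradient at the hidden layer
db1 = d_z1.sum(axis=0, keepdims=True)

# Each gradient matches the shape of its parameter, ready for an update
assert dW1.shape == W1.shape and dW2.shape == W2.shape
```

Note that each gradient has exactly the shape of the parameter it corresponds to, which is what allows the update step of the training algorithm to subtract a scaled gradient from each weight matrix and bias vector.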
The training algorithm, called gradient descent, utilizes backpropagation to update the weights and biases. This algorithm will be explained next.