- Hands-On Deep Learning for Games
- Micheal Lanham
Training neural networks with backpropagation
Calculating the activation of a neuron, the forward pass, or what we call feed-forward propagation, is quite straightforward. The complexity comes in training, when we propagate errors back through the network. We start at the output layer and determine the total error, just as we did with a single perceptron, except that now we sum the errors across the entire output layer. We then use this value to backpropagate the error through the network, updating each weight according to its contribution to the total error. Determining the contribution of a single weight in a network with thousands or millions of weights would be intractable were it not for differentiation and the chain rule. Before we get to the more complicated math, we first need to discuss the cost function and how we calculate errors, which we cover in the next section.
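To make the forward and backward passes concrete, here is a minimal NumPy sketch of both steps for a tiny 2-2-1 network with sigmoid activations and a squared-error cost. The network shape, learning rate, and variable names are illustrative assumptions, not the book's own code; the chain-rule error terms (`delta2`, `delta1`) show how each weight's contribution to the total error is computed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(a):
    # Derivative of the sigmoid, expressed via its output a = sigmoid(z).
    return a * (1.0 - a)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))  # input -> hidden weights (assumed shape)
W2 = rng.normal(size=(1, 2))  # hidden -> output weights

x = np.array([0.5, -0.2])     # one illustrative training input
t = np.array([1.0])           # its target output
lr = 0.5                      # learning rate (assumed value)

for _ in range(1000):
    # Forward pass: compute each layer's activation.
    h = sigmoid(W1 @ x)       # hidden activations
    y = sigmoid(W2 @ h)       # output activation

    # Total error at the output layer (squared-error cost).
    error = 0.5 * np.sum((t - y) ** 2)

    # Backward pass: the chain rule gives each layer's error term.
    delta2 = (y - t) * sigmoid_prime(y)          # output-layer error
    delta1 = (W2.T @ delta2) * sigmoid_prime(h)  # propagated to hidden layer

    # Gradient-descent update: each weight moves by its contribution.
    W2 -= lr * np.outer(delta2, h)
    W1 -= lr * np.outer(delta1, x)

print(float(error))
```

Notice that the backward pass reuses the activations computed in the forward pass, which is why feed-forward and backpropagation are always run as a pair during training.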