Neural Network Programming with TensorFlow
Manpreet Singh Ghotra, Rajdeep Dua
Understanding backpropagation
When a feedforward neural network is used to accept an input x and produce an output ŷ, information flows forward through the network. The input x provides the initial information, which then propagates up through the hidden units at each layer and finally produces ŷ. This is called forward propagation. During training, forward propagation continues onward until it produces a scalar cost J(θ). The backpropagation algorithm, often called backprop, allows the information from the cost to then flow backward through the network in order to compute the gradient.
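The two passes can be sketched concretely. The following is a minimal NumPy illustration (not from the book) of a one-hidden-layer network: forward propagation carries x through the hidden units to ŷ and a scalar cost J, and backprop then applies the chain rule layer by layer to recover the gradient of J with respect to every parameter. The shapes, tanh activation, and squared-error cost are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3,))            # input x
y = np.array([1.0])                  # training target
W1 = rng.normal(size=(4, 3)); b1 = np.zeros(4)   # hidden-layer parameters
W2 = rng.normal(size=(1, 4)); b2 = np.zeros(1)   # output-layer parameters

# --- forward propagation: x flows up to the hidden units, then to y_hat ---
z1 = W1 @ x + b1
h = np.tanh(z1)                      # hidden units
y_hat = W2 @ h + b2                  # network output
J = 0.5 * np.sum((y_hat - y) ** 2)   # scalar cost J(theta)

# --- backpropagation: cost information flows backward via the chain rule ---
dy = y_hat - y                       # dJ/dy_hat
dW2 = np.outer(dy, h); db2 = dy      # gradients of the output layer
dh = W2.T @ dy                       # propagate back into the hidden units
dz1 = dh * (1 - h ** 2)              # tanh'(z1) = 1 - tanh(z1)^2
dW1 = np.outer(dz1, x); db1 = dz1    # gradients of the hidden layer
```

Note that the backward pass reuses the intermediate values (h, y_hat) computed during the forward pass, which is what makes it inexpensive.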
Deriving an analytical expression for the gradient is straightforward, but naively evaluating such an expression numerically can be computationally expensive. The backpropagation algorithm computes the same gradient using a simple and inexpensive procedure.
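The cost difference is easy to demonstrate. In this hypothetical sketch (the single-layer cost function and shapes are assumptions for illustration), the chain-rule gradient takes one backward pass, whereas a central finite-difference estimate needs two full forward passes per parameter, yet both agree to high precision:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(3,))
y = rng.normal(size=(2,))
W = rng.normal(size=(2, 3))

def cost(W):
    # J(W) = 0.5 * ||tanh(W x) - y||^2, a scalar cost
    return 0.5 * np.sum((np.tanh(W @ x) - y) ** 2)

# Analytic gradient via the chain rule: one pass over the network.
h = np.tanh(W @ x)
grad_analytic = np.outer((h - y) * (1 - h ** 2), x)

# Numerical gradient: 2 * W.size forward passes (expensive for big models).
eps = 1e-6
grad_numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        grad_numeric[i, j] = (cost(Wp) - cost(Wm)) / (2 * eps)

max_diff = np.max(np.abs(grad_analytic - grad_numeric))
```

For a network with millions of parameters, the finite-difference route would require millions of forward passes per gradient, which is why backprop's single backward pass is the practical choice; the numerical estimate remains useful as a correctness check ("gradient checking").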