Python Deep Learning
Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca
Training deep networks
As we mentioned in Chapter 2, Neural Networks, we can use different algorithms to train a neural network, but in practice we almost always use Stochastic Gradient Descent (SGD) combined with backpropagation, both of which were introduced in that chapter. In a way, this combination has withstood the test of time, outliving other approaches, such as DBNs. With that said, gradient descent has some extensions worth discussing.
In the following section, we'll introduce momentum, which is an effective improvement over vanilla gradient descent. You may recall the weight update rule that we introduced in Chapter 2, Neural Networks:
$w \rightarrow w - \lambda \nabla J(w)$, where λ is the learning rate and ∇J(w) is the gradient of the cost function J with respect to the weights.
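For example, with purely illustrative numbers, if the current weight is w = 2, the learning rate is λ = 0.1, and the gradient at that point is ∇J(w) = 4, the updated weight becomes 2 − 0.1 × 4 = 1.6.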
To include momentum, we'll add another term to this update rule.
- First, we'll calculate the weight update value: $\Delta w_t = \mu \Delta w_{t-1} - \lambda \nabla J(w)$
- Then, we'll update the weight: $w \rightarrow w + \Delta w_t$
From the preceding equations, we can see that the first component, $\mu \Delta w_{t-1}$, is the momentum. Here, $\Delta w_{t-1}$ represents the previous value of the weight update, and μ is the coefficient that determines how much the new value depends on the previous ones. To explain this, let's look at the following diagram, where you can see a comparison between vanilla SGD and SGD with momentum. The concentric ellipses represent contour lines of the error function's surface, where the innermost ellipse is the minimum and the outermost is the maximum. Think of the loss function surface as the surface of a hill. Now, imagine that we are holding a ball at the top of the hill (maximum). If we drop the ball, thanks to Earth's gravity, it will start rolling toward the bottom of the hill (minimum). The farther it travels, the more its speed will increase. In other words, it will gain momentum (hence the name of the optimization). As a result, it will reach the bottom of the hill faster. If, for some reason, gravity didn't exist, the ball would only roll at its initial speed and it would reach the bottom more slowly:
[Figure: vanilla SGD versus SGD with momentum over the error function's contour lines]
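To make the two update rules concrete, here is a minimal Python sketch (not taken from the book) that applies vanilla SGD and SGD with momentum to the toy loss J(w) = w², whose gradient is 2w. The starting weight, learning rate, momentum coefficient, and number of steps are all illustrative assumptions:

```python
# A minimal sketch comparing vanilla SGD with SGD plus momentum on the
# toy loss J(w) = w**2, whose gradient is 2*w. All hyperparameter values
# below are illustrative choices, not values from the book.

def vanilla_sgd(w, lr=0.01, steps=50):
    for _ in range(steps):
        grad = 2 * w             # gradient of J(w) = w**2
        w = w - lr * grad        # w -> w - lambda * gradient
    return w

def momentum_sgd(w, lr=0.01, mu=0.9, steps=50):
    delta_w = 0.0                # previous weight update, initially zero
    for _ in range(steps):
        grad = 2 * w
        delta_w = mu * delta_w - lr * grad   # accumulate momentum
        w = w + delta_w                      # apply the update
    return w

print(vanilla_sgd(2.0))    # ~0.73: still relatively far from the minimum at 0
print(momentum_sgd(2.0))   # ~0.15: noticeably closer after the same 50 steps
```

With the same small learning rate and the same number of steps, the momentum version ends up closer to the minimum, because each update accumulates a fraction of the previous ones, much like the ball gaining speed as it rolls downhill.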
In practice, you may encounter other gradient descent optimizations, such as Nesterov momentum, ADADELTA (https://arxiv.org/abs/1212.5701), RMSProp (https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf), and Adam (https://arxiv.org/abs/1412.6980). Some of these will be discussed in later chapters of the book.
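If you happen to be working with a framework such as PyTorch (an assumption about your toolchain, not something this section prescribes), these optimizations are available as ready-made optimizers. The following sketch only shows how they are instantiated; the parameter tensor and hyperparameter values are illustrative:

```python
import torch

# A single illustrative parameter tensor; in a real model you would pass
# model.parameters() instead.
params = [torch.zeros(10, requires_grad=True)]

sgd = torch.optim.SGD(params, lr=0.01)                                    # vanilla SGD
momentum = torch.optim.SGD(params, lr=0.01, momentum=0.9)                 # SGD with momentum
nesterov = torch.optim.SGD(params, lr=0.01, momentum=0.9, nesterov=True)  # Nesterov momentum
adadelta = torch.optim.Adadelta(params)                                   # ADADELTA
rmsprop = torch.optim.RMSprop(params, lr=0.001)                           # RMSProp
adam = torch.optim.Adam(params, lr=0.001)                                 # Adam
```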