- Hands-On Natural Language Processing with Python
- Rajesh Arumugam, Rajalingappaa Shanmugamani
Gradient descent
Gradient descent is an optimization technique that uses the gradients computed by backpropagation to update the weights and biases, with the goal of minimizing the loss. As shown in the following diagram, the cost (or loss) function is minimized by adjusting the weights along the slope, or gradient, of the function:
[Figure: a bowl-shaped cost function, with the weights moving down the gradient towards the minimum]
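To make the update rule w ← w − lr · dL/dw concrete, here is a minimal sketch of gradient descent on a one-dimensional toy loss; the loss function, starting point, learning rate, and step count are illustrative assumptions, not taken from the book:

```python
# A minimal sketch of the gradient descent update rule, w <- w - lr * dL/dw,
# on the toy loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# The loss, starting point, and hyperparameters are illustrative assumptions.

def gradient_descent(lr=0.1, steps=50):
    w = 0.0                      # initial weight
    for _ in range(steps):
        grad = 2 * (w - 3)       # dL/dw, computed analytically here
        w -= lr * grad           # step against the gradient, i.e. downhill
    return w

print(gradient_descent())        # approaches the minimum at w = 3.0
```

Each step moves the weight a fraction of the way down the slope, set by the learning rate, so the iterates steadily approach the minimum of the loss.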
For a simple perceptron, this cost function is convex with respect to the weights, so following the gradient leads straight to the global minimum. But for deep neural networks, the cost function is usually high-dimensional and non-convex. As gradient descent has to traverse paths along all of the different dimensions, it may be difficult to arrive at the global minimum in an acceptable time. To avoid this problem and to train faster, neural networks normally employ stochastic gradient descent, which is explained next.
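One reason plain gradient descent is slow is that every update averages the gradient over the entire training set. The following sketch of full-batch gradient descent on a linear regression with a mean squared error loss makes this cost explicit; the synthetic data, learning rate, and iteration count are assumptions, and linear regression merely stands in for a network:

```python
import numpy as np

# Full-batch gradient descent for linear regression with an MSE loss.
# Every update requires a pass over ALL samples, which is what
# stochastic gradient descent avoids. Data and hyperparameters are assumed.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                # 1,000 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])           # weights used to generate y
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
lr = 0.1
for _ in range(100):
    error = X @ w - y                         # predictions minus targets
    grad = 2 * X.T @ error / len(y)           # MSE gradient over the full batch
    w -= lr * grad                            # one expensive full-batch step

print(w)                                      # close to true_w
```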