- Hands-On Natural Language Processing with Python
- Rajesh Arumugam, Rajalingappaa Shanmugamani
Gradient descent
Gradient descent is an optimization technique that uses the gradients computed by backpropagation to update the weights and biases so as to minimize the loss. As shown in the following diagram, the cost (or loss) function is minimized by adjusting the weights along the slope, or gradient, of the function:

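The weight update described above can be made concrete in a few lines of NumPy. The following is only an illustrative sketch, not code from this book: it assumes a single linear neuron trained with a mean-squared-error loss, and every name in it (gradient_descent_step, X, y, w, b, lr) is hypothetical.

```python
import numpy as np

# Minimal sketch of one full-batch gradient descent step for a single linear
# neuron with a mean-squared-error loss; all names here are illustrative.
def gradient_descent_step(X, y, w, b, lr=0.1):
    y_pred = X @ w + b                # forward pass
    error = y_pred - y                # prediction error per sample
    grad_w = X.T @ error / len(y)     # dL/dw for L = mean((y_pred - y)**2) / 2
    grad_b = error.mean()             # dL/db
    return w - lr * grad_w, b - lr * grad_b   # move against the gradient

# Toy usage: recover y = 2*x1 - 3*x2 + 1 from data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -3.0]) + 1.0
w, b = np.zeros(2), 0.0
for _ in range(500):
    w, b = gradient_descent_step(X, y, w, b)
print(w, b)   # approaches [2. -3.] and 1.0
```

Each step computes the gradient of the loss over the whole training set and moves the weights a small distance (the learning rate lr) against that gradient, which is the downhill motion the diagram illustrates.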
For a simple perceptron, this cost function is convex with respect to the weights, so gradient descent can follow the slope straight to its single minimum. But for deep neural networks, the cost function is usually high-dimensional and non-convex. Because gradient descent has to traverse paths along all of these dimensions, it may be difficult to arrive at the global minimum in an acceptable time. To avoid this problem and train faster, neural networks normally employ stochastic gradient descent, which is explained next.
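Before moving on, the cost difference that motivates the stochastic variant can be sketched as follows. This is an assumption-laden illustration rather than the book's code: a full-batch step touches every training example, while a stochastic step estimates the gradient from a single randomly chosen example, trading a noisier direction for a much cheaper update.

```python
import numpy as np

def full_batch_step(X, y, w, lr=0.1):
    # Gradient averaged over every training example: each step costs O(n_samples).
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def single_sample_step(X, y, w, lr=0.1, rng=None):
    # Gradient estimated from one randomly chosen example: constant cost per
    # step, at the price of a noisier update direction.
    rng = rng or np.random.default_rng()
    i = rng.integers(len(y))
    error = X[i] @ w - y[i]
    return w - lr * error * X[i]
```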