- Hands-On Natural Language Processing with Python
- Rajesh Arumugam, Rajalingappaa Shanmugamani
Stochastic gradient descent
Stochastic gradient descent is a variation of the gradient descent algorithm used to train deep learning models. The basic idea is that instead of computing the gradient over the whole dataset, only a subset is used for each update. Theoretically, a single sample is enough to train the network, but in practice a fixed-size subset of the input data, called a batch, is usually used. This approach results in faster training compared to vanilla gradient descent.
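The idea can be sketched with a minimal example: a toy linear model (the problem, data, and hyperparameters below are illustrative assumptions, not from the book) trained by shuffling the data each epoch and updating the parameters from the gradient of one mini-batch at a time, rather than the full dataset.

```python
import numpy as np

# Hypothetical toy problem: fit y = 2x + 1 with mini-batch SGD.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0

w, b = 0.0, 0.0     # model parameters
lr = 0.1            # learning rate
batch_size = 16     # size of the subset used per update

for epoch in range(100):
    # Shuffle, then iterate over mini-batches instead of the full dataset.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        err = (w * xb + b) - yb
        # Gradients of the mean squared error on this batch only.
        grad_w = 2.0 * np.mean(err * xb)
        grad_b = 2.0 * np.mean(err)
        w -= lr * grad_w
        b -= lr * grad_b

print(w, b)  # should approach 2.0 and 1.0
```

Each parameter update costs only one batch's worth of computation, which is why many updates per pass over the data make training faster in practice than one full-dataset gradient step.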