- Hands-On Meta Learning with Python
- Sudharsan Ravichandiran
Learning the optimizer
In this method, we try to learn the optimizer itself. How do we generally optimize a neural network? We optimize it by training on a large dataset and minimizing the loss using gradient descent. But in the few-shot learning setting, gradient descent fails because we have only a small dataset. So, in this case, we learn the optimizer itself. We use two networks: a base network that actually tries to learn, and a meta network that optimizes the base network. We will explore exactly how this works in the upcoming sections.
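The two-network idea above can be illustrated with a deliberately tiny sketch. Here the "base network" is reduced to a single parameter minimizing a quadratic loss, and the "meta network" is reduced to one learnable meta-parameter, `alpha`, that drives the base update rule. The meta-parameter is trained on the loss after an unrolled sequence of base updates, using finite differences for the meta-gradient. All function names (`unrolled_loss`, `meta_train`) are illustrative inventions, not code from this book; the full method covered in the upcoming sections uses real networks for both roles.

```python
import random

def unrolled_loss(alpha, theta0, target, steps=5):
    """Run `steps` base-network updates driven by the meta-parameter
    alpha, then return the final loss (theta - target)^2."""
    theta = theta0
    for _ in range(steps):
        grad = 2.0 * (theta - target)   # d/dtheta of (theta - target)^2
        theta = theta - alpha * grad    # update rule controlled by the meta-learner
    return (theta - target) ** 2

def meta_train(tasks, alpha=0.01, meta_lr=0.005, epochs=200, eps=1e-4):
    """Meta-training loop: tune alpha by gradient descent on the
    unrolled loss, with the meta-gradient estimated by central
    finite differences."""
    for _ in range(epochs):
        theta0, target = random.choice(tasks)
        meta_grad = (unrolled_loss(alpha + eps, theta0, target) -
                     unrolled_loss(alpha - eps, theta0, target)) / (2 * eps)
        alpha -= meta_lr * meta_grad
    return alpha

random.seed(0)
# Each "task" is a (start, target) pair; few base updates are allowed per task.
tasks = [(0.0, random.uniform(-1.0, 1.0)) for _ in range(10)]
learned_alpha = meta_train(tasks)
print(learned_alpha)
```

The design point this sketch tries to capture: the base learner's loss is only an inner objective, while the meta objective is "how well does the base learner do *after* its few updates", which is exactly what matters in the few-shot setting.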