- Hands-On Meta Learning with Python
- Sudharsan Ravichandiran
Learning the optimizer
In this method, we try to learn the optimizer itself. How do we generally optimize a neural network? We train it on a large dataset and minimize the loss using gradient descent. But in the few-shot learning setting, gradient descent struggles because we have only a small dataset. So, in this case, we learn the optimizer itself. We will have two networks: a base network that actually tries to learn the task, and a meta network that optimizes the base network. We will explore how exactly this works in the upcoming sections.
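To make the two-network idea concrete, here is a minimal, hypothetical sketch (not the book's implementation): the base network is a one-parameter linear model, and the meta network is reduced to a learned per-coordinate update rule `update = a * grad + b`. The meta-parameters `a` and `b` are assumptions fixed to plausible values; in practice they would themselves be trained across many tasks (for example, with an LSTM optimizer).

```python
import numpy as np

rng = np.random.default_rng(0)

# Base network: a linear model y = w * x fitted to a toy task.
w = np.array([0.0])
x = rng.normal(size=20)
y = 3.0 * x  # target: w should approach 3.0

# Meta network (stand-in): a learned per-coordinate update rule
#   update = a * grad + b
# a and b are hypothetical meta-parameters; here they are fixed,
# but normally they would be meta-trained across tasks.
a, b = -0.1, 0.0

for step in range(100):
    grad = np.mean(2 * (w * x - y) * x)  # dLoss/dw for MSE loss
    w = w + (a * grad + b)               # meta network proposes the update

print(float(w[0]))  # approaches 3.0 under these assumed values
```

The point of the sketch is the division of labor: the base network only exposes its gradients, and the meta network decides how to turn those gradients into parameter updates, replacing the hand-crafted gradient descent rule.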