- Hands-On Meta Learning with Python
- Sudharsan Ravichandiran
Learning the initializations
In this method, we try to learn optimal initial parameter values. What do we mean by that? Suppose we are building a neural network to classify images. First, we initialize the weights randomly, calculate the loss, and minimize the loss through gradient descent, eventually arriving at the optimal weights. If, instead of initializing the weights randomly, we could initialize them with optimal or near-optimal values, then we could converge faster and learn much more quickly. We will see exactly how to find these optimal initial weights with algorithms such as MAML, Reptile, and Meta-SGD in the upcoming chapters.
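The claim above, that a near-optimal initialization converges faster than a random one, can be illustrated with a minimal sketch. Here we use a simple quadratic loss as a stand-in for a network's loss surface; the target weights `w_star` and the "learned" initialization are hypothetical values chosen for illustration, not part of any of the algorithms named above:

```python
import numpy as np

def gradient_descent(w, lr=0.1, tol=1e-3, max_steps=1000):
    """Minimize the quadratic loss L(w) = ||w - w_star||^2 by
    gradient descent and return the number of steps needed to
    drive the loss below `tol`."""
    w_star = np.array([3.0, -2.0])       # hypothetical optimal weights
    for step in range(1, max_steps + 1):
        grad = 2 * (w - w_star)          # gradient of L with respect to w
        w = w - lr * grad                # gradient descent update
        loss = np.sum((w - w_star) ** 2)
        if loss < tol:
            return step
    return max_steps

rng = np.random.default_rng(0)
random_init = rng.normal(size=2)         # the usual random initialization
learned_init = np.array([2.9, -1.9])     # an init already close to optimal

steps_random = gradient_descent(random_init)
steps_learned = gradient_descent(learned_init)
print(steps_random, steps_learned)       # fewer steps from the learned init
```

Starting close to the optimum, the second run needs only a handful of updates, while the random initialization needs several times more. Meta-learning algorithms such as MAML automate finding such an initialization across many tasks rather than hand-picking it as we did here.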