- Hands-On Meta Learning with Python
- Sudharsan Ravichandiran
Learning the initializations
In this method, we try to learn optimal initial parameter values. What do we mean by that? Let's say we are building a neural network to classify images. First, we initialize the weights randomly, calculate the loss, and minimize the loss through gradient descent to find the optimal weights. Instead of initializing the weights randomly, if we can initialize them with optimal or close-to-optimal values, then we can converge faster and learn very quickly. We will see how exactly we can find these optimal initial weights with algorithms such as MAML, Reptile, and Meta-SGD in the upcoming chapters.
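To make the idea concrete, here is a minimal first-order sketch (illustrative code, not taken from a later chapter) of learning an initialization. It meta-trains a single parameter `theta` on a family of toy regression tasks `y = a * x` so that one gradient step from `theta` already fits a newly sampled task well. The task distribution, learning rates, and the first-order approximation (in the spirit of FOMAML/Reptile) are all assumptions made for this sketch, not details from the book.

```python
import numpy as np

np.random.seed(0)

inner_lr = 0.1    # learning rate for the per-task adaptation step
meta_lr = 0.01    # learning rate for updating the shared initialization
theta = 0.0       # the initialization we are meta-learning (a single weight)

def sample_task():
    """A task is a random slope a; its data follows y = a * x (assumed toy setup)."""
    a = np.random.uniform(0.5, 2.0)            # assumed task distribution
    x = np.random.uniform(-1.0, 1.0, size=10)  # a small support set for this task
    return x, a * x

for step in range(2000):
    x, y = sample_task()

    # Inner loop: take one gradient-descent step from theta on this task's loss
    # L(theta) = mean((theta * x - y)^2), so dL/dtheta = 2 * mean((theta * x - y) * x)
    grad_inner = 2 * np.mean((theta * x - y) * x)
    theta_adapted = theta - inner_lr * grad_inner

    # Outer loop: nudge theta so the post-adaptation loss shrinks.
    # The exact meta-gradient needs second derivatives; this sketch uses the
    # first-order approximation (as FOMAML and Reptile do).
    grad_outer = 2 * np.mean((theta_adapted * x - y) * x)
    theta -= meta_lr * grad_outer

print("meta-learned initialization:", theta)  # drifts toward ~1.25, the mean slope
```

Running this, `theta` settles near the centre of the task distribution, which is exactly the "close-to-optimal" starting point described above: from there, a single gradient step adapts to any sampled task far faster than from a random initialization.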