
Learning the optimizer

In this method, we learn the optimizer itself. How do we generally optimize a neural network? We train it on a large dataset and minimize the loss using gradient descent. In the few-shot learning setting, however, gradient descent struggles because we only have a handful of examples. So, in this case, we learn the optimization procedure instead. We use two networks: a base network that actually tries to learn the task, and a meta network that optimizes the base network by proposing its parameter updates. We will explore how exactly this works in the upcoming sections.
