- Deep Learning Essentials
- Wei Di, Anurag Bhardwaj, Jianing Wei
Leaky ReLU and maxout
A Leaky ReLU has a small slope α on the negative side, such as 0.01, instead of outputting zero there. The slope α can also be made a learnable parameter of each neuron, as in PReLU neurons (P stands for parametric). The problem with this activation function is that the effectiveness of such modifications is inconsistent across different problems.
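As a minimal sketch (not taken from the book), a NumPy version of Leaky ReLU could look like the following; the value α = 0.01 and the sample inputs are purely illustrative, and PReLU would simply treat `alpha` as a parameter updated by backpropagation:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for x > 0, small slope alpha on the negative side."""
    return np.where(x > 0, x, alpha * x)

# PReLU differs only in that alpha is learned per neuron rather than fixed.
x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(leaky_relu(x))  # [-0.02  -0.005  0.  1.  3.]
```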
Maxout is another attempt to solve the dead neuron problem in ReLU. It takes the form $\max(w_1^T x + b_1, w_2^T x + b_2)$. From this form, we can see that both ReLU and Leaky ReLU are just special cases; for ReLU, it's $\max(0, w^T x + b)$, that is, $w_1 = 0$ and $b_1 = 0$. Although it benefits from linearity and has no saturation, it doubles the number of parameters for every single neuron.
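The doubling of parameters is easy to see in code. Below is a small sketch (assumed layer sizes, random weights for illustration only) of a maxout unit with two linear pieces, alongside the ReLU special case:

```python
import numpy as np

rng = np.random.default_rng(0)
in_features, out_features = 4, 3  # hypothetical sizes for illustration

# Each maxout unit keeps TWO weight vectors and TWO biases,
# which is why the parameter count per neuron doubles.
W1, b1 = rng.normal(size=(out_features, in_features)), np.zeros(out_features)
W2, b2 = rng.normal(size=(out_features, in_features)), np.zeros(out_features)

def maxout(x):
    """Element-wise max of two linear pieces: max(W1 x + b1, W2 x + b2)."""
    return np.maximum(W1 @ x + b1, W2 @ x + b2)

def relu_as_maxout(x):
    """ReLU recovered as the special case with one piece fixed to zero."""
    return np.maximum(0.0, W2 @ x + b2)

x = rng.normal(size=in_features)
print(maxout(x).shape, relu_as_maxout(x).shape)  # (3,) (3,)
```

Because the output is a maximum of linear functions, the unit is piecewise linear everywhere and never saturates, but the extra set of weights and biases is the price paid for avoiding dead neurons.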