- Intelligent Projects Using Python
- Santanu Pattanayak
Rectified linear unit (ReLU)
The output of a ReLU equals the total input to the neuron when that input is greater than zero, and is zero when the total input is negative. This simple activation function provides nonlinearity to a neural network and, at the same time, gives a constant gradient of one with respect to the total input whenever that input is positive. This constant gradient helps keep the network from developing the saturating or vanishing gradient problems seen with activation functions such as the sigmoid and tanh units. The ReLU function output (as shown in Figure 1.8) can be expressed as follows:
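ReLU(x) = max(0, x)

where x is the total input to the neuron, so ReLU(x) = x for x > 0 and ReLU(x) = 0 for x ≤ 0.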

The ReLU activation function can be plotted as follows:
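For instance, here is a minimal sketch with NumPy and Matplotlib (illustrative only; the function name relu and the input range are arbitrary choices, not taken from the book):

```python
# Minimal sketch (not the book's original code): plotting the ReLU activation.
import numpy as np
import matplotlib.pyplot as plt

def relu(x):
    """Element-wise ReLU: max(0, x)."""
    return np.maximum(0.0, x)

x = np.linspace(-5.0, 5.0, 200)  # illustrative input range
plt.plot(x, relu(x))
plt.title("ReLU activation")
plt.xlabel("total input x")
plt.ylabel("ReLU(x)")
plt.show()
```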

One limitation of the ReLU is its zero gradient for negative input values, which may slow down training, especially in the initial phase. The leaky ReLU activation function (as shown in Figure 1.9) can be useful in this scenario, since its output and gradient are nonzero even for negative inputs. The leaky ReLU output can be expressed as follows:
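f(x) = x for x > 0
f(x) = αx for x ≤ 0

where α is a small positive constant.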

The parameter α must be provided to a leaky ReLU activation function, whereas for a parametric ReLU, α is a parameter that the neural network learns through training. The following graph shows the output of the leaky ReLU activation function:
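As an illustration, the following minimal NumPy/Matplotlib sketch produces such a graph and checks that the gradient on the negative side is nonzero (not the book's code; alpha = 0.1 is used in the plot only to make the negative slope easy to see, while smaller values such as 0.01 are common in practice):

```python
# Minimal sketch (not the book's original code): leaky ReLU and its gradient.
import numpy as np
import matplotlib.pyplot as plt

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: x for x > 0, alpha * x for x <= 0 (alpha is a fixed hyperparameter)."""
    return np.where(x > 0, x, alpha * x)

def leaky_relu_grad(x, alpha=0.01):
    """Gradient: 1 for x > 0, alpha (small but nonzero) for x <= 0."""
    return np.where(x > 0, 1.0, alpha)

# Unlike the plain ReLU, the gradient is nonzero even for a negative input.
print(leaky_relu_grad(np.array([-2.0, 3.0])))  # -> [0.01 1.  ]

x = np.linspace(-5.0, 5.0, 200)
plt.plot(x, leaky_relu(x, alpha=0.1))  # alpha = 0.1 makes the negative slope visible
plt.title("Leaky ReLU activation (alpha = 0.1)")
plt.xlabel("total input x")
plt.ylabel("LeakyReLU(x)")
plt.show()
```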
