- Intelligent Projects Using Python
- Santanu Pattanayak
Rectified linear unit (ReLU)
The output of a ReLU is linear when the total input to the neuron is greater than zero, and the output is zero when the total input is negative. This simple activation function provides nonlinearity to a neural network while maintaining a constant gradient of one with respect to the total input. This constant gradient helps keep the network from developing the saturating or vanishing-gradient problems seen with activation functions such as the sigmoid and tanh units. The ReLU function output (as shown in Figure 1.8) can be expressed as follows:
$$f(x) = \max(0, x) = \begin{cases} x & \text{if } x \ge 0 \\ 0 & \text{if } x < 0 \end{cases}$$
The plot of the ReLU activation function is shown in Figure 1.8.
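As a minimal NumPy sketch (not taken from the book), the ReLU output and its gradient with respect to the total input can be written as follows; the helper names relu and relu_grad are illustrative:

```python
import numpy as np

def relu(x):
    # Output equals the input for positive inputs and zero otherwise.
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient is a constant 1 for positive inputs and 0 for negative inputs,
    # which avoids the saturation seen with sigmoid and tanh units.
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))       # [0.  0.  0.  1.5 3. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```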
One of the constraints of ReLU is its zero gradient for negative input values, which may slow down training, especially during the initial phase. Leaky ReLU activation functions (as shown in Figure 1.9) can be useful in this scenario, because their output and gradients are nonzero even for negative input values. A leaky ReLU output function can be expressed as follows:
$$f(x) = \begin{cases} x & \text{if } x > 0 \\ \alpha x & \text{if } x \le 0 \end{cases}$$
The parameter $\alpha$ is to be provided for leaky ReLU activation functions, whereas for a parametric ReLU, $\alpha$ is a parameter that the neural network learns through training. Figure 1.9 shows the output of the leaky ReLU activation function.

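Similarly, a minimal NumPy sketch (again, not taken from the book) of the leaky ReLU output and its gradient might look as follows, with alpha supplied as a fixed hyperparameter; in a parametric ReLU, the same alpha would instead be a trainable parameter updated by backpropagation:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # alpha is a user-supplied hyperparameter for leaky ReLU;
    # a parametric ReLU would learn this value during training.
    return np.where(x > 0, x, alpha * x)

def leaky_relu_grad(x, alpha=0.01):
    # Gradient is 1 for positive inputs and alpha (nonzero) for negative inputs,
    # so negative inputs still propagate a small gradient.
    return np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(leaky_relu(x))       # [-0.02  -0.005  0.     1.5    3.   ]
print(leaky_relu_grad(x))  # [0.01 0.01 0.01 1.   1.  ]
```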