- Hands-On Deep Learning Architectures with Python
- Yuxi (Hayden) Liu, Saransh Mehta
ReLU activation
The rectified linear unit, more commonly known as ReLU, is the most widely used activation function in deep learning models. It maps negative inputs to zero and passes positive inputs through unchanged, that is, f(x) = max(0, x). ReLU is popular partly because it deactivates neurons whose inputs are negative, producing sparse activations, which is desirable in networks containing thousands of neurons. The plot of the ReLU activation function is shown below:


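As a quick illustration (a minimal NumPy sketch, not taken from the book's code; the sample input values are arbitrary), ReLU can be computed as follows:

```python
import numpy as np

def relu(x):
    """ReLU: pass positive values through unchanged, clamp negatives to zero."""
    return np.maximum(0, x)

# Negative inputs are suppressed to zero, positive inputs are unchanged
x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(relu(x))  # [0. 0. 0. 1. 3.]
```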
A modified form of ReLU is leaky ReLU. While ReLU completely deactivates a neuron whose input is negative, leaky ReLU instead scales the negative values down by a small factor, say c. The following equation defines the leaky ReLU activation function:

$$
f(x) = \begin{cases} x, & x > 0 \\ c \cdot x, & x \leq 0 \end{cases}
$$
The plot of output values from the leaky ReLU activation function is shown below:

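Similarly, a minimal NumPy sketch of leaky ReLU (again not from the book's code; the factor c = 0.01 and the sample inputs are arbitrary choices for illustration):

```python
import numpy as np

def leaky_relu(x, c=0.01):
    """Leaky ReLU: scale negative values by a small factor c instead of zeroing them."""
    return np.where(x > 0, x, c * x)

# Negative inputs are scaled down rather than suppressed:
# -2.0 becomes -0.02 and -0.5 becomes -0.005, while positive inputs are unchanged
x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(leaky_relu(x))
```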