ReLU activation
Rectified linear, more commonly known as ReLU, is the most widely used activation function in deep learning models. It suppresses negative values to zero. The reason ReLU is so widely used is that it deactivates neurons that produce negative values, and this behavior is desirable in most networks containing thousands of neurons. The following plot shows the ReLU activation function:


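A minimal NumPy/Matplotlib sketch of ReLU follows; the relu name, the input range, and the plotting calls are illustrative choices, not taken from the book:

```python
import numpy as np
import matplotlib.pyplot as plt

def relu(x):
    # Positive inputs pass through unchanged; negative inputs are suppressed to zero
    return np.maximum(0, x)

# Plot ReLU over a symmetric input range (range chosen for illustration)
x = np.linspace(-10, 10, 200)
plt.plot(x, relu(x))
plt.title('ReLU activation')
plt.xlabel('input')
plt.ylabel('output')
plt.show()
```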
A modified form of ReLU is leaky ReLU. ReLU completely deactivates any neuron that produces a negative value. Instead of completely deactivating such neurons, leaky ReLU reduces their effect by a factor of, say, c. The following equation defines the leaky ReLU activation function:
$$ f(x) = \begin{cases} x, & \text{if } x > 0 \\ c \cdot x, & \text{if } x \leq 0 \end{cases} $$
The following plot shows the output values of the leaky ReLU activation function:

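As a sketch of the behavior described above, leaky ReLU can be implemented in NumPy as follows; the leaky_relu name and the factor c = 0.05 are illustrative assumptions, not values from the book:

```python
import numpy as np
import matplotlib.pyplot as plt

def leaky_relu(x, c=0.05):
    # Positive inputs pass through; negative inputs are scaled down by the factor c
    return np.where(x > 0, x, c * x)

# Plot leaky ReLU over a symmetric input range (range and c chosen for illustration)
x = np.linspace(-10, 10, 200)
plt.plot(x, leaky_relu(x))
plt.title('Leaky ReLU activation (c = 0.05)')
plt.xlabel('input')
plt.ylabel('output')
plt.show()
```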