- Deep Learning with PyTorch
- Vishnu Subramanian
Sigmoid
The sigmoid activation function has a simple mathematical form:

σ(x) = 1 / (1 + e^(-x))
The sigmoid function takes a real-valued number and squashes it into the range between zero and one. For a large negative input it returns a value close to zero, and for a large positive input it returns a value close to one. The following plot shows sigmoid outputs across a range of inputs:

[Figure: plot of the sigmoid function's output curve]
The sigmoid function has historically been used across different architectures, but in recent times it has fallen out of favor because of one major drawback: saturation. When the output of the sigmoid is close to zero or one, its gradient is nearly zero, so the layers before it receive near-zero gradients during backpropagation. Their learnable parameters are then barely updated, and the affected units can stop learning altogether, resulting in dead neurons.
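The saturation problem follows from the sigmoid's derivative, σ'(x) = σ(x)(1 − σ(x)), which peaks at 0.25 at x = 0 and vanishes for large |x|. A minimal sketch in plain Python (in PyTorch, autograd computes this gradient automatically):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid_grad(0.0))    # 0.25 -- the derivative's maximum
print(sigmoid_grad(10.0))   # tiny: the unit is saturated, almost no gradient flows back
print(sigmoid_grad(-10.0))  # tiny at the other extreme as well
```

Because the maximum gradient is only 0.25, stacking several sigmoid layers also shrinks gradients multiplicatively, which is one reason deep networks moved to alternatives such as ReLU.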