- Reinforcement Learning with TensorFlow
- Sayon Dutta
How to choose the right activation function
The choice of activation function depends on the objective of the problem and the properties required of the network. Some useful rules of thumb are as follows:
Sigmoid functions work well in shallow networks and binary classifiers. In deeper networks, however, they can cause vanishing gradients.
The ReLU function is the most widely used. Start with ReLU, and if it leads to dead neurons, try Leaky ReLU; move to another activation function only if ReLU does not give good results.
Use softmax in the output layer for multi-class classification.
Avoid using ReLU in the output layer.
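As a minimal sketch of the activation functions discussed above (written in plain NumPy rather than TensorFlow, purely for illustration), each can be defined in a few lines:

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); suits binary classifiers and shallow nets
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Zeroes out negatives; the usual default for hidden layers
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small negative slope keeps "dead" neurons from stalling at zero gradient
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Turns logits into a probability distribution for multi-class output layers
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])
print(relu(logits))
print(softmax(logits))  # entries sum to 1
```

In a TensorFlow/Keras model these would typically be selected per layer, e.g. `activation='relu'` on hidden `Dense` layers and `activation='softmax'` on the final classification layer.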