- Hands-On Deep Learning Architectures with Python
- Yuxi (Hayden) Liu, Saransh Mehta
Activation functions
Activation functions are an integral part of any deep learning model. An activation function is a mathematical function that squashes its input values into a certain range. Suppose you feed a neural network real-valued inputs, initialize the weight matrix with random numbers, and wish to use the output for classification; that is, you need the output to lie between zero and one. A neuron, however, can output any value, such as -2.2453 or 17854.763, so the output needs to be scaled to a specific range. This is what an activation function does.
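As a minimal sketch of this squashing behavior, the sigmoid function (one common activation, discussed later in the chapter) maps any real number into the range (0, 1). The sample values below are illustrative, echoing the arbitrary neuron outputs mentioned above:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: maps any real input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Raw neuron outputs can be arbitrarily large or small.
raw_outputs = np.array([-2.2453, 0.0, 17854.763])

# After the activation, every value is squashed into (0, 1),
# so it can be interpreted as a classification score.
squashed = sigmoid(raw_outputs)
print(squashed)
```

Note that for a very large input such as 17854.763, the result rounds to 1.0 in floating point, but mathematically it never reaches 1.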
Many activation functions exist, each suited to different requirements. We will discuss some of those used most often in deep learning.