- Neural Networks with R
- Giuseppe Ciaburro Balaji Venkateswaran
Activation functions
The abstraction of neural network processing is achieved mainly through activation functions. An activation function is a mathematical function that converts an input to an output and adds the magic of neural network processing. Without activation functions, a neural network would behave like a linear function. A linear function is one whose output is directly proportional to its input, for example:

f(x) = 2x

A linear function is a polynomial of degree one. Simply put, its graph is a straight line without any curves.
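To see why the absence of activation functions leaves a network linear, here is a minimal sketch (plain Python rather than the book's R; the weights and biases are illustrative, not from the text): two "layers" that each apply a linear map compose into a single linear map, so depth adds nothing.

```python
# Two linear "layers": each maps x -> w*x + b, with no activation between them.
def layer1(x):
    return 2.0 * x + 1.0   # w1 = 2, b1 = 1 (arbitrary example values)

def layer2(x):
    return 3.0 * x - 4.0   # w2 = 3, b2 = -4

def network(x):
    # Stacking the layers: layer2(layer1(x))
    return layer2(layer1(x))

def collapsed(x):
    # Algebraically the stack collapses to one linear map:
    # w2*(w1*x + b1) + b2 = (w2*w1)*x + (w2*b1 + b2) = 6x - 1
    return 6.0 * x - 1.0

for x in (-2.0, 0.0, 5.0):
    assert network(x) == collapsed(x)
print("a stack of linear layers collapses to a single linear layer")
```

Inserting a nonlinear activation between the two layers is exactly what breaks this collapse.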
However, most of the problems that neural networks try to solve are nonlinear and complex in nature. Activation functions are used to achieve this nonlinearity. Polynomials of degree greater than one are a simple example of nonlinear functions:

f(x) = x^2 + 2x

The graph of a nonlinear function is curved, and this is what adds the complexity factor.
Activation functions give the nonlinearity property to neural networks and make them true universal function approximators.
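As an illustration of what such functions look like in code (a language-agnostic Python sketch, not code from the book, which uses R), three of the most common activation functions can be written directly from their formulas:

```python
import math

def sigmoid(x):
    # Logistic sigmoid: squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes input into (-1, 1), zero-centered.
    return math.tanh(x)

def relu(x):
    # Rectified linear unit: passes positive inputs through, zeroes negatives.
    return max(0.0, x)

# Each function bends a straight line; that bend is the nonlinearity
# that lets a network approximate curved functions.
print(sigmoid(0.0))  # midpoint of the sigmoid: 0.5
print(relu(-3.0))    # negative input is clipped to 0.0
```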