- Neural Networks with R
- Giuseppe Ciaburro Balaji Venkateswaran
Activation functions
The abstraction power of neural networks comes mainly from their activation functions. An activation function is a mathematical function that converts an input to an output and gives neural network processing its expressive power. Without activation functions, a neural network would behave like a linear function. A linear function is one whose output is directly proportional to its input, for example:

f(x) = 2x + 3

A linear function is a polynomial of degree one. Its graph is simply a straight line, without any curves.
However, most of the problems that neural networks try to solve are nonlinear and complex in nature. Activation functions are used to introduce this nonlinearity. Nonlinear functions include polynomials of degree higher than one, for example:

f(x) = x^2 + 2x + 3

The graph of a nonlinear function is curved, and it is this curvature that captures the added complexity.
Activation functions give neural networks their nonlinearity and make them true universal function approximators.
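As a brief illustration (a sketch, not code from the book itself), three widely used activation functions can be defined in a few lines of R; `tanh()` is already built into base R:

```r
# Sigmoid: squashes any real input into the (0, 1) range
sigmoid <- function(x) 1 / (1 + exp(-x))

# ReLU (rectified linear unit): zero for negative inputs, identity otherwise
relu <- function(x) pmax(0, x)

x <- c(-2, -1, 0, 1, 2)

sigmoid(x)  # smooth S-shaped curve centered at 0.5
tanh(x)     # like sigmoid, but squashes into (-1, 1)
relu(x)     # 0 0 0 1 2
```

Applying each function to the same inputs makes the contrast with a linear function visible: none of the three outputs is proportional to `x`, which is exactly the nonlinearity the text describes.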