
Getting started with activation functions

If we only use linear activation functions, a neural network collapses into a single linear transformation of its input: a composition of linear functions is itself linear, no matter how many layers we stack. The power of neural networks lies in their ability to model complex nonlinear behavior. We briefly introduced the nonlinear activation functions sigmoid and ReLU in the previous recipes, and there are many more popular nonlinear activation functions, such as ELU, Leaky ReLU, TanH, and Maxout.
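
To make these functions concrete, here is a minimal NumPy sketch (our own illustration, not part of the recipe) that defines the activations mentioned above; the alpha values are illustrative defaults, not prescribed settings:

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small slope alpha for negative inputs
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth for negative inputs, saturating at -alpha
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def tanh(x):
    # Squashes inputs into the range (-1, 1)
    return np.tanh(x)

x = np.linspace(-3.0, 3.0, 7)
print(leaky_relu(x))
```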

There is no general rule as to which activation function works best for the hidden units. Deep learning is a relatively new field, and most results are obtained by trial and error instead of mathematical proofs. For regression tasks, we use a single output unit with a linear activation function. For classification tasks with n classes, we use n output nodes and a softmax activation function. The softmax function forces the network to output probabilities between 0 and 1 for mutually exclusive classes, and these probabilities sum up to 1. For binary classification, we can also use a single output node with a sigmoid activation function to output probabilities.
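
As a quick illustration, the following sketch (our own, not from the recipe) implements softmax in NumPy; subtracting the maximum logit before exponentiating is a standard trick for numerical stability:

```python
import numpy as np

def softmax(z):
    # Shift by the maximum logit for numerical stability, then
    # exponentiate and normalize so the outputs sum to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)        # roughly [0.659 0.242 0.099]
print(probs.sum())  # 1.0
```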

Choosing the correct activation function for the hidden units can be crucial. In the backward pass, the weight updates depend on the derivative of the activation function. For deep neural networks, the gradients reaching the first couple of layers can shrink to zero (known as the vanishing gradients problem) or grow exponentially large (known as the exploding gradients problem). This happens especially when the activation function has a derivative that only takes on small values (for example, the sigmoid, whose derivative never exceeds 0.25) or a derivative that can take values larger than 1.
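
To see why small derivatives compound, consider this short sketch (our own illustration): backpropagation multiplies one activation derivative per layer, so even the sigmoid's best-case derivative of 0.25 shrinks the gradient exponentially with depth:

```python
# Each layer contributes at most a factor of 0.25 (the maximum of the
# sigmoid derivative) to the gradient during backpropagation, so the
# gradient reaching the early layers decays exponentially with depth.
for depth in [1, 5, 10, 20]:
    print(f"depth {depth:2d}: upper bound on gradient factor = {0.25 ** depth:.2e}")
```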

Activation functions such as the ReLU prevent such cases. The ReLU has a derivative of 1 when its input is positive and 0 otherwise. Using a ReLU activation function generates a sparse network with a relatively small number of activated connections, and the gradient signal passed back through the network tends to be more useful in such cases. In some cases, the ReLU causes too many of the neurons to die (they output zero for every input and stop receiving updates); in such cases, you should try a variant such as Leaky ReLU. In our next recipe, we will compare the results of a sigmoid and a ReLU activation function when classifying handwritten digits with a deep FNN.
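
The dying-ReLU issue and the Leaky ReLU fix are easy to see from the gradients; this short sketch (our own, with hypothetical helper names) compares the two:

```python
import numpy as np

def relu_grad(x):
    # Gradient is 1 for positive inputs and exactly 0 otherwise:
    # a unit stuck in the negative region receives no updates at all.
    return (x > 0).astype(float)

def leaky_relu_grad(x, alpha=0.01):
    # Leaky ReLU keeps a small slope alpha for negative inputs,
    # so "dead" units can still recover during training.
    return np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(relu_grad(x))        # [0. 0. 1. 1.]
print(leaky_relu_grad(x))  # roughly [0.01 0.01 1. 1.]
```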
