
Getting started with activation functions

If we only used linear activation functions, a neural network would collapse into a single linear transformation of its inputs, no matter how many layers we stacked. The power of neural networks lies in their ability to model complex nonlinear behavior. We briefly introduced the nonlinear activation functions sigmoid and ReLU in the previous recipes, and there are many more popular nonlinear activation functions, such as ELU, Leaky ReLU, TanH, and Maxout.
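To make these functions concrete, the following is a minimal NumPy sketch of the activations mentioned above; the function names and the default slope values for Leaky ReLU and ELU are illustrative choices, not fixed definitions from a specific library:

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs to the (0, 1) range
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes inputs to the (-1, 1) range
    return np.tanh(x)

def relu(x):
    # Zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but with a small slope alpha for negative inputs
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth exponential curve for negative inputs instead of a hard zero
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-3, 3, 7)
print(relu(x))        # [0. 0. 0. 0. 1. 2. 3.]
print(leaky_relu(x))  # negative side is scaled by 0.01 instead of clipped to 0
```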

There is no general rule as to which activation works best for the hidden units. Deep learning is a relatively new field, and most results are obtained by trial and error rather than mathematical proof. For the output unit, we use a single output unit with a linear activation function for regression tasks. For classification tasks with n mutually exclusive classes, we use n output nodes and a softmax activation function; the softmax forces the network to output class probabilities between 0 and 1 that sum to 1. For binary classification, we can also use a single output node with a sigmoid activation function to output a probability.
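A small sketch of the softmax output, using made-up logits for three classes, shows how the raw scores become probabilities that sum to 1:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # raw scores for 3 mutually exclusive classes
probs = softmax(logits)
print(probs)        # approximately [0.659 0.242 0.099]
print(probs.sum())  # 1.0
```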

Choosing the correct activation function for the hidden units can be crucial. In the backward pass, the weight updates depend on the derivative of the activation function. In deep neural networks, the gradients in the first couple of layers can shrink toward zero (the vanishing gradients problem) or grow exponentially large (the exploding gradients problem). Vanishing gradients are especially likely when the activation function's derivative only takes on small values (the sigmoid's derivative, for example, is at most 0.25), while exploding gradients can occur when the derivatives repeatedly take values larger than 1.
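A back-of-the-envelope sketch illustrates the vanishing gradient effect for the sigmoid: even in the best case, its derivative is 0.25, and chaining that factor over many layers (the depth of 10 below is just an example) shrinks the gradient dramatically:

```python
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    # Derivative of the sigmoid: s * (1 - s), which peaks at x = 0
    return s * (1.0 - s)

print(sigmoid_grad(0.0))  # 0.25, the largest value the derivative can take
print(0.25 ** 10)         # ~9.5e-07: the gradient after 10 such layers
```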

Activation functions such as the ReLU help prevent these problems. The ReLU has a derivative of 1 when the output is positive and 0 otherwise. When using a ReLU activation function, the network becomes sparse, with a relatively small number of activated connections, and the error signal passed back through the network remains more informative. In some cases, however, the ReLU causes too many neurons to die (they output zero for all inputs and stop learning); in such cases, you should try a variant such as Leaky ReLU. In our next recipe, we will compare the results of a sigmoid and a ReLU activation function when classifying handwritten digits with a deep FNN.
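As a preview of that comparison, here is a hedged Keras-style sketch (assuming a Keras 2 setup and flattened 28x28 MNIST inputs, that is, 784 features); the build_model helper, layer sizes, and optimizer are illustrative choices, and the LeakyReLU argument name may differ between Keras versions:

```python
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU

def build_model(hidden_activation='relu'):
    # Identical architectures that differ only in the hidden activation
    model = Sequential()
    model.add(Dense(128, activation=hidden_activation, input_shape=(784,)))
    model.add(Dense(64, activation=hidden_activation))
    model.add(Dense(10, activation='softmax'))  # 10 digit classes
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

sigmoid_model = build_model('sigmoid')
relu_model = build_model('relu')

# If too many ReLU units die, a Leaky ReLU layer can be used instead:
leaky_model = Sequential()
leaky_model.add(Dense(128, input_shape=(784,)))
leaky_model.add(LeakyReLU(alpha=0.01))  # alpha is the slope for negative inputs
leaky_model.add(Dense(10, activation='softmax'))
```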
