
Getting started with activation functions

If we only use linear activation functions, a neural network represents nothing more than a stack of linear combinations, which collapses to a single linear transformation of its inputs. The power of neural networks lies in their ability to model complex nonlinear behavior. We briefly introduced the nonlinear activation functions sigmoid and ReLU in the previous recipes, and there are many more popular nonlinear activation functions, such as ELU, Leaky ReLU, TanH, and Maxout.
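As a quick reference, the following is a minimal NumPy sketch of the activation functions mentioned above; the function names and the alpha parameters for Leaky ReLU and ELU are illustrative choices, not definitions from this recipe.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def tanh(x):
    return np.tanh(x)

def leaky_relu(x, alpha=0.01):
    # small negative slope instead of a hard zero for x < 0
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # smooth exponential curve for x < 0
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-3, 3, 7)
print(relu(x))  # zeros for negative inputs, identity for positive ones
```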

There is no general rule as to which activation function works best for the hidden units. Deep learning is a relatively new field and most results are obtained by trial and error rather than mathematical proof. For the output layer, the choice depends on the task: for regression, we use a single output unit with a linear activation function. For classification tasks with n classes, we use n output nodes and a softmax activation function. The softmax function forces the network to output probabilities between 0 and 1 for mutually exclusive classes, and these probabilities sum to 1. For binary classification, we can also use a single output node with a sigmoid activation function to output probabilities.
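The following is a small sketch (the helper function is assumed, not taken from the recipe) showing that softmax maps raw class scores to probabilities that sum to 1:

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)        # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])   # raw outputs of three class nodes
probs = softmax(logits)
print(probs, probs.sum())            # roughly [0.659 0.242 0.099], summing to 1.0
```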

Choosing the correct activation function for the hidden units can be crucial. In the backward pass, the weight updates depend on the derivative of the activation function. For deep neural networks, the gradients in the first couple of layers can shrink towards zero (the vanishing gradients problem) or grow exponentially large (the exploding gradients problem). This happens especially when the activation function has a derivative that only takes on small values (for example, the sigmoid activation function, whose derivative is at most 0.25) or a derivative that can take values larger than 1.
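A rough illustration (the setup here is assumed for demonstration) of why sigmoid activations can cause vanishing gradients: the sigmoid derivative is at most 0.25, so a product of such derivatives over many layers shrinks towards zero.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

n_layers = 20
grad = 1.0
for _ in range(n_layers):
    # a pre-activation of 0 gives the largest possible derivative, 0.25
    grad *= sigmoid_derivative(0.0)

print(grad)  # 0.25 ** 20, about 9.1e-13: effectively zero by the first layers
```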

Activation functions such as the ReLU help prevent such cases. The ReLU has a derivative of 1 when its input is positive and 0 otherwise. When using a ReLU activation function, a sparse network is generated with a relatively small number of activated connections, and the gradient signal passed back through the network tends to be more informative. In some cases, the ReLU causes too many of the neurons to die; in such cases, you should try a variant such as Leaky ReLU. In our next recipe, we will compare the difference in results between a sigmoid and a ReLU activation function when classifying handwritten digits with a deep FNN.
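As a preview, the sketch below uses the Keras Sequential API to build two small fully connected networks that differ only in their hidden activation; the layer sizes, optimizer, and function name are illustrative assumptions, not the exact code of the next recipe.

```python
from keras.models import Sequential
from keras.layers import Dense

def build_fnn(hidden_activation):
    # Two hidden layers with the chosen activation, softmax output for 10 digit classes
    model = Sequential()
    model.add(Dense(256, activation=hidden_activation, input_shape=(784,)))
    model.add(Dense(256, activation=hidden_activation))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='sgd', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

sigmoid_model = build_fnn('sigmoid')
relu_model = build_fnn('relu')
```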
