
ReLU activation

Rectified linear unit, more commonly known as the ReLU function, is the most widely used activation function in deep learning models. It suppresses negative values to zero. ReLU is so widely used because it deactivates the neurons that produce negative values, so only a subset of neurons is active at any time; this kind of sparse behavior is desirable in networks containing thousands of neurons. Following is the plot of the ReLU activation function:
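
As a minimal sketch of this behavior (assuming NumPy is available; the function and variable names are illustrative), ReLU can be applied element-wise as follows:

```python
import numpy as np

def relu(x):
    # Suppress negative values to zero; leave positive values unchanged
    return np.maximum(0, x)

# Example: negative pre-activations are zeroed out
z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))  # [0.  0.  0.  1.5 3. ]
```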

A modified form of ReLU is leaky ReLU. Whereas ReLU completely deactivates a neuron with a negative value, leaky ReLU instead reduces the effect of such neurons by a small factor, say c. The following equation defines the leaky ReLU activation function:

f(x) = x if x > 0, and f(x) = cx if x ≤ 0
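
A minimal sketch of leaky ReLU along the same lines, again assuming NumPy; the default factor c = 0.01 is only an illustrative choice, not a value fixed by the text:

```python
import numpy as np

def leaky_relu(x, c=0.01):
    # Scale negative values by the small factor c instead of zeroing them
    return np.where(x > 0, x, c * x)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(leaky_relu(z))  # [-0.02  -0.005  0.     1.5    3.   ]
```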

Following is the plot of output values from the leaky ReLU activation function:
