
How to do it...

This section covers the types of activation functions used in multilayer perceptrons. Activation is one of the critical components of an ANN, as it defines the output of a node based on its input. Many different activation functions are used when building a neural network; the most common ones are listed below, followed by a short usage sketch:

  • Sigmoid: The sigmoid activation function is a continuous function, also known as the logistic function, and has the form 1/(1+exp(-x)). The sigmoid function has a tendency to zero out the backpropagation terms during training, leading to saturation of the response. In TensorFlow, the sigmoid activation function is defined using the tf.nn.sigmoid function.
  • ReLU: Rectified linear unit (ReLU) is one of the most famous continuous, but not smooth, activation functions used in neural networks to capture non-linearity. The ReLU function is defined as max(0,x). In TensorFlow, the ReLU activation function is defined as tf.nn.relu.
  • ReLU6: It caps the ReLU function at 6 and is defined as min(max(0,x), 6), so the activation never becomes very large. The function is defined in TensorFlow as tf.nn.relu6.
  • tanh: The hyperbolic tangent is another smooth function used as an activation function in neural networks. It is bounded in [-1, 1] and implemented in TensorFlow as tf.nn.tanh.
  • softplus: It is a smooth version of ReLU, so its derivative exists everywhere, and is defined as log(exp(x)+1). In TensorFlow, softplus is defined as tf.nn.softplus.
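
The following is a minimal sketch showing how these activation functions can be evaluated on a sample tensor. It assumes TensorFlow 2.x with eager execution; with TensorFlow 1.x you would instead run the ops inside a session. The input values are illustrative only:

```python
import tensorflow as tf

# Sample inputs spanning negative and positive values (illustrative)
x = tf.constant([-6.0, -1.0, 0.0, 1.0, 6.0, 10.0])

print(tf.nn.sigmoid(x))   # 1/(1+exp(-x)), squashes values into (0, 1)
print(tf.nn.relu(x))      # max(0, x)
print(tf.nn.relu6(x))     # min(max(0, x), 6), caps the output at 6
print(tf.nn.tanh(x))      # hyperbolic tangent, bounded in [-1, 1]
print(tf.nn.softplus(x))  # log(exp(x)+1), a smooth version of ReLU
```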