
How to do it...

This section covers the types of activation functions used in multilayer perceptrons. Activation is one of the critical components of an ANN, as it defines the output of a node based on the given input. Many different activation functions are used while building a neural network; the most common ones are listed here, followed by a short comparison sketch after the list:

  • Sigmoid: The sigmoid activation function is a continuous function, also known as the logistic function, with the form 1/(1+exp(-x)). The sigmoid has a tendency to zero out the backpropagated gradients during training, leading to saturation in the response. In TensorFlow, the sigmoid activation function is available as tf.nn.sigmoid.
  • ReLU: The rectified linear unit (ReLU) is one of the most widely used activation functions for capturing non-linearity in neural networks; it is continuous but not smooth. The ReLU function is defined as max(0, x). In TensorFlow, the ReLU activation function is available as tf.nn.relu.
  • ReLU6: It caps the ReLU function at 6 and is defined as min(max(0, x), 6), so the output is bounded between 0 and 6. The function is available in TensorFlow as tf.nn.relu6.
  • tanh: The hyperbolic tangent is another smooth function used as an activation function in neural networks; it is bounded in the range [-1, 1] and is implemented as tf.nn.tanh.
  • softplus: It is a smooth approximation of ReLU, so its derivative exists everywhere, and is defined as log(exp(x)+1). In TensorFlow, softplus is available as tf.nn.softplus.
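
The following is a minimal sketch, assuming TensorFlow 2.x with eager execution (under TensorFlow 1.x the tensors would instead need to be evaluated inside a tf.Session); it simply applies each of the activations above to the same sample inputs so that their ranges and clipping behaviour can be compared side by side:

import tensorflow as tf

# Sample inputs spanning negative, zero, small positive, and large positive values
x = tf.constant([-10.0, -1.0, 0.0, 1.0, 7.0])

activations = {
    "sigmoid":  tf.nn.sigmoid(x),   # 1/(1+exp(-x)), output in (0, 1)
    "relu":     tf.nn.relu(x),      # max(0, x)
    "relu6":    tf.nn.relu6(x),     # min(max(0, x), 6), large inputs capped at 6
    "tanh":     tf.nn.tanh(x),      # hyperbolic tangent, output in (-1, 1)
    "softplus": tf.nn.softplus(x),  # log(exp(x) + 1), smooth approximation of ReLU
}

for name, value in activations.items():
    print(name, value.numpy())

For the input 7.0, for example, relu returns 7 while relu6 returns 6, illustrating the cap described in the list above.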