
How to do it...

This section covers the types of activation functions used in multilayer perceptrons. The activation function is one of the critical components of an ANN, as it defines the output of a node for a given input. There are many different activation functions used while building a neural network (a short base R sketch after this list illustrates each of them):

  • Sigmoid: The sigmoid activation function is a continuous function, also known as the logistic function, and has the form 1/(1+exp(-x)). The sigmoid function tends to zero out the backpropagated terms during training, leading to saturation in its response. In TensorFlow, the sigmoid activation function is defined using the tf.nn.sigmoid function.
  • ReLU: Rectified linear unit (ReLU) is one of the most famous continuous, but not smooth, activation functions used in neural networks to capture non-linearity. The ReLU function is defined as max(0,x). In TensorFlow, the ReLU activation function is defined as tf.nn.relu.
  • ReLU6: It caps the ReLU function at 6 and is defined as min(max(0,x), 6), so the output is bounded and never becomes very large. The function is defined in TensorFlow as tf.nn.relu6.
  • tanh: The hyperbolic tangent is another smooth function used as an activation function in neural networks; it is bounded in the range [-1, 1] and is implemented as tf.nn.tanh.
  • softplus: It is a smooth version of ReLU, so its derivative exists everywhere, and is defined as log(exp(x)+1). In TensorFlow, softplus is defined as tf.nn.softplus.
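
The following is a minimal base R sketch, not taken from the recipe itself, that evaluates each of these activations on a sample input vector. The helper names sigmoid, relu, relu6, and softplus are illustrative definitions of the formulas listed above; the TensorFlow calls named in each bullet apply the same element-wise transformations to tensors (through the tensorflow R package they are reached as tf$nn$sigmoid, tf$nn$relu, and so on).

# Illustrative base R definitions of the activations described above
sigmoid  <- function(x) 1 / (1 + exp(-x))     # logistic function
relu     <- function(x) pmax(0, x)            # max(0, x)
relu6    <- function(x) pmin(pmax(0, x), 6)   # min(max(0, x), 6)
softplus <- function(x) log(exp(x) + 1)       # smooth version of ReLU
# tanh() is already available in base R

x <- seq(-8, 8, by = 2)                       # sample inputs
round(data.frame(x        = x,
                 sigmoid  = sigmoid(x),
                 relu     = relu(x),
                 relu6    = relu6(x),
                 tanh     = tanh(x),
                 softplus = softplus(x)), 3)

For x = 8, relu returns 8 while relu6 caps the value at 6, which makes the difference between the two visible in the printed table.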