
The workings of ANNs

We have seen how a single neuron, or perceptron, works; now, let's expand that concept to deep learning. The following diagram shows what multiple perceptrons look like when connected together:

Fig 2.12: Multiple perceptrons

In the preceding diagram, we can see various layers of single perceptrons connected to each other through their inputs and outputs. The input layer is violet, the hidden layers are blue and green, and the output layer of the network is represented in red.

The input layer takes in real values from the data, so each input node holds an actual data value. Next come the hidden layers, which sit between the input and output layers; if three or more hidden layers are present, the network is considered a deep neural network. The final layer is the output layer, which produces the network's estimate of whatever quantity we are trying to predict. As we progress through more layers, the level of abstraction increases.
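To make the layer structure concrete, here is a minimal forward-pass sketch in Python using NumPy (which the book has not introduced here); the layer sizes, the ReLU activation, and all variable names are illustrative assumptions rather than the book's own example:

```python
import numpy as np

def relu(x):
    # Activation functions are covered in the next section; ReLU is used
    # here only as an assumed placeholder so the sketch is runnable.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Assumed layer sizes: 3 input nodes -> two hidden layers of 4 -> 1 output
layer_sizes = [3, 4, 4, 1]

# One weight matrix and bias vector per transition between layers
weights = [rng.normal(size=(n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(x):
    # The input layer simply passes the raw data values forward; every
    # subsequent layer computes a weighted sum plus bias, then an activation.
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = a @ W + b
        # Hidden layers use the activation; the output layer is left linear
        a = relu(z) if i < len(weights) - 1 else z
    return a

sample = np.array([0.5, -1.2, 3.0])  # one row of real-valued input data
print(forward(sample))               # the network's output estimate
```

Each pass through the loop corresponds to one layer of perceptrons in the diagram: the activations of one layer become the inputs of the next, which is where the increasing level of abstraction comes from.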

In the next section, we will look at an important topic in deep learning: activation functions and their types.
