
The workings of ANNs

We have seen how a single neuron, or perceptron, works; now, let's expand this concept to deep learning. The following diagram shows what multiple connected perceptrons look like:

Fig 2.12: Multiple perceptrons

In the preceding diagram, we can see several layers of single perceptrons connected to each other through their inputs and outputs. The input layer is violet, the hidden layers are blue and green, and the output layer of the network is represented in red.

The input layer takes in real values directly from the data. The next layers are the hidden layers, which sit between the input and output layers. If three or more hidden layers are present, the network is considered a deep neural network. The final layer is the output layer, which produces the network's estimate of whatever quantity we are trying to predict. As we progress through more layers, the level of abstraction increases. A minimal sketch of how data flows through these layers is shown below.
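To make this layer structure concrete, here is a minimal sketch (not the book's own code) of a forward pass through such a network using NumPy. The layer sizes, the random weights, and the choice of ReLU for the hidden layers are illustrative assumptions; activation functions are covered in the next section.

```python
import numpy as np

def relu(x):
    # A common hidden-layer activation (discussed in the next section)
    return np.maximum(0, x)

def forward_pass(x, layers):
    """Propagate an input vector through a list of (weights, biases) layers."""
    activation = x
    # Apply ReLU on the hidden layers; keep the final (output) layer linear
    for weights, biases in layers[:-1]:
        activation = relu(weights @ activation + biases)
    w_out, b_out = layers[-1]
    return w_out @ activation + b_out

# Hypothetical network: 3 inputs -> two hidden layers (4 and 3 units) -> 1 output
rng = np.random.default_rng(0)
layer_sizes = [3, 4, 3, 1]
layers = [
    (rng.standard_normal((out_size, in_size)), np.zeros(out_size))
    for in_size, out_size in zip(layer_sizes[:-1], layer_sizes[1:])
]

x = np.array([0.5, -1.2, 3.0])   # real values from the data (input layer)
print(forward_pass(x, layers))   # the output layer's estimate
```

Each pair of weights and biases corresponds to one set of connections between adjacent layers in the diagram; stacking more such pairs gives a deeper network.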

In the next section, we will look at an important topic in deep learning: activation functions and their types.
