- Python Deep Learning
- Ivan Vasilev Daniel Slater Gianmario Spacagna Peter Roelants Valentino Zocca
An introduction to layers
A neural network can have any number of neurons, organized in interconnected layers. The input layer represents the dataset and the initial conditions. For example, if the input is a grayscale image, the output of each neuron in the input layer is the intensity of one pixel of the image. For this reason, we don't generally count the input layer as part of the network's layers. When we say a 1-layer net, we actually mean a simple network with just a single layer, the output layer, in addition to the input layer.
Unlike the examples we've seen so far, the output layer can have more than one neuron. This is especially useful in classification, where each output neuron represents one class. For example, in the case of the Modified National Institute of Standards and Technology (MNIST) dataset, we'll have 10 output neurons, where each neuron corresponds to a digit from 0 to 9. In this way, we can use the 1-layer net to classify the digit in each image. We'll determine the digit by taking the output neuron with the highest activation value. If this is y_7, we'll know that the network thinks the image shows the number 7.
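The classification rule described above, picking the output neuron with the highest activation, is just an argmax over the output vector. A minimal sketch, using made-up activation values for the 10 MNIST output neurons:

```python
import numpy as np

# Hypothetical activations of the 10 output neurons (y_0..y_9) for one image.
# These values are invented purely for illustration.
outputs = np.array([0.02, 0.01, 0.05, 0.03, 0.04, 0.02, 0.01, 0.76, 0.03, 0.03])

# The predicted digit is the index of the most active output neuron.
predicted_digit = int(np.argmax(outputs))
print(predicted_digit)  # 7: the network "thinks" the image shows a 7
```

Because the class labels 0 to 9 coincide with the neuron indices, the argmax index is directly the predicted digit.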
In the following diagram, you can see the 1-layer feedforward network. In this case, we explicitly show the weights w for each connection between the neurons, but usually, the edges connecting neurons represent the weights implicitly. Weight w_ij connects the i-th input neuron with the j-th output neuron. The first input, 1, is the bias unit, and its weight, b_1, is the bias weight:
(Figure: a 1-layer feedforward network, with input neurons on the left, weights w on the connecting edges, and output neurons on the right.)
In the preceding diagram, we see a 1-layer neural network: the neurons on the left represent the input with bias b, the middle column represents the weights of each connection, and the neurons on the right represent the output given the weights w.
The neurons of one layer can be connected to the neurons of other layers, but not to other neurons of the same layer. In this case, the input neurons are connected only to the output neurons.
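The structure just described can be sketched in a few lines of NumPy. This is a minimal forward pass for a 1-layer net, where W[i, j] plays the role of w_ij; the sigmoid activation is a common choice, used here for illustration (the diagram only fixes the weighted-sum structure, not the activation function):

```python
import numpy as np

def one_layer_forward(x, W, b):
    """Forward pass of a 1-layer net: output j is f(sum_i w_ij * x_i + b_j),
    where f is the logistic sigmoid."""
    z = x @ W + b                     # one weighted sum per output neuron
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation

# Toy example with 3 input neurons and 2 output neurons (shapes are illustrative).
x = np.array([0.5, -1.0, 2.0])        # input vector
W = np.array([[0.1, -0.2],
              [0.4,  0.3],
              [-0.5, 0.8]])           # W[i, j] connects input i to output j
b = np.array([0.05, -0.1])            # bias weights

y = one_layer_forward(x, W, b)
print(y.shape)  # (2,): one activation per output neuron
```

Note that because the input neurons connect only to the output neurons, the whole layer reduces to a single matrix-vector product followed by an elementwise activation.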
But why do we need to organize the neurons in layers in the first place? One argument is that a single neuron can convey only limited information (just one value). But when we combine neurons into layers, their outputs compose a vector and, instead of a single activation, we can consider the vector in its entirety. In this way, we can convey much more information, not only because the vector has multiple values, but also because the relative ratios between them carry additional information.