
An introduction to layers

A neural network can have an arbitrary number of neurons, which are organized in interconnected layers. The input layer represents the dataset and the initial conditions. For example, if the input is a grayscale image, the output of each neuron in the input layer is the intensity of one pixel of the image. For this reason, we don't generally count the input layer as part of the other layers. When we say 1-layer net, we actually mean a simple network with just a single layer, the output, in addition to the input layer.

Unlike the examples we've seen so far, the output layer can have more than one neuron. This is especially useful in classification, where each output neuron represents one class. For example, in the case of the Modified National Institute of Standards and Technology (MNIST) dataset, we'll have 10 output neurons, where each neuron corresponds to a digit from 0 to 9. In this way, we can use the 1-layer net to classify the digit in each image. We'll determine the digit by taking the output neuron with the highest activation value. If this is y_7, we'll know that the network thinks that the image shows the number 7.
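As a minimal sketch of this decision rule, suppose we already have the 10 output activations for one image (the values below are hypothetical, not from a trained network); the predicted digit is simply the index of the strongest neuron:

```python
import numpy as np

# Hypothetical activation values of the 10 output neurons (y_0..y_9)
# for a single input image; in a real network these would come from
# a forward pass.
activations = np.array([0.01, 0.02, 0.05, 0.03, 0.04,
                        0.02, 0.01, 0.70, 0.08, 0.04])

# The predicted class is the index of the neuron with the
# highest activation value.
predicted_digit = int(np.argmax(activations))
print(predicted_digit)  # -> 7
```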

In the following diagram, you can see the 1-layer feedforward network. In this case, we explicitly show the weights w for each connection between the neurons, but usually, the edges connecting neurons represent the weights implicitly. Weight w_ij connects the i-th input neuron with the j-th output neuron. The first input, 1, is the bias unit, and the weight, b_1, is the bias weight:

1-layer feedforward network

In the preceding diagram, we see the 1-layer neural network wherein the neurons on the left represent the input with bias b, the middle column represents the weights for each connection, and the neurons on the right represent the output given the weights w.

The neurons of one layer can be connected to the neurons of other layers, but not to other neurons of the same layer. In this case, the input neurons are connected only to the output neurons.
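The forward pass of such a 1-layer net can be sketched in a few lines of NumPy. This is an illustrative example, not code from the text: the sizes (784 inputs for a 28x28 MNIST image, 10 outputs) and the choice of a sigmoid activation are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(784)               # input layer: one pixel intensity per neuron
W = rng.random((784, 10)) * 0.01  # weight w_ij connects input i to output j
b = np.zeros(10)                  # bias weights, one per output neuron

def sigmoid(z):
    # Squashes each weighted sum into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

# Each output neuron computes the weighted sum of all inputs plus its
# bias, then applies the activation function.
y = sigmoid(x @ W + b)
print(y.shape)  # -> (10,)
```

Because no neuron connects to another neuron in its own layer, the whole layer reduces to a single matrix-vector product.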

But why do we need to organize the neurons in layers in the first place? One argument is that the neuron can convey limited information (just one value). But when we combine the neurons in layers, their outputs compose a vector and, instead of single activation, we can now consider the vector in its entirety. In this way, we can convey a lot more information, not only because the vector has multiple values, but also because the relative ratios between them carry additional information.
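A small numeric illustration of that last point (the vectors below are made up for the example): two output vectors can pick the same winning class while their relative ratios express very different degrees of confidence:

```python
import numpy as np

# Two hypothetical output vectors over three classes. Both pick
# class 1, but their relative ratios differ sharply.
a = np.array([0.25, 0.50, 0.25])   # weak preference for class 1
b = np.array([0.05, 0.90, 0.05])   # strong preference for class 1

# Normalizing each vector to proportions makes the extra
# information carried by the ratios explicit.
print(a / a.sum())
print(b / b.sum())

# Same winner either way, but very different confidence.
print(np.argmax(a), np.argmax(b))  # -> 1 1
```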
