
Understanding the perceptron

First, we need to understand the basics of neural networks. A neural network consists of one or more layers of neurons, named after the biological neurons in the human brain. We will demonstrate the mechanics of a single neuron by implementing a perceptron. In a perceptron, a single unit (neuron) performs all the computations. Later, we will scale up the number of units to create deep neural networks:

Figure 2.1: Perceptron

A perceptron can have multiple inputs. From these inputs, the unit computes a single output value, for example a binary value to classify two classes. The computation performed by the unit is a weighted sum: each input is multiplied by its corresponding weight, the products are summed up, and a bias is added:

z = w1·x1 + w2·x2 + … + wn·xn + b

These computations can easily be scaled to high-dimensional input. An activation function (φ) determines the final output of the perceptron in the forward pass:

ŷ = φ(z) = φ(w1·x1 + w2·x2 + … + wn·xn + b)
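A minimal sketch of this forward pass with NumPy, assuming three inputs; the input values and the random seed are illustrative, not from the text:

```python
import numpy as np

# Hypothetical example: three inputs; weights are randomly initialized
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])  # input vector (illustrative values)
w = rng.normal(size=3)          # randomly initialized weights
b = 0.1                         # bias (illustrative value)

def step(z):
    """Unit step activation: returns 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

z = np.dot(w, x) + b  # weighted sum of inputs and weights, plus bias
y = step(z)           # forward pass output: a binary class label
```

The dot product replaces the explicit sum over the individual `w_i * x_i` terms, which is what lets the same code scale to high-dimensional input.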

The weights and bias are randomly initialized. After each epoch (an iteration over the training data), the weights are updated based on the difference between the output and the desired output (the error), multiplied by the learning rate. As a consequence, the weights are adjusted towards the training data (backward pass) and the accuracy of the output improves. Basically, the perceptron is a linear combination of its inputs optimized on the training data. As an activation function we will use a unit step function: if the weighted sum is above a certain threshold, the output is activated (hence a 0 versus 1 binary classifier). A perceptron is able to classify classes with 100% accuracy if the classes are linearly separable. In the next recipe, we will show you how to implement a perceptron with NumPy.
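The update rule described above can be sketched as follows. This is a toy illustration, not the next recipe's implementation: the logical OR dataset, the seed, the learning rate, and the epoch count are all assumptions chosen so the classes are linearly separable:

```python
import numpy as np

# Toy linearly separable data: the logical OR function (illustrative choice)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=2)  # randomly initialized weights
b = 0.0                           # bias
lr = 0.1                          # learning rate (assumed value)

for epoch in range(20):           # iterate over the training data
    for xi, target in zip(X, y):
        output = 1 if np.dot(w, xi) + b >= 0 else 0  # forward pass
        error = target - output                      # desired minus actual output
        w += lr * error * xi   # update weights towards the training data
        b += lr * error        # update bias the same way

preds = [1 if np.dot(w, xi) + b >= 0 else 0 for xi in X]
```

Because OR is linearly separable, the perceptron convergence theorem guarantees the loop above reaches 100% training accuracy; on a non-separable problem such as XOR, no choice of weights and bias would achieve this.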
