
Understanding the perceptron

First, we need to understand the basics of neural networks. A neural network consists of one or more layers of neurons, named after the biological neurons in the human brain. We will demonstrate the mechanics of a single neuron by implementing a perceptron. In a perceptron, a single unit (neuron) performs all the computations. Later, we will scale the number of units to create deep neural networks:

Figure 2.1: Perceptron

A perceptron can have multiple inputs. On these inputs, the unit performs some computations and outputs a single value, for example, a binary value for classifying two classes. The computation performed by the unit is an element-wise multiplication of the inputs and the weights; the resulting values are summed up and a bias is added:

z = w₁x₁ + w₂x₂ + … + wₙxₙ + b

These computations can easily be scaled to high-dimensional input. An activation function (φ) determines the final output of the perceptron in the forward pass:

y = φ(z) = φ(w₁x₁ + w₂x₂ + … + wₙxₙ + b)

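In NumPy, this forward pass can be sketched as follows; the function names and the threshold of 0 are illustrative choices, not part of the recipe itself:

```python
import numpy as np

def unit_step(z, threshold=0.0):
    # Activation function: 1 if the weighted sum exceeds the threshold, else 0
    return np.where(z > threshold, 1, 0)

def forward(x, weights, bias):
    # Element-wise multiplication of inputs and weights, summed (a dot
    # product), plus the bias
    z = np.dot(x, weights) + bias
    return unit_step(z)

x = np.array([1.0, 0.5])
weights = np.array([0.4, -0.2])
bias = 0.1
print(forward(x, weights, bias))  # z = 0.4 > 0, so the output is 1
```

Because `np.dot` generalizes to matrices, the same function also scales to batches of high-dimensional inputs without changes.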
The weights and bias are randomly initialized. After each epoch (iteration over the training data), the weights are updated based on the error (the difference between the desired output and the actual output), multiplied by the input and the learning rate. As a consequence, the weights are adjusted to fit the training data (backward pass) and the accuracy of the output improves. Basically, the perceptron computes a linear combination of its inputs, optimized on the training data. As an activation function, we will use a unit step function: if the weighted sum is above a certain threshold, the output is activated (hence a 0 versus 1 binary classifier). A perceptron is able to classify classes with 100% accuracy if the classes are linearly separable. In the next recipe, we will show you how to implement a perceptron with NumPy.
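As a preview of the NumPy implementation, the training procedure described above might be sketched as follows; the toy OR dataset, learning rate, epoch count, and random seed are illustrative assumptions:

```python
import numpy as np

def train_perceptron(X, y, learning_rate=0.1, epochs=20):
    # Randomly initialize the weights and bias
    rng = np.random.default_rng(seed=0)
    weights = rng.normal(size=X.shape[1])
    bias = rng.normal()
    for _ in range(epochs):
        for xi, target in zip(X, y):
            # Forward pass: weighted sum plus bias, then unit step activation
            output = 1 if np.dot(xi, weights) + bias > 0 else 0
            # Backward pass: update the weights and bias by the error
            # multiplied by the input and the learning rate
            error = target - output
            weights += learning_rate * error * xi
            bias += learning_rate * error
    return weights, bias

# A toy, linearly separable dataset (logical OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
weights, bias = train_perceptron(X, y)
predictions = [1 if np.dot(xi, weights) + bias > 0 else 0 for xi in X]
print(predictions)
```

Because OR is linearly separable, the update rule converges to weights that classify all four points correctly; on a dataset like XOR, which is not linearly separable, this loop would never converge.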
