Problems in training the perceptron and a solution

Let's consider a single neuron; what are the best choices for the weight w and the bias b? Ideally, we would like to provide a set of training examples and let the computer adjust the weight and the bias in such a way that the errors produced in the output are minimized. In order to make this a bit more concrete, let's suppose we have a set of images of cats and another separate set of images not containing cats. For the sake of simplicity, assume that each neuron looks at a single input pixel value. While the computer processes these images, we would like our neuron to adjust its weights and bias so that we have fewer and fewer images wrongly recognized as non-cats. This approach seems very intuitive, but it requires that a small change in weights (and/or bias) causes only a small change in outputs.
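To make this objective tangible, consider the following toy sketch in plain Python (the pixel values, the labels, and the two candidate settings of w and b are all hypothetical, chosen only for illustration). It counts how many training images a given weight and bias misclassify; learning would mean adjusting w and b to push this count down:

# Hypothetical single-pixel training set: (pixel value, label), where 1 = cat.
examples = [(0.9, 1), (0.8, 1), (0.7, 1), (0.3, 0), (0.2, 0), (0.1, 0)]

def predict(x, w, b):
    # Perceptron decision rule: fire (1, "cat") when w*x + b is positive.
    return 1 if w * x + b > 0 else 0

def errors(w, b):
    # Number of training images this (w, b) gets wrong.
    return sum(predict(x, w, b) != label for x, label in examples)

print(errors(w=1.0, b=-0.25))  # 1 image misclassified
print(errors(w=1.0, b=-0.50))  # 0 -- a small adjustment of the bias fixed it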

If the output jumps abruptly, we cannot learn progressively; we would be reduced to trying changes in every possible direction (a process known as exhaustive search) without knowing whether we are improving. After all, kids learn little by little. Unfortunately, the perceptron does not show this little-by-little behavior: its output is either 0 or 1, and that abrupt jump does not help it learn, as the step-shaped graph of its output makes clear.
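To see the jump numerically, here is a minimal sketch (plain Python; the pixel value, weight, and bias are made-up numbers) showing how an arbitrarily small change in the weight can flip the output from 0 to 1, with nothing in between:

def perceptron(x, w, b):
    # Single-neuron perceptron: output 1 if w*x + b > 0, else 0.
    return 1 if w * x + b > 0 else 0

x, b = 0.5, -0.1                  # hypothetical pixel value and bias
for w in (0.1999, 0.2001):        # two weights differing by only 0.0002
    print(f"w = {w}: output = {perceptron(x, w, b)}")
# w = 0.1999: output = 0
# w = 0.2001: output = 1  (a tiny change in w produces a full 0 -> 1 jump)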

We need something different, something smoother: a function that progressively changes from 0 to 1, with no discontinuity. Mathematically, this means we need a continuous function whose derivative we can compute at every point, so that a small change in the weights produces a small, predictable change in the output.
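A classic function with exactly these properties is the sigmoid, σ(x) = 1/(1 + e^(-x)). The following sketch (plain Python; the sample points are arbitrary) shows that its output climbs gradually from 0 to 1 and that its slope is defined everywhere, which is what lets small weight updates produce small output changes:

import math

def sigmoid(x):
    # Continuous transition from 0 to 1, with no jump anywhere.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_slope(x):
    # The derivative exists at every point: sigma'(x) = sigma(x) * (1 - sigma(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

for x in (-4, -1, 0, 1, 4):       # arbitrary sample points
    print(f"x = {x:+d}: value = {sigmoid(x):.4f}, slope = {sigmoid_slope(x):.4f}")
# value = 0.0180, 0.2689, 0.5000, 0.7311, 0.9820 -- a gradual rise, never a cliff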