
Problems in training the perceptron and a solution

Let's consider a single neuron; what are the best choices for the weight w and the bias b? Ideally, we would like to provide a set of training examples and let the computer adjust the weight and the bias so that the errors produced in the output are minimized. To make this a bit more concrete, let's suppose we have a set of images of cats and another separate set of images not containing cats. For the sake of simplicity, assume that each neuron looks at a single input pixel value. While the computer processes these images, we would like our neuron to adjust its weight and bias so that fewer and fewer images are wrongly recognized as non-cats. This approach seems very intuitive, but it requires that a small change in weights (and/or bias) causes only a small change in outputs.
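The single neuron described above can be sketched in a few lines of code. The function name and the particular weight and bias values here are illustrative choices, not taken from the text:

```python
# A minimal single-neuron perceptron: one input pixel value,
# one weight w, one bias b, and a hard threshold at zero.

def perceptron(x, w, b):
    """Fire (output 1) when the weighted input plus the bias is positive."""
    return 1 if w * x + b > 0 else 0

# An input pixel value in [0, 1], with arbitrary weight and bias:
print(perceptron(0.7, w=2.0, b=-1.0))  # 2.0*0.7 - 1.0 = 0.4 > 0, outputs 1
print(perceptron(0.3, w=2.0, b=-1.0))  # 2.0*0.3 - 1.0 = -0.4 <= 0, outputs 0
```

Training would then mean adjusting `w` and `b` until the outputs match the desired labels on the training examples.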

If the output can jump abruptly, we cannot learn progressively; we would be reduced to trying changes in all possible directions (a process known as exhaustive search) without knowing whether we are improving. After all, kids learn little by little. Unfortunately, the perceptron does not show this little-by-little behavior. Its output is either 0 or 1, and that big jump will not help it learn, as shown in the following graph:
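This jump can also be demonstrated numerically. In the sketch below (the setup and values are illustrative), a tiny change in the weight near the threshold flips the output completely, so the "small change in, small change out" property fails:

```python
def perceptron(x, w, b):
    """Hard-threshold neuron: 1 if the weighted sum is positive, else 0."""
    return 1 if w * x + b > 0 else 0

# Near the decision threshold, a weight change of just 0.002
# flips the output from 0 all the way to 1:
x, b = 1.0, -1.0
print(perceptron(x, 0.999, b))  # 0.999 - 1.0 <= 0, outputs 0
print(perceptron(x, 1.001, b))  # 1.001 - 1.0 > 0, outputs 1
```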

We need something different, something smoother. We need a function that changes progressively from 0 to 1, with no discontinuity. Mathematically, this means we need a continuous function whose derivative we can compute.
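One standard function with exactly these properties is the sigmoid, σ(z) = 1 / (1 + e^(−z)); the sketch below is illustrative of the idea, not code from the text:

```python
import math

def sigmoid(z):
    """A continuous function rising smoothly from 0 to 1."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # The derivative has the convenient closed form sigma(z) * (1 - sigma(z)).
    s = sigmoid(z)
    return s * (1.0 - s)

# Small input changes now produce small output changes -- no jump:
print(sigmoid(-0.001), sigmoid(0.001))  # both close to 0.5
print(sigmoid_prime(0.0))               # maximum slope, 0.25
```

Because the derivative exists everywhere, small adjustments to the weights and bias translate into small, predictable changes in the output, which is exactly what progressive learning requires.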
