
The need for Dropout

Now, let's return to our much-needed discussion of dropout. In deep learning, we use dropout to randomly cut network connections between layers during each training iteration. The following diagram shows one iteration of dropout being applied to three network layers:



Before and after dropout

The important thing to understand is that the same connections are not cut every time; a fresh random set is dropped on each iteration. This prevents the network from becoming overly specialized and encourages it to generalize. Generalizing a model is a common theme in deep learning, and we often do this so our models can learn a broader set of problems more quickly. Of course, there may be times when this generalization limits a network's ability to learn.

If we go back to the previous sample now and look at the code, we see a Dropout layer being used like so:

x = Dropout(.5)(x)

That simple line of code tells the network to drop out, or disconnect, 50% of that layer's connections at random during each training iteration. Dropout is disabled at inference time, so the full network is used when making predictions. Dropout is most commonly applied between fully connected layers (Input -> Dense -> Dense), where it is very useful as a way of improving performance or accuracy. This may or may not account for some of the improved performance from the previous example.
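To make the masking idea concrete, here is a minimal NumPy sketch of how a dropout layer behaves. This is an illustration of the technique, not the actual Keras internals; the function name and signature are our own. It uses the common "inverted dropout" variant, where surviving activations are scaled up during training so that no rescaling is needed at inference time:

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """A sketch of inverted dropout (illustrative, not the Keras implementation).

    During training, a random fraction `rate` of activations is zeroed,
    and survivors are scaled by 1/(1 - rate) so the expected output
    magnitude is unchanged. At inference, the input passes through as-is.
    """
    if not training:
        return x
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob  # True = keep this unit
    return x * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones((4, 8))
y = dropout(x, rate=0.5, training=True, rng=rng)
# Each activation is now either 0 (dropped) or 2.0 (kept and rescaled).
```

Because a new mask is drawn on every call, each training iteration effectively trains a slightly different sub-network, which is where the regularizing effect comes from.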

In the next section, we will look at how deep learning mimics the memory sub-process, or temporal sense.
