
Feature learning

To illustrate how deep learning works, let's consider the task of recognizing a simple geometric figure, for example, a cube, as seen in the following diagram. The cube is composed of edges (or lines), which intersect at vertices. Let's say that each possible point in three-dimensional space is associated with a neuron (forget for a moment that this would require an infinite number of neurons). All the points/neurons are in the first (input) layer of a multi-layer feed-forward network. An input point/neuron is active if the corresponding point lies on a line. The points/neurons that lie on a common line (edge) have strong positive connections to a single common edge/neuron in the next layer. Conversely, they have negative connections to all other neurons in that layer. The only exceptions are the neurons that lie on the vertices. Each such neuron lies on three edges simultaneously, and is connected to its three corresponding edge/neurons in the subsequent layer.
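To make this wiring concrete, here is a minimal sketch in plain NumPy. It replaces the infinite point layer with a small 2D pixel grid (one point/neuron per pixel) and, for brevity, keeps only the positive point-to-edge connections, with a hard threshold standing in for the suppressive negative connections described above. All names and dimensions here are illustrative choices, not the book's code.

```python
# A minimal sketch of the point -> edge wiring, assuming the infinite
# point layer is replaced by a small 2D pixel grid. Only the positive
# point-to-edge connections are kept; a hard threshold plays the
# suppressive role of the negative connections in the text.
import numpy as np

GRID = 8  # one point/neuron per pixel of an 8x8 grid

def points_on_segment(x0, y0, x1, y1):
    """Binary mask of the grid points lying on the given segment."""
    mask = np.zeros((GRID, GRID))
    for t in np.linspace(0.0, 1.0, 10 * GRID):
        mask[round(y0 + t * (y1 - y0)), round(x0 + t * (x1 - x0))] = 1.0
    return mask

# One edge/neuron per segment, wired with +1 weights from its points.
segments = [(0, 0, 7, 0), (0, 0, 0, 7), (0, 7, 7, 7), (7, 0, 7, 7)]
W = np.stack([points_on_segment(*s) for s in segments])  # (4, GRID, GRID)

def edge_layer(active_points):
    """An edge/neuron fires only if every point on its segment is active."""
    scores = (W * active_points).sum(axis=(1, 2))
    return (scores >= W.sum(axis=(1, 2))).astype(float)

perimeter = sum(points_on_segment(*s) for s in segments).clip(0.0, 1.0)
print(edge_layer(perimeter))               # [1. 1. 1. 1.] -- all four edges
print(edge_layer(np.zeros((GRID, GRID))))  # [0. 0. 0. 0.] -- nothing active
```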

Now we have two hidden layers with different levels of abstraction: the first for points and the second for edges. But this is not enough to encode a whole cube in the network, so let's try another layer for vertices. Here, every three active edge/neurons of the second layer that form a vertex have a strong positive connection to a single common vertex/neuron of the third layer. Since each edge of the cube connects two vertices, each edge/neuron will have positive connections to two vertex/neurons and negative connections to all others. Finally, we'll introduce the last hidden layer (cube). The eight vertex/neurons forming a cube will have positive connections to a single cube/neuron in the cube layer:

An abstraction of a neural network representing a cube. Different layers encode features with different levels of abstraction
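The whole hierarchy above can also be hand-wired directly. The following sketch skips the point layer and starts from the cube's 12 edge/neurons, connecting them to 8 vertex/neurons and a single cube/neuron with fixed positive weights and threshold activations (again standing in for the negative connections). It is a toy illustration under those assumptions, not a trained network.

```python
# A hand-wired sketch of the edge -> vertex -> cube hierarchy just
# described, starting from the cube's 12 edge/neurons. The weights
# and threshold activation are illustrative, not the book's code.
import itertools
import numpy as np

# Vertices of the unit cube as binary coordinate triples.
vertices = list(itertools.product([0, 1], repeat=3))

# Edges connect vertex pairs that differ in exactly one coordinate.
edges = [(a, b) for a, b in itertools.combinations(vertices, 2)
         if sum(x != y for x, y in zip(a, b)) == 1]
assert len(vertices) == 8 and len(edges) == 12

def step(x, threshold):
    """Binary threshold activation: fire iff the input reaches the threshold."""
    return (x >= threshold).astype(float)

# Edge -> vertex weights: +1 wherever an edge touches a vertex, so
# each vertex/neuron row has exactly three nonzero entries.
W_edge_to_vertex = np.array([[1.0 if v in e else 0.0 for e in edges]
                             for v in vertices])

# Vertex -> cube weights: one cube/neuron summing all eight vertices.
W_vertex_to_cube = np.ones((1, len(vertices)))

def detect_cube(edge_activations):
    # A vertex/neuron fires only when all three of its edges are active.
    vertex_activations = step(W_edge_to_vertex @ edge_activations, 3.0)
    # The cube/neuron fires only when all eight vertices are active.
    return step(W_vertex_to_cube @ vertex_activations, 8.0)

all_edges = np.ones(12)
print(detect_cube(all_edges))    # [1.] -- a complete cube
one_missing = all_edges.copy()
one_missing[0] = 0.0
print(detect_cube(one_missing))  # [0.] -- a single missing edge breaks it
```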

The cube representation example is oversimplified, but we can draw several conclusions from it. One of them is that deep neural networks lend themselves well to hierarchically organized data. For example, an image consists of pixels, which form lines, edges, regions, and so on. The same is true of speech, where the building blocks are called phonemes, and of text, where we have characters, words, and sentences.

In the preceding example, we dedicated layers to specific cube features deliberately, but in practice, we wouldn't do that. Instead, a deep network will "discover" features automatically during training. These features might not be immediately obvious and, in general, wouldn't be interpretable by humans. Nor would we know in advance which level of features each layer of the network encodes. Our example is more akin to classic machine learning algorithms, where practitioners have to use their own experience to select what they think are the best features. This process is called feature engineering, and it can be labor-intensive and time-consuming. Allowing a network to discover features automatically is not only easier, but those features are also highly abstract, which makes them less sensitive to noise. For example, human vision can recognize objects of different shapes and sizes, in different lighting conditions, and even when their view is partly obscured. We can recognize people with different haircuts and facial features, even when they wear a hat or a scarf that covers their mouth. Similarly, the abstract features the network learns will help it recognize faces better, even in more challenging conditions.
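As a contrast to the hand-wired cube, here is a minimal sketch of automatic feature discovery, assuming plain NumPy and the classic XOR problem: the network receives only raw inputs and targets, and its hidden neurons learn intermediate features on their own during training. The architecture and hyperparameters are illustrative choices, not the book's code.

```python
# A minimal sketch of feature learning on XOR: nothing here is
# hand-engineered; the hidden layer discovers its own features.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

HIDDEN = 4  # 2 inputs -> 4 hidden features -> 1 output
W1, b1 = rng.normal(size=(2, HIDDEN)), np.zeros(HIDDEN)
W2, b2 = rng.normal(size=(HIDDEN, 1)), np.zeros(1)

lr = 1.0
for _ in range(10_000):
    # Forward pass: the hidden layer computes the learned "features".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
print(np.round(h, 2))            # hidden features, one row per input
```

Nobody told the network which intermediate features to compute; the hidden activations printed at the end are whatever input combinations training found useful, and they are typically not the features a human would have engineered.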
