
Feature learning

To illustrate how deep learning works, let's consider the task of recognizing a simple geometric figure, for example, a cube, as seen in the following diagram. The cube is composed of edges (or lines), which intersect in vertices. Let's say that each possible point in three-dimensional space is associated with a neuron (forget for a moment that this would require an infinite number of neurons). All the points/neurons are in the first (input) layer of a multi-layer feed-forward network. An input point/neuron is active if the corresponding point lies on a line. The points/neurons that lie on a common line (edge) have strong positive connections to a single common edge/neuron in the next layer. Conversely, they have negative connections to all other neurons in that layer. The only exceptions are the neurons that lie on the vertices. Each such neuron lies on three edges simultaneously, and is connected to its three corresponding edge/neurons in the subsequent layer.

Now we have two layers with different levels of abstraction—the first for points and the second for edges. But this is not enough to encode a whole cube in the network. Let's add another layer for vertices. Here, each triplet of active edge/neurons in the second layer that forms a vertex has a strong positive connection to a single common vertex/neuron in the third layer. Since an edge of the cube connects two vertices, each edge/neuron will have positive connections to two vertices/neurons and negative connections to all others. Finally, we'll introduce the last hidden layer (cube). The eight vertices/neurons forming a cube will have positive connections to a single cube/neuron in the cube layer:

An abstraction of a neural network representing a cube. Different layers encode features with different levels of abstraction
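This hand-wired network can be made concrete in a few lines of NumPy. The following is a minimal sketch, not code from the book: it samples just three points per edge instead of covering all of space, stands in for the "strong positive" and "negative" connections with the weights +1.0 and -0.05, and uses a hard-threshold activation. All names and thresholds here are illustrative assumptions.

```python
import numpy as np

# The 8 vertices of the unit cube, as the corners of {0,1}^3
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])

# The 12 edges: pairs of vertices that differ in exactly one coordinate
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if (verts[i] != verts[j]).sum() == 1]

# Input layer: 3 sampled points per edge -> 36 point neurons.
# point_edge[p] records which edge point p lies on.
point_edge = [e for e in range(len(edges)) for _ in range(3)]
n_pts, n_edges, n_verts = len(point_edge), len(edges), len(verts)

POS, NEG = 1.0, -0.05  # strong positive vs. weak negative connections

# Points -> edges: positive to the edge a point lies on, negative elsewhere
W1 = np.full((n_edges, n_pts), NEG)
for p, e in enumerate(point_edge):
    W1[e, p] = POS

# Edges -> vertices: positive to the two vertices each edge touches
W2 = np.full((n_verts, n_edges), NEG)
for e, (i, j) in enumerate(edges):
    W2[i, e] = W2[j, e] = POS

# Vertices -> cube: all eight vertex neurons support the single cube neuron
w3 = np.full(n_verts, POS)

step = lambda z, t: (z >= t).astype(float)  # hard-threshold activation

x = np.ones(n_pts)                   # every sampled point lies on an edge
edge_act = step(W1 @ x, 1.0)         # all 12 edges fire (3 - 33*0.05 = 1.35)
vert_act = step(W2 @ edge_act, 2.0)  # all 8 vertices fire (3 - 9*0.05 = 2.55)
cube_act = step(w3 @ vert_act, 7.5)  # the cube neuron needs all 8 vertices
print(int(cube_act))                 # -> 1
```

Erasing the points of even a single edge breaks the chain: that edge/neuron stays below threshold, its two vertices are left with only two active incident edges each and fall silent, and the cube neuron no longer fires.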

The cube representation example is oversimplified, but we can draw several conclusions from it. One of them is that deep neural networks lend themselves well to hierarchically organized data. For example, an image consists of pixels, which form lines, edges, regions, and so on. The same is true for speech, where the building blocks are phonemes, and for text, where we have characters, words, and sentences.

In the preceding example, we dedicated layers to specific cube features deliberately, but in practice, we wouldn't do that. Instead, a deep network will "discover" features automatically during training. These features might not be immediately obvious and, in general, wouldn't be interpretable by humans. Also, we wouldn't know in advance which level of features each layer of the network encodes. Our example is more akin to classic machine learning algorithms, where the user has to rely on their own experience to select what they think are the best features. This process is called feature engineering, and it can be labor-intensive and time-consuming. Allowing a network to discover features automatically is not only easier; the features it learns are also highly abstract, which makes them less sensitive to noise. For example, human vision can recognize objects of different shapes and sizes, in different lighting conditions, and even when their view is partly obscured. We can recognize people with different haircuts and facial features, and even when they wear a hat or a scarf that covers their mouth. Similarly, the abstract features the network learns will help it recognize faces better, even in more challenging conditions.
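To make the contrast concrete, here is a small, purely illustrative example of feature engineering: a hand-picked Sobel filter that responds to vertical edges in an image. The filter choice, the toy image, and the helper function are assumptions made for this demo; a deep convolutional network would instead arrive at filters like this one, and far more abstract ones in deeper layers, on its own during training.

```python
import numpy as np

# Feature engineering: a practitioner hand-picks this 3x3 Sobel filter
# because they know it highlights vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(img, kernel):
    """Valid-mode 2-D cross-correlation; enough for this demo."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A toy 8x8 image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0

response = convolve2d(img, sobel_x)
print(response.max())  # 4.0 -> strong response along the vertical edge
```

A convolutional layer holds many such small filters as trainable weights, so gradient descent, rather than a human engineer, decides what each filter should detect.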
