
  • Deep Learning Essentials
  • Wei Di Anurag Bhardwaj Jianing Wei

Hierarchical feature representation

Not only are the learnt features distributed; the representations are also hierarchically structured, capturing both local relationships and relationships across the data as a whole. The previous figure, Comparing deep and shallow architecture, compares the typical structure of shallow versus deep architectures. It can be seen that the shallow architecture has a flatter topology, often with at most one layer, whereas the deep architecture has many hierarchically organized layers, with the outputs of lower layers composed to serve as input to the higher layers. The following figure uses a more concrete example to show what information is learned through the layers of the hierarchy.

As shown in the image, the lower layers focus on edges or colors, while higher layers often focus more on patches, curves, and shapes. Such a representation effectively captures part-and-whole relationships at various levels of granularity and naturally addresses multi-task problems, for example, edge detection or part recognition. The lower layers often represent basic and fundamental information that can be reused across many distinct tasks in a wide variety of domains. For example, Deep Belief Networks have been successfully used to learn high-level structures in a wide variety of domains, including handwritten digits and human motion capture data. The hierarchical structure of the representation mimics the human understanding of concepts, that is, learning simple concepts first and then successively building up more complex concepts by composing the simpler ones. It also makes it easier to monitor what is being learnt and to guide the machine towards better subspaces. If one treats each neuron as a feature detector, then deep architectures can be seen as consisting of feature detector units arranged in layers. Lower layers detect simple features and feed into higher layers, which in turn detect more complex features. If a feature is detected, the responsible unit or units generate large activations, which can be picked up by the later classifier stages as a good indicator that the class is present:

Illustration of hierarchical features learned by a deep learning algorithm. Image by Honglak Lee and colleagues, as published in Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations, 2009

The above figure illustrates that each feature can be thought of as a detector, which tries to detect a particular feature (a blob, edge, nose, or eye) in the input image.
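The idea of a low-level feature detector can be sketched in a few lines of code. The example below is a minimal illustration, not taken from the book: it convolves a tiny image containing a vertical edge with a hand-crafted edge kernel. In a deep network such kernels are learned in the lowest layers rather than designed by hand, and their outputs feed into higher layers that detect more complex patterns; here the hand-written `conv2d` helper and the example image are assumptions chosen purely for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2-D cross-correlation, as used by most deep learning libraries.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge: dark on the left, bright on the right.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A hand-crafted vertical-edge kernel; a trained network would learn
# similar filters in its lowest layer.
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

response = conv2d(image, edge_kernel)
print(response)  # large activations appear only in the column where the edge lies
```

The unit "fires" (produces a large activation, 2.0) exactly where the edge is and stays at zero everywhere else, which is the behaviour described above: a detected feature yields a strong activation that later stages can use as evidence.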
