
Hierarchical feature representation

The learnt features capture not only local relationships but also inter-relationships across the data as a whole; moreover, the representations are not just distributed, they are also hierarchically structured. The previous figure, Comparing deep and shallow architectures, compares the typical structure of shallow versus deep architectures: the shallow architecture has a flat topology with at most one hidden layer, whereas the deep architecture has many layers arranged hierarchically, with the outputs of lower layers composed to serve as inputs to the higher layers. The following figure uses a more concrete example to show what information has been learned through the layers of the hierarchy.

As shown in the image, the lower layers focus on edges or colors, while higher layers focus more on patches, curves, and shapes. Such a representation effectively captures part-and-whole relationships at various granularities and naturally lends itself to multi-task problems, for example, edge detection or part recognition. The lower layers often represent basic, fundamental information that can be reused across many distinct tasks in a wide variety of domains. For example, Deep Belief Networks have been successfully used to learn high-level structures in a wide variety of domains, including handwritten digits and human motion capture data. The hierarchical structure of the representation mimics the human understanding of concepts: simple concepts are learned first, and more complex concepts are then built up by composing the simpler ones. It also makes it easier to monitor what is being learnt and to guide the machine toward better subspaces. If we treat each neuron as a feature detector, then a deep architecture can be seen as consisting of feature detector units arranged in layers: lower layers detect simple features and feed into higher layers, which in turn detect more complex features. When a feature is detected, the responsible unit or units generate large activations, which later classifier stages can pick up as a good indicator that the class is present:

Illustration of hierarchical features learned from a deep learning algorithm. Image by Honglak Lee and colleagues, as published in Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations, 2009

The above figure illustrates that each unit can be thought of as a detector that tries to detect a particular feature (a blob, edge, nose, or eye) in the input image.
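The layer-by-layer composition described above can be sketched in a few lines of NumPy. This is a toy illustration, not the learned filters from the figure: the edge filters below are hand-set (in a real network they would be learned), and a second layer combines their activations into a "corner" detector, showing how a more complex feature is built from simpler ones.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D valid cross-correlation of img with kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Keep only positive activations, as a feature-detector unit would."""
    return np.maximum(x, 0.0)

# Layer 1: simple hand-set edge detectors (hypothetical weights;
# a trained network would learn these from data).
vertical_edge = np.array([[1., -1.],
                          [1., -1.]])
horizontal_edge = np.array([[1., 1.],
                            [-1., -1.]])

# A toy 5x5 image containing a bright square in the top-left.
img = np.zeros((5, 5))
img[:3, :3] = 1.0

# Layer 1 activations: each map responds to one edge orientation.
v_map = relu(conv2d_valid(img, vertical_edge))
h_map = relu(conv2d_valid(img, horizontal_edge))

# Layer 2: a "corner" detector that fires only where both a vertical
# and a horizontal edge are present -- a complex feature composed
# from the simpler features detected in the layer below.
corner_map = v_map * h_map
```

Here the corner detector produces a large activation at exactly one location, the bottom-right corner of the bright square, while each edge map alone fires along a whole edge. A later classifier stage could use that single strong activation as evidence that a corner is present.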
