
Defining feedforward networks

Deep feedforward networks, also called feedforward neural networks, are sometimes also referred to as Multilayer Perceptrons (MLPs). The goal of a feedforward network is to approximate some function f*. For example, for a classifier, y = f*(x) maps an input x to a label y. A feedforward network defines a mapping y = f(x; θ) and learns the value of the parameters θ that results in the best function approximation.
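
To make the idea of a parameterized mapping y = f(x; θ) concrete, here is a minimal sketch, not taken from the book, in which θ consists of a weight vector and a bias, and f is a simple logistic model standing in for a classifier. All names and values are illustrative assumptions.

```python
import numpy as np

# Sketch of a parameterized mapping y = f(x; theta).
# theta = (w, b); f maps an input vector x to a binary label y.
def f(x, theta):
    w, b = theta
    score = x @ w + b                    # linear score
    prob = 1.0 / (1.0 + np.exp(-score))  # squash score into (0, 1)
    return int(prob > 0.5)               # predicted label y

theta = (np.array([0.7, -1.2]), 0.1)     # hypothetical learned parameters
x = np.array([0.5, 0.3])
print(f(x, theta))                       # label assigned to x
```

Learning then amounts to adjusting θ so that f(x; θ) matches f*(x) as closely as possible on the training data.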

We discuss RNNs in Chapter 5, Recurrent Neural Networks. Feedforward networks are a conceptual stepping stone on the path to recurrent networks, which power many natural language applications. Feedforward neural networks are called networks because they are represented by composing together many different functions. These functions are composed in a directed acyclic graph.

The model is associated with a directed acyclic graph describing how the functions are composed together. For example, three functions f(1), f(2), and f(3) can be connected in a chain to form f(x) = f(3)(f(2)(f(1)(x))). These chain structures are the most commonly used structures of neural networks. In this case, f(1) is called the first layer of the network, f(2) is called the second layer, and so on. The overall length of the chain gives the depth of the model. It is from this terminology that the name deep learning arises. The final layer of a feedforward network is called the output layer.
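
The chain f(x) = f(3)(f(2)(f(1)(x))) can be sketched directly in code. The following is an illustrative example, not from the book; the layer sizes, random initialization, and ReLU activation are assumptions made only to show a depth-3 composition.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(in_dim, out_dim):
    """Build one layer function h -> relu(h @ W + b) with random parameters."""
    W = rng.normal(size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return lambda h: np.maximum(0.0, h @ W + b)

f1 = make_layer(4, 8)   # first layer
f2 = make_layer(8, 8)   # second layer
f3 = make_layer(8, 3)   # output layer

# The network is the composition of the three layer functions:
f = lambda x: f3(f2(f1(x)))

x = rng.normal(size=4)
print(f(x).shape)        # (3,) -- output of a depth-3 chain
```

The depth of this sketch is three: the number of layer functions composed between the input and the output.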

Figure: a chain of functions applied to an input x, composed to form a neural network

These networks are called neural because they are loosely inspired by neuroscience. Each hidden layer is typically a vector, and the dimensionality of these hidden layers determines the width of the model.
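
A short sketch of what width means in practice, with sizes chosen purely for illustration: the hidden layer below is a 128-dimensional vector, so that layer's width is 128.

```python
import numpy as np

x = np.ones(10)                       # 10-dimensional input
W1 = np.zeros((10, 128))              # weights mapping input to the hidden layer
b1 = np.zeros(128)
h1 = np.maximum(0.0, x @ W1 + b1)     # hidden layer is a vector
print(h1.shape)                       # (128,) -> this layer's width is 128
```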
