- Hands-On Neural Networks
- Leonardo De Marchi, Laura Mitchell
How deep learning performs feature engineering
The theoretical advantage of neural networks is that they are universal approximators. The Universal Approximation Theorem states that a feed-forward network with a single hidden layer, a finite number of neurons, and some assumptions about the activation function can approximate any continuous function. However, the theorem does not tell us whether the parameters of such a network can be learned algorithmically.
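The theorem's flavor can be illustrated with a minimal sketch (not from the book): a single hidden layer of randomly initialized tanh units, where only the output weights are fitted by least squares, already approximates a smooth target function closely. The target sin(x), the number of hidden units, and the random weight scale below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on a compact interval
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# Single hidden layer of random tanh units (the theorem's setting:
# one hidden layer, finitely many neurons, non-linear activation).
n_hidden = 50
W1 = rng.normal(0, 2, (1, n_hidden))  # illustrative weight scale
b1 = rng.normal(0, 2, n_hidden)
H = np.tanh(X @ W1 + b1)              # hidden activations

# Fit only the linear output layer by least squares
W2, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = float(((H @ W2 - y) ** 2).mean())
print(f"approximation MSE: {mse:.2e}")
```

Even without training the hidden layer, the fit is very close, which shows that suitable weights exist; finding them for all layers by gradient descent is the separate, practical question the theorem leaves open.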
In practice, layers are added to the network to increase the non-linearity of the approximated function, and there is a lot of empirical evidence that the deeper the network and the more data we feed into it, the better the results. There are some caveats to this statement that we will see later in this book.
Nevertheless, some deep learning tasks still require feature engineering, natural language processing (NLP) being one example. In that case, feature engineering can be anything from dividing the text into small subsets, called n-grams, to producing a vectorized representation using, for example, word embeddings.
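As a small illustration of the n-gram idea (a sketch, not the book's code), the helper below splits a tokenized sentence into contiguous n-token subsequences and builds a simple bag-of-n-grams count vector over a fixed vocabulary. The sentence and vocabulary here are made up for the example.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token subsequences of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def vectorize(text, vocab, n=2):
    """Bag-of-n-grams count vector over a fixed vocabulary."""
    counts = Counter(ngrams(text.lower().split(), n))
    return [counts[g] for g in vocab]

sentence = "the cat sat on the mat"
bigrams = ngrams(sentence.split(), 2)
print(bigrams)

# Vocabulary built from this one sentence, for illustration only;
# in practice it would come from the whole training corpus.
vocab = sorted(set(bigrams))
print(vectorize(sentence, vocab))
```

A count vector like this is the simplest vectorized representation; word embeddings replace these sparse counts with dense, learned vectors.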