- Python Deep Learning
- Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca
Deep networks
We could define deep learning as a class of machine learning techniques in which information is processed in hierarchical layers, so that representations and features of the data are learned at increasing levels of complexity. In practice, all deep learning algorithms are neural networks, which share some common basic properties: they all consist of interconnected neurons organized in layers. Where they differ is in network architecture (the way the neurons are organized in the network) and, sometimes, in the way they are trained. With that in mind, let's look at the main classes of neural networks. The following list is not exhaustive, but it represents the vast majority of algorithms in use today:
- Multi-layer perceptrons (MLPs): A neural network with feed-forward propagation, fully-connected layers, and at least one hidden layer. We introduced MLPs in Chapter 2, Neural Networks.
- Convolutional neural networks (CNNs): A CNN is a feedforward neural network with several types of special layers. For example, convolutional layers apply a filter to the input image (or sound) by sliding that filter all across the incoming signal, to produce an n-dimensional activation map. There is some evidence that neurons in CNNs are organized similarly to how biological cells are organized in the visual cortex of the brain. We've mentioned CNNs several times up to now, and that's not a coincidence – today, they outperform all other ML algorithms on a large number of computer vision and NLP tasks.
- Recurrent networks: This type of network has an internal state (or memory), which is based on all or part of the input data already fed to the network. The output of a recurrent network is a combination of its internal state (memory of inputs) and the latest input sample. At the same time, the internal state changes, to incorporate newly input data. Because of these properties, recurrent networks are good candidates for tasks that work on sequential data, such as text or time-series data. We'll discuss recurrent networks in Chapter 7, Recurrent Neural Networks and Language Models.
- Autoencoders: A class of unsupervised learning algorithms in which the output has the same shape as the input, which allows the network to learn basic representations of the data. We'll discuss autoencoders when we talk about generative deep learning, in Chapter 6, Generating Images with GANs and VAEs.
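To make two of these ideas concrete, the following is a minimal NumPy sketch (not taken from the book's code): first, the forward pass of a tiny MLP with one fully connected hidden layer and sigmoid activations; second, the "sliding filter" of a convolutional layer, shown in 1-D with `np.convolve`. The layer sizes, random weights, and kernel values are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# --- MLP forward pass: 3 inputs -> 4 hidden units -> 2 outputs ---
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))    # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 2))    # hidden -> output weights
b2 = np.zeros(2)

x = rng.standard_normal(3)          # one input sample
hidden = sigmoid(x @ W1 + b1)       # hidden layer: affine transform + nonlinearity
output = sigmoid(hidden @ W2 + b2)  # output layer
print(output.shape)                 # (2,)

# --- Convolution in 1-D: slide a filter across the signal ---
signal = np.array([0., 1., 2., 3., 4.])
kernel = np.array([1., -1.])        # a simple difference (edge-detecting) filter
activation_map = np.convolve(signal, kernel, mode='valid')
print(activation_map)               # [1. 1. 1. 1.]
```

The same sliding-filter operation generalizes to 2-D inputs (images), where the kernel is slid across both spatial dimensions to produce an activation map, as described in the CNN bullet above.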