- R Deep Learning Essentials
- Mark Hodnett, Joshua F. Wiley
Neural networks as a network of memory cells
Another way to consider neural networks is to compare them to how humans think. As their name suggests, neural networks draw inspiration from neural processes and neurons in the brain. Neural networks contain a series of neurons, or nodes, which are interconnected and process input. The neurons have weights that are learned from previous observations (data). The output of a neuron is a function of its inputs and its weights. The activation of some final neuron(s) is the prediction.
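To make this concrete, a single neuron can be sketched in a few lines of R. This is a minimal illustration, not the book's own code: the weights and bias here are chosen by hand for the example, whereas in a real network they would be learned from data.

```r
# A single artificial neuron: apply an activation function (here, the
# sigmoid) to the weighted sum of the inputs plus a bias term.
neuron <- function(inputs, weights, bias) {
  z <- sum(inputs * weights) + bias
  1 / (1 + exp(-z))  # sigmoid squashes the output into (0, 1)
}

# Hand-picked weights for illustration; a learning algorithm would fit these.
neuron(c(1, 0), c(2, 2), -1)
```

The output is simply a number between 0 and 1; stacking many such units, and feeding the outputs of one layer into the next, gives a neural network.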
We will consider a hypothetical case where a small part of the brain is responsible for matching basic shapes, such as squares and circles. In this scenario, one set of neurons at the basic level fires for horizontal lines, another set fires for vertical lines, and yet another set fires for curved segments. These neurons feed into a higher-order process that combines their input to recognize more complex objects, for example, a square when the horizontal and vertical neurons are both activated simultaneously.
In the following diagram, the input data is represented as squares. These could be pixels in an image. The next layer of hidden neurons consists of neurons that recognize basic features, such as horizontal lines, vertical lines, or curved lines. Finally, the output may be a neuron that is activated by the simultaneous activation of two of the hidden neurons:

In this example, the first node in the hidden layer is good at matching horizontal lines, while the second node in the hidden layer is good at matching vertical lines. These nodes remember what these features are. If the outputs of these nodes are combined, more sophisticated objects can be detected. For example, if the hidden layer recognizes both horizontal lines and vertical lines, the object is more likely to be a square than a circle. This is similar to how convolutional neural networks work, which we will cover in Chapter 5, Image Classification Using Convolutional Neural Networks.
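The "combine two features into a shape" idea above can be sketched as a single output neuron that acts as an AND over the two hidden units. Again, this is an illustrative toy, with hand-picked weights and a hypothetical `square_neuron` name rather than anything learned:

```r
# A step activation: fires (1) only when the weighted sum is positive.
step_fn <- function(z) as.numeric(z > 0)

# Hypothetical "square detector": fires only when BOTH the horizontal-line
# and vertical-line hidden neurons are active. Weights of 1 each and a
# bias of -1.5 mean a single active feature is not enough to cross zero.
square_neuron <- function(horizontal, vertical) {
  step_fn(1 * horizontal + 1 * vertical - 1.5)
}

square_neuron(1, 1)  # both features active: fires
square_neuron(1, 0)  # horizontal only: does not fire
```

The choice of bias is what encodes the AND behavior: any value between 1 and 2 would work, since the weighted sum is 1 with one active feature and 2 with both.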
We have covered the theory behind neural networks very superficially here as we do not want to overwhelm you in the first chapter! In future chapters, we will cover some of these issues in more depth, but in the meantime, if you wish to get a deeper understanding of the theory behind neural networks, the following resources are recommended:
- Chapter 6 of Goodfellow, I., Bengio, Y., and Courville, A. (2016)
- Chapter 11 of Hastie, T., Tibshirani, R., and Friedman, J. (2009), which is freely available at https://web.stanford.edu/~hastie/Papers/ESLII.pdf
- Chapter 16 of Murphy, K. P. (2012)
Next, we will turn to a brief introduction to deep neural networks.