Neural networks as a network of memory cells
Another way to think about neural networks is to compare them to how humans think. As their name suggests, neural networks draw inspiration from the neurons and neural processes of the brain. A neural network contains a series of interconnected neurons, or nodes, that process input. Each neuron has weights that are learned from previous observations (data), and its output is a function of its inputs and those weights. The activation of some final neuron(s) is the prediction.
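To make this concrete, here is a minimal R sketch of a single neuron; it is not code from the book, and the inputs, weights, and bias are made-up toy values. The neuron computes a weighted sum of its inputs and passes it through a sigmoid activation function:

```r
# Activation function: squashes any real number into the range (0, 1)
sigmoid <- function(z) 1 / (1 + exp(-z))

# A single neuron: its output is a function of its inputs and its weights
neuron <- function(inputs, weights, bias) {
  sigmoid(sum(inputs * weights) + bias)
}

# Toy values; in a trained network the weights would be learned from data
inputs  <- c(0.5, 0.9, -0.3)
weights <- c(0.8, -0.2, 0.4)
neuron(inputs, weights, bias = 0.1)  # the neuron's activation
```

A network is simply many such neurons chained together, with the outputs of one layer becoming the inputs of the next.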
We will consider a hypothetical case where a small part of the brain is responsible for matching basic shapes, such as squares and circles. In this scenario, one set of neurons at the basic level fires for horizontal lines, another set fires for vertical lines, and yet another set fires for curved segments. These neurons feed into a higher-order process that combines their input to recognize more complex objects, for example, a square when the horizontal and vertical neurons are both activated simultaneously.
In the following diagram, the input data is represented as squares. These could be pixels in an image. The next layer consists of hidden neurons that recognize basic features, such as horizontal, vertical, or curved lines. Finally, the output may be a neuron that is activated by the simultaneous activation of two of the hidden neurons:

In this example, the first node in the hidden layer is good at matching horizontal lines, while the second node in the hidden layer is good at matching vertical lines. These nodes remember what those features are. If the outputs of these nodes are combined, more sophisticated objects can be detected. For example, if the hidden layer recognizes both horizontal lines and vertical lines, the object is more likely to be a square than a circle. This is similar to how convolutional neural networks work, which we will cover in Chapter 5, Image Classification Using Convolutional Neural Networks.
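As a rough illustration of this idea, the following R sketch wires such a network up by hand; it is not a trained model, and the pixel pattern, weights, and thresholds are chosen purely for illustration. One hidden node responds to a horizontal line of pixels, another responds to a vertical line, and the output node activates only when both hidden nodes fire:

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

# A 3x3 binary "image" of a square outline, flattened row by row
square <- c(1, 1, 1,
            1, 0, 1,
            1, 1, 1)

# Hidden node 1: large weights on the top row, so it fires for a horizontal line
w_horizontal <- c(4, 4, 4,  0, 0, 0,  0, 0, 0)
# Hidden node 2: large weights on the left column, so it fires for a vertical line
w_vertical   <- c(4, 0, 0,  4, 0, 0,  4, 0, 0)

h1 <- sigmoid(sum(square * w_horizontal) - 10)  # close to 1 if the top row is lit
h2 <- sigmoid(sum(square * w_vertical)   - 10)  # close to 1 if the left column is lit

# Output node: activates only when both hidden nodes are active ("square detected")
output <- sigmoid(10 * h1 + 10 * h2 - 15)
round(c(horizontal = h1, vertical = h2, square = output), 3)
```

Feeding in an image that contains only a vertical line would activate the second hidden node but not the first, so the output node would stay close to zero. In a real network, of course, these feature detectors are learned from data rather than wired by hand.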
We have covered the theory behind neural networks very superficially here as we do not want to overwhelm you in the first chapter! In future chapters, we will cover some of these issues in more depth, but in the meantime, if you wish to get a deeper understanding of the theory behind neural networks, the following resources are recommended:
- Chapter 6 of Goodfellow, I., Bengio, Y., and Courville, A. (2016)
- Chapter 11 of Hastie, T., Tibshirani, R., and Friedman, J. (2009), which is freely available at https://web.stanford.edu/~hastie/Papers/ESLII.pdf
- Chapter 16 of Murphy, K. P. (2012)
Next, we will turn to a brief introduction to deep neural networks.