
Convolutional and Recurrent Networks

The human brain is often the main inspiration and point of comparison when building AI, and deep learning researchers frequently look to it for inspiration or reassurance. By studying the brain and its parts in more detail, we often discover neural sub-processes. One example of a neural sub-process is the visual cortex, the region of the brain responsible for vision. We now understand that this area of the brain is wired differently and responds differently to input than other areas, which happens to be analogous to what we have found in our previous attempts at using neural networks to classify images. The human brain has many sub-processes, each with a specifically mapped area in the brain (sight, hearing, smell, speech, taste, touch, and memory/temporal), but in this chapter we will look at how to model just sight and memory using advanced forms of deep learning: convolutional and recurrent networks. These two core sub-processes are used extensively by us for many tasks, including gaming, and form the focus of much deep learning research.

Researchers often look to the brain for inspiration, but the computer models they build rarely resemble their biological counterparts exactly. However, researchers have begun to identify close analogs to neural network components inside our brains. One example is the ReLU activation function: it was recently found that the firing response of neurons in our brains, when plotted, closely matches a ReLU graph.
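For reference, the ReLU function itself is very simple: it passes positive inputs through unchanged and clamps everything else to zero. A minimal sketch in Python (NumPy is assumed here; this is an illustration, not code from this chapter's samples):

```python
import numpy as np

def relu(x):
    # ReLU: output the input when positive, zero otherwise
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
# → [0.  0.  0.  1.5 3. ]
```

This piecewise-linear shape is what makes ReLU cheap to compute and is the curve that the neuron firing measurements were found to resemble.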

In this chapter, we will explore convolutional and recurrent neural networks in some detail and look at how they address the problems of replicating vision and memory in deep learning. These two network or layer types are fairly recent developments, but they have been responsible in part for many advances in deep learning. This chapter will cover the following topics:

  • Convolutional neural networks
  • Understanding convolution
  • Building a self-driving CNN
  • Memory and recurrent networks
  • Playing rock, paper, scissors with LSTMs
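As a preview of the convolution operation covered in this chapter, the following minimal sketch shows the core idea: a small kernel slides over an image, and at each position the element-wise products are summed to produce one output value. This is an illustrative example (NumPy is assumed), and as in most deep learning libraries, the kernel is applied without flipping (strictly speaking, cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1): slide the kernel
    over the image and sum the element-wise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple vertical-edge detector applied to a tiny image:
# the left half is dark (0), the right half is bright (1)
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)
print(conv2d(image, kernel))
# → [[ 0. -2.  0.]
#    [ 0. -2.  0.]
#    [ 0. -2.  0.]]
```

The non-zero column in the output marks exactly where the dark-to-bright edge occurs, which is the intuition behind how convolutional layers learn to detect visual features.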

Be sure you understand the fundamentals outlined in the previous chapter reasonably well before proceeding. This includes running the code samples, which install this chapter's required dependencies.
