
Summary

In this chapter, we explored the foundations of DL, from the basics of the simple single perceptron to more complex multilayer perceptron models. We started with the past, present, and future of DL and, from there, built a basic reference implementation of a single perceptron so that we could understand the raw simplicity of DL. Then we built on our knowledge by adding more perceptrons in a multilayer implementation using TF. Using TF allowed us to see how a raw internal model is represented and trained with a much more complex dataset, MNIST. We then took a long journey through the math, and although much of the complex math was abstracted away from us by Keras, we took an in-depth look at how gradient descent and backpropagation work. Finally, we finished off the chapter with another reference implementation in Keras that featured an autoencoder. Autoencoding allows us to train a network with multiple purposes and extends our understanding of how network architecture doesn't have to be linear.
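To recall the raw simplicity of the single perceptron mentioned above, here is a minimal training sketch in plain Python. This is an illustrative example only, not the chapter's exact reference implementation; the AND truth table, learning rate, and epoch count are assumptions chosen to keep the example self-contained:

```python
# Minimal single-perceptron sketch (illustrative; the chapter's actual
# reference implementation may differ). Learns the logical AND function.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in data:
            # Step activation: fire only if the weighted sum exceeds zero
            output = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            error = target - output
            # Perceptron learning rule: nudge weights toward the target
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# AND truth table: ((input1, input2), expected output)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0 for x, _ in data]
print(preds)  # -> [0, 0, 0, 1]
```

A single perceptron like this can only learn linearly separable functions, which is exactly the limitation that motivates the multilayer models and the TF implementation covered in this chapter.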

In the next chapter, we will build on our current level of knowledge and discover convolutional and recurrent neural networks. These extensions provide additional capabilities to the base form of a neural network and have played a significant part in our most recent DL advances.

After that, we will begin our journey into building components for games when we look at another element considered foundational to DL: the GAN. GANs are like a Swiss Army knife in DL and, as we will see, they offer us plenty of uses.
