
Summary

In this chapter, we explored the foundations of DL, from the basic single perceptron to more complex multilayer perceptron models. We started with the past, present, and future of DL and, from there, built a basic reference implementation of a single perceptron so that we could appreciate the raw simplicity of DL. We then built on that knowledge by combining multiple perceptrons into a multilayer implementation using TF. Using TF allowed us to see how a raw internal model is represented and trained on a much more complex dataset, MNIST. Next, we took a long journey through the math and, although Keras abstracted much of the complexity away from us, we took an in-depth look at how gradient descent and backpropagation work. Finally, we finished the chapter with another Keras reference implementation that featured an autoencoder. Autoencoding allows us to train a network for multiple purposes and extends our understanding of how network architecture doesn't have to be linear.
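As a quick reminder of just how simple the single perceptron is, here is a minimal sketch in plain Python (not the chapter's reference implementation; the hyperparameters and the AND-gate dataset are illustrative choices). It uses the classic perceptron learning rule with a step activation to learn a linearly separable function:

```python
import random

def perceptron_train(samples, epochs=20, lr=0.1, seed=0):
    """Train a single perceptron with a step activation.

    samples: list of ((x1, x2), target) pairs with targets in {0, 1}.
    Returns the learned weights [w1, w2] and bias.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire only if the weighted sum crosses 0
            y = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - y
            # Perceptron update rule: nudge each weight toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Learn the logical AND function, which is linearly separable
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(and_gate)
```

A single perceptron can only learn linearly separable functions like AND; it is exactly this limitation that motivates the multilayer models covered later in the chapter.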

In the next chapter, we will build on our current level of knowledge and discover convolutional and recurrent neural networks. These extensions provide additional capabilities to the base form of a neural network and have played a significant part in our most recent DL advances.

After that, we will begin our journey into building components for games when we look at another element considered foundational to DL: the GAN. GANs are like a Swiss Army knife in DL and, as we will see, they offer us plenty of uses.
