Hands-On Deep Learning for Games
The number of applications of deep learning and neural networks has multiplied in the last couple of years. Neural nets have enabled significant breakthroughs in everything from computer vision, voice generation, and voice recognition to self-driving cars. Game development is also a key area where these techniques are being applied. This book gives an in-depth view of the potential of deep learning and neural networks in game development. We will take a look at the foundations of multi-layer perceptrons before moving on to convolutional and recurrent networks, in applications ranging from GANs that create music or textures to self-driving cars and chatbots. Then we introduce deep reinforcement learning through the multi-armed bandit problem and other OpenAI Gym environments. As we progress through the book, we will gain insights into DRL techniques such as motivated reinforcement learning with curiosity and curriculum learning. We also take a closer look at deep reinforcement learning and, in particular, the Unity ML-Agents toolkit. By the end of the book, we will look at how to apply DRL and the ML-Agents toolkit to enhance, test, and automate your games or simulations. Finally, we will cover your possible next steps and areas for future learning.
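As a taste of the foundations the book starts from, here is a minimal sketch (not taken from the book) of the classic perceptron learning rule, training a single unit on the logical AND function before the book moves on to multi-layer networks:

```python
# Minimal perceptron trained on logical AND with the classic
# perceptron update rule: w += lr * (target - prediction) * x.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all AND inputs
y = np.array([0, 0, 0, 1])                      # AND targets

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for _ in range(20):                              # a few epochs suffice for AND
    for xi, target in zip(X, y):
        pred = 1 if xi.dot(w) + b > 0 else 0     # step activation
        err = target - pred
        w += lr * err * xi                       # perceptron update rule
        b += lr * err

preds = [1 if xi.dot(w) + b > 0 else 0 for xi in X]
print(preds)  # learns AND: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a separating weight vector; multi-layer networks with backpropagation, covered next in the book, are needed for non-separable problems such as XOR.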
Table of Contents (192 sections)
- Cover Page
- Title Page
- Copyright and Credits
- Hands-On Deep Learning for Games
- Dedication
- About Packt
- Why subscribe?
- Packt.com
- Contributors
- About the author
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Section 1: The Basics
- Deep Learning for Games
- The past, present, and future of DL
- The past
- The present
- The future
- Neural networks – the foundation
- Training a perceptron in Python
- Multilayer perceptron in TF
- TensorFlow Basics
- Training neural networks with backpropagation
- The Cost function
- Partial differentiation and the chain rule
- Building an autoencoder with Keras
- Training the model
- Examining the output
- Exercises
- Summary
- Convolutional and Recurrent Networks
- Convolutional neural networks
- Monitoring training with TensorBoard
- Understanding convolution
- Building a self-driving CNN
- Spatial convolution and pooling
- The need for Dropout
- Memory and recurrent networks
- Vanishing and exploding gradients rescued by LSTM
- Playing Rock Paper Scissors with LSTMs
- Exercises
- Summary
- GAN for Games
- Introducing GANs
- Coding a GAN in Keras
- Training a GAN
- Optimizers
- Wasserstein GAN
- Generating textures with a GAN
- Batch normalization
- Leaky and other ReLUs
- A GAN for creating music
- Training the music GAN
- Generating music via an alternative GAN
- Exercises
- Summary
- Building a Deep Learning Gaming Chatbot
- Neural conversational agents
- General conversational models
- Sequence-to-sequence learning
- Breaking down the code
- Thought vectors
- DeepPavlov
- Building the chatbot server
- Message hubs (RabbitMQ)
- Managing RabbitMQ
- Sending and receiving to/from the MQ
- Writing the message queue chatbot
- Running the chatbot in Unity
- Installing AMQP for Unity
- Exercises
- Summary
- Section 2: Deep Reinforcement Learning
- Introducing DRL
- Reinforcement learning
- The multi-armed bandit
- Contextual bandits
- RL with the OpenAI Gym
- A Q-Learning model
- Markov decision process and the Bellman equation
- Q-learning
- Q-learning and exploration
- First DRL with Deep Q-learning
- RL experiments
- Keras RL
- Exercises
- Summary
- Unity ML-Agents
- Installing ML-Agents
- Training an agent
- What's in a brain?
- Monitoring training with TensorBoard
- Running an agent
- Loading a trained brain
- Exercises
- Summary
- Agent and the Environment
- Exploring the training environment
- Training the agent visually
- Reverting to the basics
- Understanding state
- Understanding visual state
- Convolution and visual state
- To pool or not to pool
- Recurrent networks for remembering series
- Tuning recurrent hyperparameters
- Exercises
- Summary
- Understanding PPO
- Marathon RL
- The partially observable Markov decision process
- Actor-Critic and continuous action spaces
- Expanding network architecture
- Understanding TRPO and PPO
- Generalized advantage estimate
- Learning to tune PPO
- Coding changes required for control projects
- Multiple agent policy
- Exercises
- Summary
- Rewards and Reinforcement Learning
- Rewards and reward functions
- Building reward functions
- Sparsity of rewards
- Curriculum Learning
- Understanding Backplay
- Implementing Backplay through Curriculum Learning
- Curiosity Learning
- The Curiosity Intrinsic module in action
- Trying ICM on Hallway/VisualHallway
- Exercises
- Summary
- Imitation and Transfer Learning
- IL or behavioral cloning
- Online training
- Offline training
- Setting up for training
- Feeding the agent
- Transfer learning
- Transferring a brain
- Exploring TensorFlow checkpoints
- Imitation Transfer Learning
- Training multiple agents with one demonstration
- Exercises
- Summary
- Building Multi-Agent Environments
- Adversarial and cooperative self-play
- Training self-play environments
- Adversarial self-play
- Multi-brain play
- Adding individuality with intrinsic rewards
- Extrinsic rewards for individuality
- Creating uniqueness with customized reward functions
- Configuring the agents' personalities
- Exercises
- Summary
- Section 3: Building Games
- Debugging/Testing a Game with DRL
- Introducing the game
- Setting up ML-Agents
- Introducing rewards to the game
- Setting up TestingAcademy
- Scripting the TestingAgent
- Setting up the TestingAgent
- Overriding the Unity input system
- Building the TestingInput
- Adding TestingInput to the scene
- Overriding the game input
- Configuring the required brains
- Time for training
- Testing through imitation
- Configuring the agent to use IL
- Analyzing the testing process
- Sending custom analytics
- Exercises
- Summary
- Obstacle Tower Challenge and Beyond
- The Unity Obstacle Tower Challenge
- Deep Learning for your game?
- Building your game
- More foundations of learning
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think

Updated: 2021-06-24 15:48:33