TensorFlow Reinforcement Learning Quick Start Guide
Kaushik Balakrishnan
Algorithms covered in this book
In Chapter 2, Temporal Difference, SARSA, and Q-Learning, we will look into our first two RL algorithms: Q-learning and SARSA. Both of these algorithms are tabular and do not require the use of neural networks; thus, we will code them in Python and NumPy. In Chapter 3, Deep Q-Network, we will cover DQN and use TensorFlow to code the agent, as we will for the rest of the book. We will then train it to play Atari Breakout. In Chapter 4, Double DQN, Dueling Architectures, and Rainbow, we will cover double DQN, dueling network architectures, and Rainbow DQN. In Chapter 5, Deep Deterministic Policy Gradient, we will look at our first actor-critic RL algorithm, DDPG, learn about policy gradients, and apply them to a continuous-action problem. In Chapter 6, Asynchronous Methods – A3C and A2C, we will investigate A3C, another RL algorithm that uses a master process and several worker processes. In Chapter 7, Trust Region Policy Optimization and Proximal Policy Optimization, we will investigate two more RL algorithms: TRPO and PPO. Finally, in Chapter 8, Deep RL Applied to Autonomous Driving, we will apply DDPG and PPO to train an agent to drive a car autonomously. From Chapter 3, Deep Q-Network, to Chapter 8, Deep RL Applied to Autonomous Driving, we will code the agents in TensorFlow. Have fun learning RL.
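To give a flavor of the tabular methods mentioned above, here is a minimal sketch of the Q-learning update in plain Python and NumPy. The grid-world sizes, hyperparameter values, and function names are illustrative assumptions, not the book's exact code; Chapter 2 develops the full agents.

```python
import numpy as np

# Illustrative sizes and hyperparameters (assumed, e.g. a small grid-world).
n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1

Q = np.zeros((n_states, n_actions))  # the tabular Q-function

def epsilon_greedy(state):
    """Pick a random action with probability epsilon, else the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_learning_update(state, action, reward, next_state, done):
    """One off-policy Q-learning step: move Q(s, a) toward the TD target."""
    target = reward if done else reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])

def sarsa_update(state, action, reward, next_state, next_action, done):
    """On-policy SARSA step: the target uses the action actually taken next."""
    target = reward if done else reward + gamma * Q[next_state, next_action]
    Q[state, action] += alpha * (target - Q[state, action])
```

The only difference between the two updates is the bootstrapping term: Q-learning bootstraps from the greedy action (off-policy), while SARSA bootstraps from the action the behavior policy actually selects (on-policy). The deep methods from Chapter 3 onward replace the Q-table with a neural network trained in TensorFlow.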