- Hands-On Q-Learning with Python
- Nazia Habib
Value-based versus policy-based iteration
We'll be using value-based iteration for the projects in this book. The description of the Bellman equation given previously offers a very high-level picture of how value-based iteration works. The main difference between the two approaches is that in value-based iteration, the agent learns the expected reward value of each state-action pair, while in policy-based iteration, the agent learns the function that maps states to actions directly.
One simple way to describe this difference is that a value-based agent, even when it has mastered its environment, cannot explicitly produce the policy it is following: it has no standalone function that maps states to actions, only value estimates from which actions are derived. A policy-based agent, on the other hand, can give you that function directly, because the policy itself is what it learns.
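To make the distinction concrete, here is a minimal sketch of the two kinds of agent. The environment sizes, the `q_table`, and the `policy_table` are hypothetical placeholders for illustration, not code from this book's projects:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2  # arbitrary toy sizes for illustration

# Value-based agent: it learns expected reward values for each
# state-action pair. Its policy is only implicit -- actions are
# derived on the fly by acting greedily over the learned values.
q_table = np.zeros((n_states, n_actions))

def value_based_action(state):
    # No explicit state-to-action function is stored anywhere;
    # the "policy" is just an argmax over value estimates.
    return int(np.argmax(q_table[state]))

# Policy-based agent: it learns the state-to-action mapping itself,
# here represented as a table of action probabilities per state.
policy_table = np.full((n_states, n_actions), 1.0 / n_actions)

def policy_based_action(state):
    # The policy is the learned object, and because it is a
    # probability distribution, it can be stochastic.
    return int(rng.choice(n_actions, p=policy_table[state]))
```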
Note that Q-learning and SARSA are both value-based algorithms. Because we are working with Q-learning in this book, we will not study policy-based iteration in detail here. The main things to bear in mind about policy-based iteration are that it allows us to learn stochastic policies directly and that it is better suited to continuous action spaces.
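As an illustration of why both algorithms count as value-based, here is a minimal sketch of their tabular update rules; `alpha` (learning rate) and `gamma` (discount factor) are assumed hyperparameters, and `q_table` is a NumPy array as in the sketch above:

```python
import numpy as np

def q_learning_update(q_table, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Q-learning is off-policy: it bootstraps from the best available
    # next action, regardless of what the agent actually does next.
    target = r + gamma * np.max(q_table[s_next])
    q_table[s, a] += alpha * (target - q_table[s, a])

def sarsa_update(q_table, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # SARSA is on-policy: it bootstraps from the action the agent
    # actually takes in the next state. In both cases the learned
    # object is a table of values, not an explicit policy.
    target = r + gamma * q_table[s_next, a_next]
    q_table[s, a] += alpha * (target - q_table[s, a])
```

In both updates, the agent adjusts its estimate of a state-action value toward a bootstrapped target; the policy is never stored explicitly, which is exactly the property described above.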