- Hands-On Q-Learning with Python
- Nazia Habib
Value-based versus policy-based iteration
We'll be using value-based iteration for the projects in this book. The description of the Bellman equation given previously offers a very high-level understanding of how value-based iteration works. The main difference between the two approaches is that in value-based iteration the agent learns the expected reward value of each state-action pair, while in policy-based iteration the agent learns the function that maps states to actions directly.
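As a rough illustration of what "learning the value of each state-action pair" looks like in practice, here is a minimal sketch of a tabular value update. The environment, the state and action labels, and the hyperparameter values are hypothetical, not taken from the book.

```python
from collections import defaultdict

# Q maps (state, action) pairs to their learned expected reward value.
Q = defaultdict(float)

alpha = 0.1   # learning rate (hypothetical value)
gamma = 0.9   # discount factor (hypothetical value)

def update_q(state, action, reward, next_state, actions):
    """One value-based update: move Q(s, a) toward the Bellman target."""
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    Q[(state, action)] += alpha * (target - Q[(state, action)])

# Example transition: in state 0, taking action 1 gave reward 1.0 and led to state 2.
update_q(state=0, action=1, reward=1.0, next_state=2, actions=[0, 1])
```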
One simple way to describe this difference is that a value-based agent, even when it has mastered its environment, cannot hand you an explicit function that maps states to actions; its behavior is derived from its learned value estimates at decision time. A policy-based agent, on the other hand, learns that function directly and can give it to you.
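To make the contrast concrete, the sketch below (illustrative only, with hypothetical states, actions, and probabilities) shows a value-based agent producing an action by querying its Q-values on the spot, versus a policy-based agent that carries an explicit, stochastic mapping from states to action probabilities.

```python
import random

# Value-based: the policy is implicit; the agent simply picks the
# highest-valued action for the current state at decision time.
def act_value_based(Q, state, actions):
    return max(actions, key=lambda a: Q[(state, a)])

# Policy-based: the policy is an explicit object the agent has learned,
# here a table of action probabilities per state (a stochastic policy).
policy = {0: {0: 0.2, 1: 0.8},   # hypothetical learned probabilities
          2: {0: 0.5, 1: 0.5}}

def act_policy_based(policy, state):
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs)[0]
```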
Note that Q-learning and SARSA are both value-based algorithms. Because we are working with Q-learning in this book, we will not study policy-based iteration in detail here. The main things to bear in mind about policy-based iteration are that it lets us learn stochastic policies directly and that it is better suited to continuous action spaces.
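Both Q-learning and SARSA keep a table of state-action values; they differ only in the target used for the update. The side-by-side sketch below is an assumption-laden illustration of that difference, not code from the book.

```python
# Q-learning (off-policy): the target uses the best available next action,
# regardless of which action the agent will actually take next.
def q_learning_target(Q, reward, next_state, actions, gamma=0.9):
    return reward + gamma * max(Q[(next_state, a)] for a in actions)

# SARSA (on-policy): the target uses the action the agent actually
# chose in the next state under its current behavior policy.
def sarsa_target(Q, reward, next_state, next_action, gamma=0.9):
    return reward + gamma * Q[(next_state, next_action)]
```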