- TensorFlow Reinforcement Learning Quick Start Guide
- Kaushik Balakrishnan
Understanding Q-learning
Q-learning is an off-policy algorithm that was first proposed by Christopher Watkins in 1989, and is a widely used RL algorithm. Like SARSA, Q-learning maintains an estimate of the state-action value function for each state-action pair, and recursively updates it using the Bellman equation of dynamic programming as new experiences are collected. Note that it is an off-policy algorithm because its update uses the state-action value evaluated at the action that maximizes the value, rather than the action actually taken by the behavior policy. Q-learning is used for problems where the actions are discrete – for example, if we have the actions move north, move south, move east, and move west, and we must decide the optimal action in a given state, then Q-learning is applicable in such settings.
In the classical Q-learning approach, the update is given as follows, where the max is performed over actions, that is, we choose the action a corresponding to the maximum value of Q at state st+1:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$$
Here, α is the learning rate, a hyperparameter that the user can specify, and γ is the discount factor.
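To make the update rule concrete, here is a minimal tabular sketch in NumPy. The table size, the hyperparameter values, and the sample transition are illustrative placeholders, not values from the book:

```python
import numpy as np

# Illustrative placeholders: a small discrete environment with
# 16 states and 4 actions (e.g., north/south/east/west).
n_states, n_actions = 16, 4
alpha, gamma = 0.1, 0.99  # learning rate and discount factor

Q = np.zeros((n_states, n_actions))  # tabular state-action values

def q_learning_update(Q, state, action, reward, next_state):
    # Off-policy target: bootstrap with the greedy (max) action at
    # next_state, regardless of what the behavior policy does there.
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# Example: apply one update for a hypothetical transition.
q_learning_update(Q, state=3, action=2, reward=1.0, next_state=7)
```

Because the target uses max over actions rather than the action the agent actually takes next, the learned Q converges toward the greedy policy's values even while exploring; this is exactly what makes the algorithm off-policy.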
Before we code these algorithms in Python, let's look at the kinds of problems we will consider.