- Hands-On Q-Learning with Python
- Nazia Habib
When to choose SARSA over Q-learning
As mentioned earlier, Q-learning and SARSA are very similar algorithms, and in fact, Q-learning is sometimes called SARSA-max. When the agent's behavior policy is purely greedy (that is, it always chooses the highest-valued action in the next state), Q-learning and SARSA produce identical updates and therefore the same results.
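To see why, it helps to put the two update rules side by side. These are the standard textbook forms, written here for reference rather than quoted from this chapter:

```latex
% Q-learning (off-policy): bootstrap from the best action available in the next state
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]

% SARSA (on-policy): bootstrap from the action a_{t+1} actually selected in s_{t+1}
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \right]
```

Under a purely greedy policy, a_{t+1} is exactly argmax_a Q(s_{t+1}, a), so the two targets, and hence the two updates, coincide.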
In practice, we will not be using a purely greedy strategy and will instead choose something such as epsilon-greedy, where a small fraction of actions are chosen at random to keep the agent exploring. We will explore this in more depth when we discuss epsilon decay strategies later.
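As a minimal sketch of what epsilon-greedy action selection looks like for a tabular agent (the function name, Q-table layout, and parameter names here are illustrative, not taken from the book's code):

```python
import numpy as np

def epsilon_greedy(Q, state, epsilon, rng):
    """Pick a random action with probability epsilon, otherwise the greedy action."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))  # explore: uniform random action
    return int(np.argmax(Q[state]))           # exploit: best action under current estimates
```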
We can, therefore, think of SARSA as a more general version of Q-learning. The algorithms are very similar, and in practice, converting a Q-learning implementation into SARSA involves nothing more than changing the update method for the Q-values. As we've seen, however, the difference in performance can be profound.
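To make that point concrete, here is a minimal sketch of the two tabular updates written as interchangeable functions. The function names, shared signature, and Q-table layout are assumptions for illustration rather than the book's own code; swapping one function for the other is the entire Q-learning-to-SARSA change:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    # Off-policy target: bootstrap from the best next action, whatever the
    # behavior policy actually does (a_next is ignored here and kept only so
    # both functions share a signature).
    target = r + gamma * np.max(Q[s_next])
    return Q[s, a] + alpha * (target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    # On-policy target: bootstrap from the action the agent will actually take next.
    target = r + gamma * Q[s_next, a_next]
    return Q[s, a] + alpha * (target - Q[s, a])
```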
In many problems, SARSA will perform better than Q-learning, especially when there is a good chance that the agent will take a random, suboptimal action in the next step, as we explored in the cliff-walking example. In such cases, Q-learning's assumption that the agent is following the optimal policy may be far enough from the truth that SARSA converges faster and with fewer errors.
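If you want to reproduce that comparison yourself, the following sketch trains both agents with the helpers above on Gymnasium's CliffWalking-v0 environment as a stand-in for the cliff-walking example; the environment choice, hyperparameters, and episode count are all assumptions chosen for illustration:

```python
import gymnasium as gym
import numpy as np

# Uses epsilon_greedy, q_learning_update, and sarsa_update from the sketches above.
def run(update, episodes=500, alpha=0.1, gamma=1.0, epsilon=0.1, seed=0):
    # Train one tabular agent and report its average return over the last 100 episodes.
    env = gym.make("CliffWalking-v0")
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    returns = []
    for _ in range(episodes):
        state, _ = env.reset()
        action = epsilon_greedy(Q, state, epsilon, rng)
        done, total = False, 0.0
        while not done:
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            next_action = epsilon_greedy(Q, next_state, epsilon, rng)
            Q[state, action] = update(Q, state, action, reward,
                                      next_state, next_action, alpha, gamma)
            state, action = next_state, next_action
            total += reward
        returns.append(total)
    return np.mean(returns[-100:])

print("Q-learning:", run(q_learning_update))
print("SARSA:     ", run(sarsa_update))
```

With settings like these you would typically see SARSA earn the higher average return while exploration is still active, because it learns a route that stays away from the cliff edge that epsilon-greedy exploration occasionally stumbles off.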