- Hands-On Q-Learning with Python
- Nazia Habib
When to choose SARSA over Q-learning
As mentioned earlier, Q-learning and SARSA are very similar algorithms, and in fact, Q-learning is sometimes called SARSA-max. When the agent's policy is simply the greedy one (that is, it chooses the highest-valued action from the next state no matter what), Q-learning and SARSA will produce the same results.
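The relationship is easiest to see with the two update rules side by side. Here is a minimal sketch (the function names are ours, and `Q` is assumed to be a NumPy array indexed as `Q[state, action]`; the `alpha` and `gamma` defaults are illustrative):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Q-learning (SARSA-max): bootstrap from the *best* action in the next state
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # SARSA: bootstrap from the action the agent will actually take next
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])
```

When `a_next` happens to be the greedy action, `Q[s_next, a_next]` equals `np.max(Q[s_next])` and the two updates produce identical results, which is exactly the equivalence described above.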
In practice, we will not be using a simple greedy strategy and will instead choose something such as epsilon-greedy, where some of the actions are chosen at random. We will explore this in more depth when we discuss epsilon decay strategies.
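As a sketch of the epsilon-greedy idea (the function names, decay rate, and floor value here are illustrative choices, not taken from the book):

```python
import random
import numpy as np

def epsilon_greedy(Q, state, epsilon):
    # Explore with probability epsilon, otherwise exploit current estimates
    if random.random() < epsilon:
        return random.randrange(Q.shape[1])
    return int(np.argmax(Q[state]))

def decay_epsilon(epsilon, rate=0.995, floor=0.01):
    # A common schedule: shrink epsilon after each episode,
    # but keep a small floor so some exploration always remains
    return max(floor, epsilon * rate)
```

With `epsilon=0` this reduces to the pure greedy strategy, which is the special case where SARSA and Q-learning coincide.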
We can, therefore, think of SARSA as a more general version of Q-learning. The algorithms are very similar, and in practice, modifying a Q-learning implementation to SARSA involves nothing more than changing the update method for the Q-values. As we've seen, however, the difference in performance can be profound.
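To make that single-line difference concrete, here is a minimal tabular SARSA loop against a toy chain environment (the environment, hyperparameters, and names are invented for illustration). Swapping the flagged line for `np.max(Q[s_next])` turns it back into Q-learning:

```python
import random
import numpy as np

class ChainEnv:
    """Toy chain of 5 states; action 1 steps right, action 0 steps left.
    Reaching the rightmost state yields reward 1 and ends the episode."""
    n_states, n_actions = 5, 2

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = min(self.s + 1, 4) if a == 1 else max(self.s - 1, 0)
        done = self.s == 4
        return self.s, float(done), done

def train_sarsa(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = np.zeros((env.n_states, env.n_actions))

    def act(s):  # epsilon-greedy behavior policy
        if random.random() < epsilon:
            return random.randrange(env.n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        s = env.reset()
        a = act(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = act(s_next)
            # The only line that differs from Q-learning, which would
            # use np.max(Q[s_next]) here instead of Q[s_next, a_next]:
            target = r + gamma * Q[s_next, a_next] * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s, a = s_next, a_next
    return Q
```

Note that SARSA must select `a_next` before performing the update and then carry that same action into the next step; in Q-learning the action actually taken next is irrelevant to the update.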
In many problems, SARSA will perform better than Q-learning, especially when there is a good chance that the agent will take a random, suboptimal action on the next step, as we explored in the cliff-walking example. In such cases, Q-learning's assumption that the agent follows the optimal policy may be far enough from the truth that SARSA will converge faster and with fewer errors.