
SARSA and the cliff-walking problem

In Q-learning, the agent starts out in state S, performs action A, observes the highest value it could obtain by taking any action from its new state, T, and updates its estimate for the state S-action A pair based on that highest possible value. In SARSA, the agent starts in state S, takes action A and receives a reward, moves to state T, and selects action B there; it then goes back and updates the value of the S-A pair using the reward it received plus the estimated value of actually taking action B in state T.
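The two update rules can be written side by side. The following is a minimal sketch assuming a tabular Q stored as a NumPy array; the sizes and the hyperparameters alpha (learning rate) and gamma (discount factor) are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative sizes and hyperparameters (assumptions, not from the text):
n_states, n_actions = 48, 4
alpha, gamma = 0.1, 0.99             # learning rate and discount factor
Q = np.zeros((n_states, n_actions))  # tabular action-value estimates

def q_learning_update(s, a, reward, t):
    """Off-policy: bootstrap from the best action available in the new state t."""
    target = reward + gamma * np.max(Q[t])
    Q[s, a] += alpha * (target - Q[s, a])

def sarsa_update(s, a, reward, t, b):
    """On-policy: bootstrap from the action b the agent actually takes in state t."""
    target = reward + gamma * Q[t, b]
    Q[s, a] += alpha * (target - Q[s, a])
```

The only difference is the bootstrap term: Q-learning uses the maximum over all actions in T, while SARSA uses the value of the action B that the agent actually selects.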

A famous illustration of the difference in behaviour between Q-learning and SARSA is the cliff-walking example from Sutton and Barto's Reinforcement Learning: An Introduction (1998). In a small gridworld, the agent must travel from a start state to a goal state that lie on either side of a cliff running along the bottom edge of the grid.

There is a penalty of -1 for each step that the agent takes, and a penalty of -100 for falling off the cliff. The optimal path therefore runs right along the edge of the cliff to the goal: it minimizes the number of steps the agent takes and so maximizes its reward, provided the agent never falls off the cliff.
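A minimal environment sketch following that reward scheme is shown below. The 4x12 layout, the convention of sending the agent back to the start after a fall, and all names are assumptions made for illustration:

```python
# A 4x12 grid flattened into states 0..47. The start is the bottom-left corner,
# the goal the bottom-right, and the cliff lies between them along the bottom row.
ROWS, COLS = 4, 12
START, GOAL = 36, 47          # (row 3, col 0) and (row 3, col 11)
CLIFF = set(range(37, 47))    # (row 3, cols 1..10)

def step(state, action):
    """Apply one move (0=up, 1=right, 2=down, 3=left); return (next_state, reward, done)."""
    r, c = divmod(state, COLS)
    if action == 0:
        r = max(r - 1, 0)
    elif action == 1:
        c = min(c + 1, COLS - 1)
    elif action == 2:
        r = min(r + 1, ROWS - 1)
    else:
        c = max(c - 1, 0)
    nxt = r * COLS + c
    if nxt in CLIFF:
        return START, -100, False  # fell off the cliff: big penalty, back to the start
    if nxt == GOAL:
        return GOAL, -1, True      # reached the goal
    return nxt, -1, False          # an ordinary step costs -1
```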

Q-learning learns the optimal path in this example, while SARSA learns the safer path further from the edge. The consequence is that, under an epsilon-greedy or other exploration-based policy, there is a nonzero risk at every step that a Q-learning agent will fall off the cliff as a result of choosing an exploratory action.

SARSA, unlike Q-learning, looks ahead to the next action to see what the agent will actually do at the next step, and updates the Q-value of the current state-action pair accordingly. It therefore learns that the exploring agent occasionally falls off the cliff when walking along the edge, incurring a large negative reward, and lowers the Q-values of the state-action pairs near the edge to reflect that risk.

In short, Q-learning updates as if the agent will follow the best possible (greedy) policy from the next state onward, regardless of what it actually does, while SARSA takes into account the agent's actual policy: the action it really takes when it moves to the next state, rather than the best action it could be assumed to take.
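To make the distinction concrete, here is a sketch of both training loops run against the environment sketched earlier (it reuses step, START, ROWS, and COLS from that snippet; the epsilon-greedy policy, hyperparameters, and episode count are illustrative assumptions). The only difference between the two loops is which next-state value the update bootstraps from:

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.1   # illustrative hyperparameters

def epsilon_greedy(Q, state):
    """Pick a random action with probability EPS, otherwise the greedy one."""
    if rng.random() < EPS:
        return int(rng.integers(4))
    return int(np.argmax(Q[state]))

def run_q_learning(episodes=500):
    Q = np.zeros((ROWS * COLS, 4))
    for _ in range(episodes):
        s, done = START, False
        while not done:
            a = epsilon_greedy(Q, s)
            t, reward, done = step(s, a)
            # Off-policy: update towards the best action in t, whatever is done next.
            Q[s, a] += ALPHA * (reward + GAMMA * np.max(Q[t]) - Q[s, a])
            s = t
    return Q

def run_sarsa(episodes=500):
    Q = np.zeros((ROWS * COLS, 4))
    for _ in range(episodes):
        s, done = START, False
        a = epsilon_greedy(Q, s)
        while not done:
            t, reward, done = step(s, a)
            b = epsilon_greedy(Q, t)
            # On-policy: update towards the action b the agent will actually take in t.
            Q[s, a] += ALPHA * (reward + GAMMA * Q[t, b] - Q[s, a])
            s, a = t, b
    return Q
```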
