SARSA and the cliff-walking problem

In Q-learning, the agent starts in state S, performs action A, receives a reward, and then looks at the highest Q-value available for any action from its new state, T; it updates the value of the state S-action A pair based on this highest possible value. In SARSA, the agent starts in state S, takes action A and receives a reward, moves to state T, and then chooses its next action, B, under its current policy; it goes back and updates the value of S-A based on the Q-value of the pair T-B, that is, on what it will actually do next rather than on the best thing it could do.
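The contrast is easiest to see in the update rules themselves. The following is a minimal sketch in Python, assuming a tabular Q of shape (n_states, n_actions); the function names, alpha (learning rate), and gamma (discount factor) are illustrative, not from the original text:

import numpy as np

def q_learning_update(Q, s, a, reward, s_next, alpha=0.1, gamma=1.0):
    # Off-policy: bootstrap from the best action available in s_next
    target = reward + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def sarsa_update(Q, s, a, reward, s_next, a_next, alpha=0.1, gamma=1.0):
    # On-policy: bootstrap from the action a_next the agent actually takes
    target = reward + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])

The only difference is the bootstrap term: np.max(Q[s_next]) versus Q[s_next, a_next]. Everything that follows about the cliff flows from that one line.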

A famous illustration of the differences in performance between Q-learning and SARSA is the cliff-walking example from Sutton and Barto's Reinforcement Learning: An Introduction (1998). The agent must walk from a start state to a goal state across a gridworld whose bottom edge, between the two states, is a cliff.

There is a penalty of -1 for each step that the agent takes, and a penalty of -100 for falling off the cliff. The optimal path, therefore, runs exactly along the edge of the cliff to reach the goal as quickly as possible: this minimizes the number of steps and maximizes the total reward, as long as the agent never falls off the cliff.
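To make the setup concrete, here is a minimal sketch of the environment, assuming the standard 4 x 12 grid from Sutton and Barto, with the start at the bottom-left corner, the goal at the bottom-right, and the cells between them forming the cliff; the layout constants and the step function are illustrative, not taken from the original text:

N_ROWS, N_COLS = 4, 12
START, GOAL = (3, 0), (3, 11)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    # Apply an action; return (next_state, reward, done)
    row = min(max(state[0] + ACTIONS[action][0], 0), N_ROWS - 1)
    col = min(max(state[1] + ACTIONS[action][1], 0), N_COLS - 1)
    if row == 3 and 0 < col < 11:      # stepped into the cliff region
        return START, -100, False      # -100 penalty, sent back to start
    if (row, col) == GOAL:
        return GOAL, -1, True          # episode ends at the goal
    return (row, col), -1, False       # -1 per step otherwise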

Q-learning learns the optimal path in this example, while SARSA learns a safer path further from the edge. The result is that, under an epsilon-greedy or other exploration-based policy, there is a nonzero risk at every step along the edge that the Q-learning agent will choose a random exploratory action and fall off the cliff.
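That risk comes directly from the exploration step of the policy. A minimal epsilon-greedy sketch, with an illustrative epsilon of 0.1, where Q[state] is assumed to be a 1-D array of action values: with probability epsilon the agent acts at random, so an agent standing on the cliff edge always has some chance of stepping off.

import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(Q, state, epsilon=0.1):
    # With probability epsilon pick a random action, else the greedy one
    if rng.random() < epsilon:
        return int(rng.integers(len(Q[state])))
    return int(np.argmax(Q[state]))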

SARSA, unlike Q-learning, looks ahead to the next action to see what the agent will actually do at the next step and updates the Q-value of its current state-action pair accordingly. For this reason, it learns that the agent might fall off the cliff during exploration and that this would incur a large negative reward, so it lowers the Q-values of the state-action pairs along the edge.
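Putting the pieces together, here is a sketch of a single SARSA episode, reusing the step, START, ACTIONS, and epsilon_greedy sketches above; the loop structure and hyperparameters are illustrative. Note that next_action is chosen before the update, so the cost of exploratory moves near the cliff feeds back into the Q-values along the edge:

from collections import defaultdict
import numpy as np

Q = defaultdict(lambda: np.zeros(len(ACTIONS)))

def run_sarsa_episode(alpha=0.5, gamma=1.0, epsilon=0.1):
    state = START
    action = epsilon_greedy(Q, state, epsilon)
    done = False
    while not done:
        next_state, reward, done = step(state, action)
        next_action = epsilon_greedy(Q, next_state, epsilon)  # look ahead
        target = reward + gamma * Q[next_state][next_action] * (not done)
        Q[state][action] += alpha * (target - Q[state][action])
        state, action = next_state, next_action

A Q-learning loop would be identical except that the update would use np.max(Q[next_state]) and the next action would be chosen only after the update.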

The result is that Q-learning assumes the agent will follow the greedy, best-possible policy from the next state onward, regardless of what its behavior policy actually is (it is off-policy), while SARSA takes into account the agent's actual policy, that is, what it really ends up doing when it moves to the next state rather than the best possible thing it could be assumed to do (it is on-policy).