
SARSA and the cliff-walking problem

In Q-learning, the agent starts out in state S, performs action A, receives a reward, and moves to a new state, T. It then looks up the highest estimated value of any action available from T and updates its value for the state S-action A pair based on that maximum. In SARSA, the agent starts in state S, takes action A, receives a reward, and moves to state T, but then chooses its actual next action, B, before updating: it goes back and updates the value for the S-A pair based on the reward it received plus its current estimate for the T-B pair.
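The difference is easiest to see as two update rules side by side. The following is a minimal sketch of the tabular versions, assuming Q is a NumPy array of shape (num_states, num_actions); the function names and the alpha and gamma defaults are illustrative choices, not from the original text:

```python
import numpy as np

def q_learning_update(Q, s, a, reward, t, alpha=0.1, gamma=0.99):
    # Off-policy: bootstrap from the best action available in the new
    # state t, regardless of what the agent will actually do there.
    td_target = reward + gamma * np.max(Q[t])
    Q[s, a] += alpha * (td_target - Q[s, a])

def sarsa_update(Q, s, a, reward, t, b, alpha=0.1, gamma=0.99):
    # On-policy: bootstrap from b, the action the agent has actually
    # chosen to take in state t under its current policy.
    td_target = reward + gamma * Q[t, b]
    Q[s, a] += alpha * (td_target - Q[s, a])
```

The only difference is the bootstrap term: the maximum over all actions in T for Q-learning, versus the value of the action B that was actually selected for SARSA.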

A famous illustration of the differences in performance between Q-learning and SARSA is the cliff-walking example from Sutton and Barto's Reinforcement Learning: An Introduction (1998), in which an agent must travel from a start state to a goal state in a gridworld whose shortest route runs directly along the edge of a cliff.

There is a penalty of -1 for each step the agent takes, and a penalty of -100 for falling off the cliff. The optimal path is therefore to run exactly along the edge of the cliff and reach the goal as quickly as possible: this minimizes the number of steps the agent takes, and so maximizes its reward, as long as it never falls off the cliff.
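As a concrete reference, here is a minimal sketch of the grid and reward structure as this example is usually set up (a 4 x 12 grid with the start in the bottom-left corner, the goal in the bottom-right, and the cliff along the bottom edge between them); the layout constants and helper name are assumptions for illustration:

```python
# Minimal cliff-walking layout: 4 rows x 12 columns.
# (3, 0) is the start, (3, 11) is the goal, and the cells
# (3, 1) .. (3, 10) between them form the cliff.
ROWS, COLS = 4, 12
START, GOAL = (3, 0), (3, 11)
CLIFF = {(3, c) for c in range(1, 11)}

def step_reward(next_state):
    """-100 for stepping into the cliff (which, in the standard setup,
    also sends the agent back to START), -1 for any other step."""
    return -100 if next_state in CLIFF else -1
```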

Q-learning learns the optimal path in this example, while SARSA learns the safe path. The result is that, under an epsilon-greedy or other exploration-based policy, there is a nonzero risk at every step along the edge that the Q-learning agent will choose to explore and fall off the cliff.
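That risk comes from the epsilon-greedy selection itself: with probability epsilon, the agent ignores its Q-values and picks a random action, and next to the edge one of those random actions leads straight into the cliff. A minimal sketch (the function name and the epsilon default are illustrative):

```python
import numpy as np

rng = np.random.default_rng()

def epsilon_greedy(Q, state, num_actions, epsilon=0.1):
    # With probability epsilon, explore with a uniformly random action;
    # otherwise exploit the current Q-value estimates for this state.
    if rng.random() < epsilon:
        return int(rng.integers(num_actions))
    return int(np.argmax(Q[state]))
```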

SARSA, unlike Q-learning, looks ahead to the next action to see what the agent will actually do at the next step, and updates the Q-value of its current state-action pair accordingly. It therefore learns that the agent might fall off the cliff from the states along the edge, and that this would lead to a large negative reward, so it lowers the Q-values of those state-action pairs accordingly.

The result is that Q-learning is off-policy: it assumes the agent will always follow the best possible policy from the next state onward, without regard to what its behavior policy actually does. SARSA is on-policy: it takes the agent's actual policy into account (that is, what the agent really ends up doing when it moves to the next state, exploration included, as opposed to the best possible thing it could be assumed to do).
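Putting the pieces together, the structural difference shows up as a single line in the training loop. The sketch below runs SARSA and marks where Q-learning would differ; it assumes Gymnasium's CliffWalking-v0 environment and the epsilon_greedy, sarsa_update, and q_learning_update helpers sketched above:

```python
import numpy as np
import gymnasium as gym

env = gym.make("CliffWalking-v0")
n_states = env.observation_space.n
n_actions = env.action_space.n
Q = np.zeros((n_states, n_actions))

for episode in range(500):
    s, _ = env.reset()
    a = epsilon_greedy(Q, s, n_actions)        # choose A from S
    done = False
    while not done:
        t, reward, terminated, truncated, _ = env.step(a)  # take A, observe R and T
        b = epsilon_greedy(Q, t, n_actions)    # choose B from T with the same policy
        sarsa_update(Q, s, a, reward, t, b)    # update toward R + gamma * Q[T, B]
        # Q-learning would instead call:
        #     q_learning_update(Q, s, a, reward, t)  # toward R + gamma * max Q[T, :]
        # and only then pick its next action from T.
        s, a = t, b
        done = terminated or truncated
```

Trained this way, the SARSA agent's Q-values along the edge absorb the occasional -100 from exploratory slips, so it comes to prefer the longer, safer route; the Q-learning agent's values do not, so it hugs the cliff.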
