
Relation between the value functions and state

The value function is an agent's estimate of how good a given state is. For instance, if a robot is near the edge of a cliff and may fall, that state is bad and must have a low value. On the other hand, if the robot/agent is near its final goal, that state is a good state to be in, as the rewards it will soon receive are high, and so that state will have a higher value.

The value function, V, is updated after reaching a state s_t and receiving a reward r_{t+1} from the environment. The simplest TD learning algorithm is called TD(0) and performs an update using the following equation, where α is the learning rate and 0 ≤ α ≤ 1:

V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) − V(s_t)]

Note that in some reference papers or books, the preceding formula will have r_t instead of r_{t+1}. This is just a difference in convention and is not an error; r_{t+1} here denotes the reward received on leaving state s_t and transitioning to s_{t+1}.
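To make the TD(0) update concrete, here is a minimal Python sketch (not taken from this book): a tabular value function learned on a small symmetric random walk, chosen only because its true state values are known in closed form. The environment, states, and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def td0_update(V, s, r, s_next, alpha, gamma):
    # TD(0): V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)]
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

# Tabular TD(0) on a symmetric random walk: states 1..5 are non-terminal,
# 0 and 6 are terminal, every episode starts in state 3, and the only
# non-zero reward (+1) is for stepping off the right end into state 6.
V = np.zeros(7)                      # terminal entries stay at 0
alpha, gamma = 0.1, 1.0
rng = np.random.default_rng(0)

for episode in range(1000):
    s = 3
    while s not in (0, 6):
        s_next = s + rng.choice([-1, 1])
        r = 1.0 if s_next == 6 else 0.0
        td0_update(V, s, r, s_next, alpha, gamma)
        s = s_next

print(V[1:6])                        # tends towards the true values [1/6, 2/6, ..., 5/6]
```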

There is also another TD learning variant, called TD(λ), that uses eligibility traces, e(s), which are a record of visiting a state. More formally, we perform a TD(λ) update as follows, where δ_t is the TD error:

δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t)

V(s) ← V(s) + α δ_t e_t(s), for all states s

The eligibility traces are given by the following equation:

e_t(s) = γ λ e_{t−1}(s)        if s ≠ s_t
e_t(s) = γ λ e_{t−1}(s) + 1    if s = s_t

Here, e(s) = 0 at t = 0. For each step the agent takes, the eligibility trace decays by a factor of γλ for all states, and is incremented by 1 for the state visited in the current time step. Here, 0 ≤ λ ≤ 1, and it is a parameter that decides how much of the credit from a reward is assigned to distant states. Next, we will look at the theory behind our next two RL algorithms, SARSA and Q-learning, both of which are quite popular in the RL community.
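Before moving on, here is a minimal sketch of the TD(λ) update with accumulating eligibility traces, continuing the hypothetical random-walk example from the TD(0) sketch above; the trace-decay value λ = 0.8 and the episode count are arbitrary choices.

```python
import numpy as np

def td_lambda_episode(V, alpha, gamma, lam, rng):
    # One episode of the same random walk, updated online with TD(lambda)
    # and accumulating eligibility traces.
    e = np.zeros_like(V)                       # e(s) = 0 at t = 0
    s = 3
    while s not in (0, 6):
        s_next = s + rng.choice([-1, 1])
        r = 1.0 if s_next == 6 else 0.0
        delta = r + gamma * V[s_next] - V[s]   # TD error
        e *= gamma * lam                       # decay every trace by gamma * lambda
        e[s] += 1.0                            # increment the trace of the visited state
        V += alpha * delta * e                 # update all states in proportion to their traces
        s = s_next

V = np.zeros(7)                                # states 0 and 6 are terminal
rng = np.random.default_rng(0)
for _ in range(1000):
    td_lambda_episode(V, alpha=0.1, gamma=1.0, lam=0.8, rng=rng)
print(V[1:6])                                  # again tends towards [1/6, ..., 5/6]
```

Note that setting lam = 0 zeroes every trace except the current state's, which recovers the TD(0) update from the previous sketch.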
