
Relation between the value functions and state

The value function is an agent's estimate of how good a given state is. For instance, if a robot is near the edge of a cliff and may fall, that state is bad and must have a low value. On the other hand, if the robot/agent is near its final goal, that is a good state to be in, as the rewards it will soon receive are high, and so that state will have a higher value.

The value function, V, is updated after the agent visits a state st and receives a reward rt+1 from the environment. The simplest TD learning algorithm is called TD(0) and performs an update using the following equation, where α is the learning rate and 0 ≤ α ≤ 1:

V(st) ← V(st) + α [rt+1 + γV(st+1) − V(st)]

Note that in some reference papers or books, the preceding formula will have rt instead of rt+1. This is just a difference in convention and is not an error; rt+1 here denotes the reward received when transitioning from state st to state st+1.
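To make the TD(0) update concrete, the following is a minimal Python sketch of tabular TD(0) value estimation. The Gym-style environment interface (env.reset(), env.step(), a discrete observation_space), the uniform random behaviour policy, and the hyperparameter values are assumptions made purely for illustration, not something prescribed by the text:

```python
import numpy as np

def td0_value_estimation(env, num_episodes=500, alpha=0.1, gamma=0.99):
    # Minimal tabular TD(0) sketch; assumes a discrete, Gym-style environment.
    V = np.zeros(env.observation_space.n)      # one value estimate per state
    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            a = env.action_space.sample()      # behaviour policy: uniform random
            s_next, r, done, _ = env.step(a)   # r corresponds to rt+1 in the text
            # TD(0): move V(st) toward the bootstrapped target rt+1 + gamma * V(st+1)
            td_target = r + gamma * V[s_next] * (not done)
            V[s] += alpha * (td_target - V[s])
            s = s_next
    return V
```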

There is also another TD learning variant called TD(λ) that uses eligibility traces, e(s), which are a record of visits to a state. More formally, we perform a TD(λ) update as follows:

V(s) ← V(s) + α δt et(s), for all states s, where δt = rt+1 + γV(st+1) − V(st) is the TD error

The eligibility traces are given by the following equation:

et(s) = γλ et-1(s) + 1 if s = st; et(s) = γλ et-1(s) otherwise

Here, e(s) = 0 at t = 0. At each step the agent takes, the eligibility trace of every state decays by a factor of γλ, and the trace of the state visited at the current time step is incremented by 1. Here, 0 ≤ λ ≤ 1, and it is a parameter that decides how much of the credit from a reward is assigned to distant states. Next, we will look at the theory behind our next two RL algorithms, SARSA and Q-learning, both of which are quite popular in the RL community.
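Before moving on to those, here is a minimal Python sketch of tabular TD(λ) with accumulating eligibility traces, under the same assumptions as the TD(0) sketch above (a hypothetical Gym-style environment with a discrete state space and a uniform random behaviour policy):

```python
import numpy as np

def td_lambda_value_estimation(env, num_episodes=500, alpha=0.1, gamma=0.99, lam=0.8):
    # Minimal tabular TD(lambda) sketch with accumulating eligibility traces.
    n_states = env.observation_space.n
    V = np.zeros(n_states)
    for _ in range(num_episodes):
        e = np.zeros(n_states)                 # eligibility traces; e(s) = 0 at t = 0
        s = env.reset()
        done = False
        while not done:
            a = env.action_space.sample()
            s_next, r, done, _ = env.step(a)
            delta = r + gamma * V[s_next] * (not done) - V[s]   # TD error delta_t
            e *= gamma * lam                   # decay every trace by a factor of gamma * lambda
            e[s] += 1.0                        # increment the trace of the state just visited
            V += alpha * delta * e             # update all states in proportion to their traces
            s = s_next
    return V
```

Setting lam=0 recovers the TD(0) update, while values of λ closer to 1 spread credit further back along the trajectory.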
