
Relation between the value functions and state

The value function is an agent's estimate of how good a given state is. For instance, if a robot is near the edge of a cliff and may fall off, that state is bad and should have a low value. On the other hand, if the robot/agent is near its final goal, that state is a good state to be in, as the rewards it will soon receive are high, and so that state will have a higher value.

The value function, V, is updated after reaching state s_t and receiving reward r_{t+1} from the environment. The simplest TD learning algorithm is called TD(0) and performs an update using the following equation, where α is the learning rate (0 ≤ α ≤ 1) and γ is the discount factor:

V(s_t) ← V(s_t) + α[r_{t+1} + γ V(s_{t+1}) − V(s_t)]

Note that in some reference papers or books, the preceding formula will have r_t instead of r_{t+1}. This is just a difference in convention and is not an error; here, r_{t+1} denotes the reward received when transitioning from state s_t to state s_{t+1}.
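
To make the update concrete, the following is a minimal sketch of tabular TD(0) in Python. The state-space size, parameter values, and the example transition are illustrative assumptions only, not taken from the text:

import numpy as np

# A minimal sketch of the TD(0) update for a small, discrete state space.
n_states = 5            # hypothetical number of states
alpha = 0.1             # learning rate, 0 <= alpha <= 1
gamma = 0.99            # discount factor

V = np.zeros(n_states)  # value estimates, V(s) = 0 for all s initially

def td0_update(V, s_t, r_tp1, s_tp1):
    """One TD(0) step: V(s_t) <- V(s_t) + alpha*(r_{t+1} + gamma*V(s_{t+1}) - V(s_t))."""
    td_error = r_tp1 + gamma * V[s_tp1] - V[s_t]
    V[s_t] += alpha * td_error
    return V

# Example: from state 0 the agent receives reward 1.0 and lands in state 1.
V = td0_update(V, s_t=0, r_tp1=1.0, s_tp1=1)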

There is also another TD learning variant, called TD(λ), that uses eligibility traces, e(s), which are a record of visits to a state. More formally, we perform the TD(λ) update as follows:

V(s) ← V(s) + α δ_t e_t(s), for all states s

where δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t) is the TD error at time step t.

The eligibility traces are given by the following equation:

e_t(s) = γλ e_{t−1}(s) + 1, if s = s_t
e_t(s) = γλ e_{t−1}(s), otherwise
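
Putting the TD(λ) update and the trace recursion together, the following is a minimal Python sketch of tabular TD(λ) with accumulating eligibility traces. Again, the state-space size, parameter values, and the example transition are illustrative assumptions only:

import numpy as np

# A minimal sketch of tabular TD(lambda) with accumulating eligibility traces.
n_states = 5
alpha, gamma, lam = 0.1, 0.99, 0.8   # lam stands in for lambda (a Python keyword)

V = np.zeros(n_states)   # value estimates
e = np.zeros(n_states)   # eligibility traces, e(s) = 0 at t = 0

def td_lambda_update(V, e, s_t, r_tp1, s_tp1):
    """One TD(lambda) step: decay all traces by gamma*lambda, add 1 to the
    visited state's trace, then move every state's value along the TD error."""
    td_error = r_tp1 + gamma * V[s_tp1] - V[s_t]   # delta_t
    e *= gamma * lam             # decay the trace of every state
    e[s_t] += 1.0                # accumulate the trace of the state just visited
    V += alpha * td_error * e    # credit the TD error to recently visited states
    return V, e

# Example: from state 0 the agent receives reward 1.0 and lands in state 1.
V, e = td_lambda_update(V, e, s_t=0, r_tp1=1.0, s_tp1=1)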

Here, e(s) = 0 at t = 0. At each step the agent takes, the eligibility trace of every state decays by a factor of γλ, and the trace of the state visited at the current time step is then incremented by 1. Here, 0 ≤ λ ≤ 1; it is a parameter that decides how much of the credit from a reward is assigned to distant states. Next, we will look at the theory behind our next two RL algorithms, SARSA and Q-learning, both of which are quite popular in the RL community.
