The value function is an agent's estimate of how good a given state is. For instance, if a robot is near the edge of a cliff and may fall, that state is bad and should have a low value. On the other hand, if the robot is near its final goal, that is a good state to be in, as the rewards it will soon receive are high, and so that state will have a higher value.
The value function, V, is updated after reaching state st and receiving reward rt from the environment. The simplest TD learning algorithm is called TD(0) and performs an update using the following equation, where α is the learning rate with 0 ≤ α ≤ 1 and γ is the discount factor:

V(st) ← V(st) + α [rt+1 + γ V(st+1) − V(st)]
Note that in some reference papers or books, the preceding formula will have rt instead of rt+1. This is just a difference in convention and is not an error; rt+1 here denotes the reward received upon leaving state st and transitioning to st+1.
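As a concrete illustration, the following is a minimal sketch of tabular TD(0) prediction in Python. The environment interface used here (reset() and step(action) returning the next state, reward, and done flag, plus the n_states and n_actions attributes) and the random behavior policy are assumptions made for illustration, not something defined in this chapter:

```python
import numpy as np

def td0_prediction(env, num_episodes=500, alpha=0.1, gamma=0.99):
    """Tabular TD(0): estimate V for a random policy.

    Assumes a hypothetical discrete environment exposing reset() and
    step(action) -> (next_state, reward, done), with n_states / n_actions.
    """
    V = np.zeros(env.n_states)
    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            a = np.random.randint(env.n_actions)   # random behavior policy
            s_next, r, done = env.step(a)          # r plays the role of rt+1
            # TD(0) update: move V(st) towards the target rt+1 + gamma * V(st+1)
            td_target = r + gamma * V[s_next] * (not done)
            V[s] += alpha * (td_target - V[s])
            s = s_next
    return V
```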
There is also another TD learning variant called TD(λ) that uses eligibility traces e(s), which are a record of visiting a state. More formally, we perform a TD(λ) update as follows, where δt = rt+1 + γ V(st+1) − V(st) is the TD error:

V(s) ← V(s) + α δt e(s), for all states s
The eligibility traces are given by the following equation:

et(s) = γλ et−1(s) + 1 if s = st, and et(s) = γλ et−1(s) otherwise
Here, e(s) = 0 at t = 0. At each step the agent takes, the eligibility trace of every state decays by a factor of γλ, and the trace of the state visited at the current time step is incremented by 1. Here, 0 ≤ λ ≤ 1 is a parameter that decides how much of the credit from a reward is assigned to states visited further in the past, as shown in the sketch below.
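The following is a minimal sketch of tabular TD(λ) prediction with accumulating eligibility traces, using the same hypothetical environment interface and random policy assumed in the TD(0) sketch above:

```python
import numpy as np

def td_lambda_prediction(env, num_episodes=500, alpha=0.1, gamma=0.99, lam=0.8):
    """Tabular TD(lambda) with accumulating eligibility traces.

    Assumes the same hypothetical discrete environment interface as the
    TD(0) sketch: reset(), step(action) -> (next_state, reward, done),
    and the attributes n_states / n_actions.
    """
    V = np.zeros(env.n_states)
    for _ in range(num_episodes):
        e = np.zeros(env.n_states)                 # e(s) = 0 at t = 0
        s = env.reset()
        done = False
        while not done:
            a = np.random.randint(env.n_actions)   # random behavior policy
            s_next, r, done = env.step(a)
            # TD error for the current transition
            delta = r + gamma * V[s_next] * (not done) - V[s]
            e *= gamma * lam                       # decay all traces by gamma * lambda
            e[s] += 1.0                            # increment the trace of the visited state
            V += alpha * delta * e                 # update every state in proportion to its trace
            s = s_next
    return V
```

Next, we will look at the theory behind our next two RL algorithms, SARSA and Q-learning, both of which are quite popular in the RL community.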