
Value function

The second component an agent can have is the value function. As mentioned previously, it is useful for an agent to assess how good or bad its position is in a given state. In a game of chess, a player would like to know the likelihood of winning from a given board position. An agent navigating a maze would like to know how close it is to the destination. The value function serves this purpose: it predicts the expected future reward the agent would receive from a given state, and therefore measures how desirable that state is for the agent. More formally, the value function takes a state and a policy as input and returns a scalar value representing the expected cumulative reward.
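A common way to write this, assuming rewards r collected at each time step and a discount factor γ between 0 and 1 (the discount factor is an assumption here; the surrounding text simply speaks of cumulative reward), is:

```latex
V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1} \;\middle|\; s_t = s\right]
```

In words, V^π(s) is the average total (discounted) reward the agent collects when it starts in state s and follows policy π thereafter.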

Take our maze example, and suppose the agent receives a reward of -1 for every step it takes. The agent's goal is to finish the maze in the smallest number of steps possible. The value of each state can be represented as follows:

Figure 3: A maze where each square indicates the value of being in that state

Each square represents the number of steps it takes to reach the end of the maze from that state (with a reward of -1 per step, the value of a state is simply the negative of this count). As you can see, the smallest number of steps required to reach the goal is 15.
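To make this concrete, here is a minimal Python sketch that computes such values for a toy grid maze (the layout below is made up for illustration, not the maze in Figure 3). With a reward of -1 per step and no discounting, the value of a square is just the negative of the shortest number of steps from that square to the goal, which a breadth-first search starting at the goal gives us directly.

```python
from collections import deque

# A toy maze (not the one in Figure 3): '#' is a wall, 'G' is the goal,
# '.' is a free square the agent can occupy.
MAZE = [
    "..#.G",
    ".#...",
    ".#.#.",
    ".....",
]

def state_values(maze):
    """Map each reachable square (row, col) to its value: the negative of
    the minimum number of steps to the goal (reward of -1 per step,
    no discounting)."""
    rows, cols = len(maze), len(maze[0])
    goal = next((r, c) for r in range(rows) for c in range(cols)
                if maze[r][c] == "G")
    # Breadth-first search outward from the goal gives shortest distances.
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return {square: -steps for square, steps in dist.items()}

if __name__ == "__main__":
    for square, value in sorted(state_values(MAZE).items()):
        print(square, value)
```

Running the script prints each reachable square alongside its value; walls and unreachable squares simply do not appear in the returned dictionary.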

How can the value function help an agent perform a task well, beyond telling us how desirable a given state is? As we will see in the following sections, value functions play an integral role in predicting how well a sequence of actions will do even before the agent performs them. This is similar to a chess player imagining how a sequence of future moves would improve their chances of winning. To do this, the agent also needs an understanding of how the environment works. This is where the third component of an agent, the model, becomes relevant.
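As a rough sketch of that idea, one-step lookahead combines a model of the environment with the value function: for each candidate action, predict the next state, then score the action by the immediate reward plus the value of that state. The transition, reward, and value callables below are assumed interfaces used only for illustration, not part of any particular library.

```python
def greedy_action(state, actions, transition, reward, value, gamma=1.0):
    """Pick the action with the best one-step lookahead score.

    transition(state, action) -> next_state and reward(state, action) -> float
    stand in for the agent's model of the environment; value(state) -> float
    is the value function. All three are assumed interfaces for this sketch.
    """
    def score(action):
        next_state = transition(state, action)
        # Immediate reward plus the (discounted) value of where we end up.
        return reward(state, action) + gamma * value(next_state)

    return max(actions, key=score)
```

The agent never has to execute an action to evaluate it: the model supplies the imagined next state, and the value function summarizes everything that would follow from there.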
