
Deriving the Bellman equation for value and Q functions

Now let us see how to derive Bellman equations for value and Q functions.

You can skip this section if you are not interested in mathematics; however, the math will be super intriguing.

First, we define $P_{ss'}^{a}$ as the transition probability of moving from state $s$ to state $s'$ while performing an action $a$:

$$P_{ss'}^{a} = \Pr\left(s_{t+1} = s' \mid s_t = s, a_t = a\right)$$

We define $R_{ss'}^{a}$ as the expected reward received on moving from state $s$ to state $s'$ while performing an action $a$:

$$R_{ss'}^{a} = \mathbb{E}\left(r_{t+1} \mid s_t = s, s_{t+1} = s', a_t = a\right) \quad \text{from (2)} \quad \text{---(5)}$$
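To make the notation concrete, here is a minimal sketch (not code from this book) of how $P_{ss'}^{a}$ and $R_{ss'}^{a}$ can be stored as arrays for a small hypothetical MDP; the two states, two actions, and all numbers below are invented purely for illustration:

```python
import numpy as np

# Hypothetical toy MDP: 2 states, 2 actions (all numbers invented for illustration).
n_states, n_actions = 2, 2

# P[s, a, s'] = probability of moving from state s to s' when performing action a.
P = np.zeros((n_states, n_actions, n_states))
P[0, 0] = [0.9, 0.1]   # in state 0, action 0 mostly keeps us in state 0
P[0, 1] = [0.2, 0.8]   # action 1 usually moves us to state 1
P[1, 0] = [0.6, 0.4]
P[1, 1] = [0.1, 0.9]

# R[s, a, s'] = expected reward received on moving from state s to s' under action a.
R = np.zeros((n_states, n_actions, n_states))
R[0, 1, 1] = 1.0       # reaching state 1 from state 0 via action 1 pays 1
R[1, 1, 1] = 2.0       # staying in state 1 via action 1 pays 2
```

Note that each row P[s, a] sums to 1, as a transition distribution must.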

We know that the value function can be represented as:

$$V^{\pi}(s) = \mathbb{E}_{\pi}\left[R_t \mid s_t = s\right] \quad \text{from (1)}$$

We can rewrite our value function by taking the first reward out:

$$V^{\pi}(s) = \mathbb{E}_{\pi}\left[r_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} \,\Big|\, s_t = s\right] \quad \text{---(6)}$$
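This step uses nothing but the definition of the return from equation (2), with the first reward pulled out of the discounted sum:

$$R_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} = r_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} = r_{t+1} + \gamma R_{t+1}$$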

The expectation in the value function specifies the expected return if we are in the state s, choosing our actions according to the policy π.

So, we can rewrite our expectation explicitly by summing over all possible actions and successor states as follows:

$$\mathbb{E}_{\pi}\left[r_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} \,\Big|\, s_t = s\right] = \sum_{a} \pi(s,a) \sum_{s'} P_{ss'}^{a} \left[\, \mathbb{E}\left(r_{t+1} \mid s_t = s, s_{t+1} = s', a_t = a\right) + \gamma\, \mathbb{E}_{\pi}\left( \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} \,\Big|\, s_{t+1} = s' \right) \right]$$

In the RHS, we will substitute $R_{ss'}^{a}$ from equation (5) for the first expectation, as follows:

$$\mathbb{E}_{\pi}\left[r_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} \,\Big|\, s_t = s\right] = \sum_{a} \pi(s,a) \sum_{s'} P_{ss'}^{a} \left[ R_{ss'}^{a} + \gamma\, \mathbb{E}_{\pi}\left( \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} \,\Big|\, s_{t+1} = s' \right) \right]$$

Similarly, the discounted sum inside the second expectation is, by equation (2), just the return $R_{t+1}$ starting from the next state, so we can substitute it as follows:

$$\mathbb{E}_{\pi}\left( \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} \,\Big|\, s_{t+1} = s' \right) = \mathbb{E}_{\pi}\left( R_{t+1} \mid s_{t+1} = s' \right)$$

So, our final expectation equation becomes:

$$\mathbb{E}_{\pi}\left[r_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+2} \,\Big|\, s_t = s\right] = \sum_{a} \pi(s,a) \sum_{s'} P_{ss'}^{a} \left[ R_{ss'}^{a} + \gamma\, \mathbb{E}_{\pi}\left( R_{t+1} \mid s_{t+1} = s' \right) \right] \quad \text{---(7)}$$

Now we will substitute our expectation (7) in the value function (6) as follows:

$$V^{\pi}(s) = \sum_{a} \pi(s,a) \sum_{s'} P_{ss'}^{a} \left[ R_{ss'}^{a} + \gamma\, \mathbb{E}_{\pi}\left( R_{t+1} \mid s_{t+1} = s' \right) \right]$$

Instead of $\mathbb{E}_{\pi}\left( R_{t+1} \mid s_{t+1} = s' \right)$, we can substitute $V^{\pi}(s')$ from equation (1), and our final value function looks like the following:

$$V^{\pi}(s) = \sum_{a} \pi(s,a) \sum_{s'} P_{ss'}^{a} \left[ R_{ss'}^{a} + \gamma V^{\pi}(s') \right]$$

This is the Bellman equation for the value function.
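To see the equation at work, here is a minimal sketch of iterative policy evaluation (again, not code from this book): it repeatedly applies the Bellman backup above until $V^{\pi}$ stops changing. It reuses the toy P and R arrays from the earlier sketch, and the uniformly random policy pi below is likewise invented for illustration:

```python
import numpy as np

def policy_evaluation(P, R, pi, gamma=0.9, tol=1e-8):
    """Iterate V(s) = sum_a pi(s,a) sum_s' P[s,a,s'] * (R[s,a,s'] + gamma * V(s'))."""
    n_states = P.shape[0]
    V = np.zeros(n_states)
    while True:
        # One Bellman backup for every state at once: expected immediate
        # reward plus the discounted value of the successor state.
        V_new = np.einsum('sa,sap,sap->s', pi, P, R + gamma * V[None, None, :])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# A uniformly random policy over the two actions (purely illustrative).
pi = np.full((2, 2), 0.5)
V = policy_evaluation(P, R, pi)
print(V)   # the fixed point of the Bellman equation for this policy
```

Because the backup is a γ-contraction, the iteration converges to the unique $V^{\pi}$ satisfying the equation above, regardless of the starting point.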

In a very similar fashion, we can derive a Bellman equation for the Q function; the final equation is as follows:

$$Q^{\pi}(s,a) = \sum_{s'} P_{ss'}^{a} \left[ R_{ss'}^{a} + \gamma \sum_{a'} \pi(s',a')\, Q^{\pi}(s',a') \right]$$
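Continuing the toy example, the identity $V^{\pi}(s) = \sum_{a} \pi(s,a)\, Q^{\pi}(s,a)$ connects the two Bellman equations, which gives a quick numerical sanity check. A short sketch, assuming the P, R, pi, and V objects defined in the previous snippets:

```python
# Q(s,a) = sum_s' P[s,a,s'] * (R[s,a,s'] + gamma * V(s')), using the converged V.
gamma = 0.9
Q = np.einsum('sap,sap->sa', P, R + gamma * V[None, None, :])

# The value of a state is the policy-weighted average of its Q values.
assert np.allclose(V, np.sum(pi * Q, axis=1))
```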

Now that we have Bellman equations for both the value and Q functions, we will see how to find the optimal policies.
