
How it works...

In this oversimplified study-sleep-game process, the optimal policy, that is, the policy that achieves the highest total reward, is to choose action a0 in every step. However, most cases won't be that straightforward. The actions taken in individual steps also won't necessarily be the same; they usually depend on the state. So, in real-world cases, we will have to solve an MDP by finding the optimal policy.

The value function of a policy measures how good it is for an agent to be in each state, given the policy being followed. The greater the value, the better the state.

In Step 4, we calculated the value, V, of the optimal policy using matrix inversion. According to the Bellman equation, the relationship between the value at step t+1 and the value at step t can be expressed as follows:

V_{t+1} = R + γ T V_t

Here, R is the reward vector, T is the transition matrix under the policy being followed, and γ is the discount factor.

When the value converges, which means V_{t+1} = V_t = V, we can derive the value, V, as follows:

V = R + γ T V
(I - γ T) V = R
V = (I - γ T)^{-1} R

Here, I is the identity matrix, with 1s on the main diagonal.
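The closed-form solution above translates directly into a few lines of code. The following is a minimal sketch using PyTorch; the 3 x 3 transition matrix T, reward vector R, and discount factor gamma are illustrative placeholders, not the values used in the recipe:

import torch

# Placeholder inputs for illustration only
gamma = 0.5
T = torch.tensor([[0.8, 0.1, 0.1],
                  [0.1, 0.6, 0.3],
                  [0.1, 0.2, 0.7]])       # T[i, j]: probability of moving from state i to state j
R = torch.tensor([[1.0], [0.0], [-1.0]])  # immediate reward of each state

def value_by_matrix_inversion(trans_matrix, rewards, gamma):
    """Solve the Bellman equation in closed form: V = (I - gamma*T)^(-1) R."""
    n_states = rewards.shape[0]
    inverse = torch.inverse(torch.eye(n_states) - gamma * trans_matrix)
    return torch.matmul(inverse, rewards)

V = value_by_matrix_inversion(T, R, gamma)
print(V)  # one value per state under the given policy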

One advantage of solving an MDP with matrix inversion is that you always get an exact answer. The downside is its scalability: since we need to compute the inverse of an m * m matrix (where m is the number of possible states), the computation becomes costly when the number of states is large.
