
Creating an MDP

Built on top of the Markov chain, an MDP adds an agent and a decision-making process. Let's go ahead and develop an MDP, then calculate the value function under the optimal policy.

Besides a set of possible states, S = {s0, s1, ... , sm}, an MDP is defined by a set of actions, A = {a0, a1, ... , an}; a transition model, T(s, a, s'); a reward function, R(s); and a discount factor, γ. The transition matrix, T(s, a, s'), contains the probabilities of landing in state s' after taking action a from state s. The discount factor, γ, controls the trade-off between future rewards and immediate ones.
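To make the role of the discount factor concrete, here is a minimal sketch in plain Python of how γ weights a stream of future rewards into a single discounted return; the reward sequence used here is made up purely for illustration:

def discounted_return(rewards, gamma):
    """Sum a sequence of rewards, weighting the k-th future step by gamma**k."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

rewards = [1, 1, 1, 1]                          # hypothetical rewards over four steps
print(discounted_return(rewards, gamma=0.5))    # 1.875: later rewards count much less
print(discounted_return(rewards, gamma=0.99))   # ~3.94: future rewards almost as valuable

With a small γ such as 0.5, the agent cares mostly about immediate rewards; as γ approaches 1, distant rewards matter almost as much as the next one.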

To make our MDP slightly more complicated, we extend the study-and-sleep process with one more state, s2 (play games). Let's say we have two actions, a0 (work) and a1 (slack). The 3 * 2 * 3 transition matrix T(s, a, s') is as follows:
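As a sketch of how this 3 * 2 * 3 tensor can be written down in Python: only the probabilities for taking a1 (slack) from s0 (study), described next, are given in the text; the remaining entries below are illustrative placeholders, not the book's actual values.

import numpy as np

# T[s, a, s']: probability of landing in s' after taking action a in state s.
# States: 0 = study, 1 = sleep, 2 = play games; actions: 0 = work, 1 = slack.
T = np.array([
    [[0.8, 0.1, 0.1],    # study, work  (placeholder)
     [0.1, 0.6, 0.3]],   # study, slack (from the text: 10% study, 60% sleep, 30% play)
    [[0.1, 0.8, 0.1],    # sleep, work  (placeholder)
     [0.1, 0.6, 0.3]],   # sleep, slack (placeholder)
    [[0.7, 0.1, 0.2],    # play,  work  (placeholder)
     [0.1, 0.4, 0.5]],   # play,  slack (placeholder)
])

# Every (state, action) row must be a valid probability distribution over next states.
assert np.allclose(T.sum(axis=2), 1.0)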

This means, for example, that when taking action a1 (slack) from state s0 (study), there is a 60% chance of ending up in s1 (sleep), maybe from getting tired, a 30% chance of ending up in s2 (play games), maybe from wanting to relax, and a 10% chance of keeping on studying (maybe a true workaholic). We define the reward function as [+1, 0, -1] for the three states, to compensate for the hard work. Obviously, the optimal policy in this case is to choose a0 (work) at every step (keep on studying – no pain no gain, right?). Also, we choose 0.5 as the discount factor to begin with. In the next section, we will compute the state-value function (also called the value function, the value for short, or the expected utility) under the optimal policy.
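As a quick preview of that computation, here is a minimal sketch, assuming the optimal policy deterministically picks a0 (work) in every state. It evaluates the Bellman expectation equation V = R + γ * T_π * V in closed form by matrix inversion; the transition probabilities are the same placeholder values used in the sketch above, not the book's figures.

import numpy as np

# State order: [study, sleep, play games]
R = np.array([1.0, 0.0, -1.0])   # reward for each state, as given in the text
gamma = 0.5                       # discount factor, as given in the text

# Under the always-work policy, every step takes a0, so the policy's transition
# matrix is the a0 slice T[:, 0, :] of the full tensor (placeholder values).
T_pi = np.array([
    [0.8, 0.1, 0.1],   # study -> {study, sleep, play} under work (placeholder)
    [0.1, 0.8, 0.1],   # sleep -> ...                              (placeholder)
    [0.7, 0.1, 0.2],   # play  -> ...                              (placeholder)
])

# Bellman expectation equation in matrix form: V = R + gamma * T_pi @ V,
# which can be solved directly: V = (I - gamma * T_pi)^(-1) @ R.
V = np.linalg.inv(np.eye(len(R)) - gamma * T_pi) @ R
print(V)   # state values under the always-work policy (for the placeholder T)

The closed-form solve works here because the state space is tiny; the next section walks through the same value computation in detail.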
