
Markov Decision Process

The Markov decision process, better known as MDP, is a framework used in reinforcement learning for modeling sequential decision-making, and it is often illustrated with a gridworld environment. A gridworld environment consists of states in the form of grid cells, such as those in the FrozenLake-v0 environment from OpenAI Gym, which we examined and solved in the last chapter.

An MDP captures a world in the form of a grid by dividing it into states, actions, models/transition models, and rewards. The solution to an MDP is called a policy, and the objective is to find the optimal policy for that MDP task.

Thus, any reinforcement learning task composed of a set of states, actions, and rewards that satisfies the Markov property can be modeled as an MDP.
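To make these components concrete, the following is a minimal sketch of an MDP in code, using a hypothetical 2 x 2 gridworld (not the actual FrozenLake-v0 layout): the tuple of states, actions, transition model, and rewards, plus value iteration with the Bellman optimality update to recover the optimal policy. All names and the grid layout here are illustrative assumptions.

```python
# A hypothetical 2x2 gridworld MDP: cells 0..3, state 3 is the terminal goal.
states = [0, 1, 2, 3]
actions = ["left", "right", "up", "down"]

# Deterministic transition model for brevity: transitions[s][a] -> next state.
transitions = {
    0: {"left": 0, "right": 1, "up": 0, "down": 2},
    1: {"left": 0, "right": 1, "up": 1, "down": 3},
    2: {"left": 2, "right": 3, "up": 0, "down": 2},
    3: {},  # terminal state: no outgoing transitions
}
rewards = {3: 1.0}  # reward 1 for reaching the goal, 0 elsewhere
gamma = 0.9         # discount factor

# Value iteration: repeatedly apply the Bellman optimality update
#   V(s) = max_a [ R(s') + gamma * V(s') ]
V = {s: 0.0 for s in states}
for _ in range(100):
    for s in states:
        if not transitions[s]:
            continue  # skip terminal states
        V[s] = max(
            rewards.get(transitions[s][a], 0.0) + gamma * V[transitions[s][a]]
            for a in actions
        )

# The policy is extracted greedily from the converged value function.
policy = {
    s: max(
        actions,
        key=lambda a: rewards.get(transitions[s][a], 0.0)
        + gamma * V[transitions[s][a]],
    )
    for s in states
    if transitions[s]
}
```

Running this, the states adjacent to the goal take the action leading straight into it, and the value of each state decays by the discount factor with distance from the goal, which is exactly the behavior the Bellman equations formalize in the sections that follow.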

In this chapter, we will dig deep into MDPs, states, actions, rewards, and policies, and how to solve them using Bellman equations. Moreover, we will cover the basics of partially observable MDPs (POMDPs) and the complexity involved in solving them. We will also cover the exploration-exploitation dilemma and the famous E3 (Explicit Explore or Exploit) algorithm. Then we will come to the fascinating part, where we will program an agent to learn and play Pong using the principles of MDPs.

We will cover the following topics in this chapter:

  • Markov decision processes
  • Partially observable Markov decision processes
  • Training the FrozenLake-v0 environment using MDP