
Markov Decision Process

The Markov decision process, better known as MDP, is a framework in reinforcement learning for making decisions in an environment such as a gridworld. A gridworld environment consists of states laid out as a grid, such as the FrozenLake-v0 environment from OpenAI Gym, which we examined and solved in the last chapter.
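To make the gridworld idea concrete, here is a minimal pure-Python sketch of a 4x4 grid in the spirit of the FrozenLake-v0 map. The layout string and helper names below are illustrative assumptions, not part of Gym's API:

```python
# A hypothetical 4x4 gridworld layout, mirroring the FrozenLake idea:
# S = start, F = frozen (safe), H = hole (terminal), G = goal (terminal).
LAYOUT = [
    "SFFF",
    "FHFH",
    "FFFH",
    "HFFG",
]

def state_id(row, col, n_cols=4):
    """Flatten a (row, col) grid cell into a single state index, 0..15."""
    return row * n_cols + col

# Enumerate the grid cells and mark which states are terminal.
terminal = {state_id(r, c)
            for r, row in enumerate(LAYOUT)
            for c, ch in enumerate(row)
            if ch in "HG"}

print(sorted(terminal))  # the hole and goal states
```

Each of the 16 cells becomes one state of the MDP; the agent's position is the state, and moving between cells is the action.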

An MDP captures such a world by decomposing it into states, actions, a transition model, and rewards. The solution to an MDP is called a policy, and the objective is to find the optimal policy for that MDP task.
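These four components, and the policy that solves them, can be sketched in a few lines. The tiny two-state chain below is made up purely for illustration (it is not the FrozenLake model), and the value-iteration loop previews the Bellman-style update covered later in the chapter:

```python
# A hedged sketch of an MDP as (states, actions, transition model, rewards).
# P[s][a] is a list of (probability, next_state, reward) triples.
GAMMA = 0.9  # discount factor

P = {
    0: {0: [(1.0, 0, 0.0)],                   # action 0: stay in state 0
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},   # action 1: try to reach state 1
    1: {0: [(1.0, 1, 0.0)],                   # state 1 is absorbing
        1: [(1.0, 1, 0.0)]},
}

# Value iteration: repeatedly apply the one-step lookahead until it settles.
V = {s: 0.0 for s in P}
for _ in range(100):
    V = {s: max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

# The policy (the "solution") picks the action maximizing expected return.
policy = {s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
print(policy)  # in state 0, action 1 (moving toward the reward) is optimal
```

The point of the sketch is the shape of the problem: once states, actions, transitions, and rewards are written down, the policy falls out of the model rather than being hand-coded.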

Thus, any reinforcement learning task composed of a set of states, actions, and rewards that satisfies the Markov property can be modeled as an MDP.
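The Markov property this definition relies on can be stated formally: the next state depends only on the current state and action, not on the full history that led there:

```latex
P(s_{t+1} \mid s_t, a_t) = P(s_{t+1} \mid s_0, a_0, s_1, a_1, \ldots, s_t, a_t)
```

In the gridworld, for example, where the agent can move next depends only on which cell it currently occupies, not on the path it took to get there.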

In this chapter, we will dig deep into MDPs, states, actions, rewards, policies, and how to solve them using Bellman equations. Moreover, we will cover the basics of partially observable MDPs (POMDPs) and the complexity of solving them. We will also cover the exploration-exploitation dilemma and the famous E3 (explicit explore or exploit) algorithm. Then we will come to the fascinating part, where we will program an agent to learn and play pong using the principles of MDP.

We will cover the following topics in this chapter:

  • Markov decision processes
  • Partially observable Markov decision processes
  • Training the FrozenLake-v0 environment using MDP