The Reinforcement Learning Workshop
Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak
Introduction
In the previous chapter, we studied the main elements of Reinforcement Learning (RL). We described an agent as an entity that perceives the environment's state and acts on the environment, modifying its state, in order to achieve a goal. An agent acts according to a policy, which represents its behavior: the policy determines which action the agent selects given the current environment state. In the second half of the previous chapter, we introduced Gym and Baselines, two Python libraries that simplify environment representation and algorithm implementation, respectively.
We mentioned that RL frames problems as Markov Decision Processes (MDPs), but without going into detail or giving a formal definition.
In this chapter, we will formally describe what an MDP is, its properties, and its characteristics. When facing a new problem in RL, we have to ensure that the problem can be formalized as an MDP; otherwise, applying RL techniques is impossible.
Before presenting a formal definition of MDPs, we need to understand Markov Chains (MCs) and Markov Reward Processes (MRPs). MCs and MRPs are specific, simplified cases of MDPs. An MC only models state transitions, without rewards or actions. Consider the game of snakes and ladders: the next position depends only on the player's current square and the number shown on the dice, not on how the player got there. An MRP adds a reward component to the state transitions. MCs and MRPs are useful for building up to the characteristics of MDPs gradually. We will be looking at specific examples of MCs and MRPs later in the chapter.
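To make the idea concrete before the formal treatment, here is a minimal sketch of sampling from a Markov chain with NumPy. The two states and the transition matrix are invented for illustration and are not an example from this chapter:

```python
import numpy as np

# A toy two-state Markov chain (hypothetical example, not the chapter's student MDP).
# P[i, j] is the probability of moving from state i to state j; each row sums to 1.
states = ["Sunny", "Rainy"]
P = np.array([
    [0.8, 0.2],  # from Sunny: stay Sunny with 0.8, switch to Rainy with 0.2
    [0.4, 0.6],  # from Rainy: switch to Sunny with 0.4, stay Rainy with 0.6
])

rng = np.random.default_rng(seed=0)

def sample_trajectory(start: int, n_steps: int) -> list:
    """Sample a trajectory: the next state depends only on the current state (the Markov property)."""
    trajectory = [start]
    state = start
    for _ in range(n_steps):
        state = int(rng.choice(len(states), p=P[state]))
        trajectory.append(state)
    return trajectory

print([states[s] for s in sample_trajectory(start=0, n_steps=5)])
```

Note that the sampling step uses only the current state to pick the next one; no history is consulted, which is exactly the property that MCs, MRPs, and MDPs share.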
Along with MDPs, this chapter also presents the concepts of the state-value function and the action-value function, which measure how good it is for an agent to be in a given state and how good it is to take a given action in that state. State-value functions and action-value functions are the building blocks of the algorithms used to solve real-world problems. Both functions depend closely on the agent's policy and on the environment dynamics, as we will learn later in this chapter.
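As a preview of the notation (the chapter introduces these quantities formally, and its exact symbols may differ slightly), both functions are expected discounted returns under a policy $\pi$ with discount factor $\gamma$:

$$
v_\pi(s) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \;\middle|\; S_t = s\right],
\qquad
q_\pi(s, a) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \;\middle|\; S_t = s,\; A_t = a\right]
$$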
The final part of this chapter presents two Bellman equations, namely the Bellman expectation equation and the Bellman optimality equation. In RL, these equations are used to evaluate an agent's behavior and to find a policy that maximizes the agent's performance in an MDP.
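As a preview, using standard notation for the transition dynamics $p(s', r \mid s, a)$ (the chapter derives these step by step, including the action-value versions), the two equations for state values read:

$$
v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s',\, r} p(s', r \mid s, a)\,\bigl[r + \gamma\, v_\pi(s')\bigr],
\qquad
v_{*}(s) = \max_{a} \sum_{s',\, r} p(s', r \mid s, a)\,\bigl[r + \gamma\, v_{*}(s')\bigr]
$$

The first averages over the actions prescribed by the policy; the second replaces that average with a maximization, which is what characterizes an optimal policy.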
In this chapter, we will practice with some MDP examples, such as the student MDP and Gridworld. We will implement the solution methods and equations explained in this chapter using Python, SciPy, and NumPy.