- PyTorch 1.x Reinforcement Learning Cookbook
- Yuxi (Hayden) Liu
Simulating the FrozenLake environment
The optimal policies for the MDPs we have dealt with so far are pretty intuitive. However, finding the optimal policy won't be that straightforward in most cases, such as in the FrozenLake environment. In this recipe, let's play around with the FrozenLake environment and get ready for upcoming recipes where we will find its optimal policy.
FrozenLake is a typical Gym environment with a discrete state space. It is about moving an agent from the starting location to the goal location in a grid world, while avoiding traps. The grid is either four by four (https://gym.openai.com/envs/FrozenLake-v0/) or eight by eight (https://gym.openai.com/envs/FrozenLake8x8-v0/). The grid is made up of the following four types of tiles (the rendering sketch after this list shows the four-by-four layout):
- S: The starting location
- G: The goal location, which terminates an episode
- F: The frozen tile, which is a walkable location
- H: The hole location, which terminates an episode
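To see the grid, we can create the environment and render it. The following is a minimal sketch assuming the classic Gym API with the environment ID 'FrozenLake-v0' (as in the URLs above); the rendered map marks the agent's current tile:

```python
import gym

# Create the four-by-four FrozenLake environment
# (use 'FrozenLake8x8-v0' for the eight-by-eight grid)
env = gym.make('FrozenLake-v0')
env.reset()

# Print the grid; S, F, H, and G mark the tile types,
# and the agent's current tile is highlighted
env.render()
```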
There are four actions: moving left (0), moving down (1), moving right (2), and moving up (3). The reward is +1 if the agent reaches the goal location, and 0 otherwise. Accordingly, the observation space is discrete, with 16 possible integer states (one per tile of the four-by-four grid), and the action space consists of the 4 actions just listed.
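As a quick check (again a sketch under the classic Gym API assumption), we can confirm the sizes of the observation and action spaces directly:

```python
import gym

env = gym.make('FrozenLake-v0')

# Discrete(16): one integer state per tile of the four-by-four grid
print(env.observation_space)
# Discrete(4): 0 = left, 1 = down, 2 = right, 3 = up
print(env.action_space)
```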
What is tricky in this environment is that, as the ice surface is slippery, the agent won't always move in the direction it intends. For example, it may move to the left or to the right when it intends to move down.
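We can observe this stochasticity by resetting the environment and repeatedly taking the same action. Here is a minimal sketch, assuming the classic four-tuple return value of env.step used in this generation of Gym:

```python
import gym

env = gym.make('FrozenLake-v0')

# From the starting tile (state 0), intend to move down (action 1) three times.
# Because the surface is slippery, the landing state varies between runs:
# it can be 4 (down, as intended), 1 (slipped right), or 0 (slipped left,
# blocked by the grid boundary).
for _ in range(3):
    env.reset()
    new_state, reward, done, info = env.step(1)
    print(new_state)
```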