- PyTorch 1.x Reinforcement Learning Cookbook
- Yuxi (Hayden) Liu
Simulating the FrozenLake environment
The optimal policies for the MDPs we have dealt with so far are fairly intuitive. However, finding the optimal policy won't be that straightforward in most cases, such as in the FrozenLake environment. In this recipe, let's play around with the FrozenLake environment and get ready for the upcoming recipes, where we will find its optimal policy.
FrozenLake is a typical Gym environment with a discrete state space. It is about moving an agent from the starting location to the goal location in a grid world, and at the same time avoiding traps. The grid is either four by four (https://gym.openai.com/envs/FrozenLake-v0/) or eight by eight (https://gym.openai.com/envs/FrozenLake8x8-v0/). The grid is made up of the following four types of tiles:
- S: The starting location
- G: The goal location, which terminates an episode
- F: The frozen tile, which is a walkable location
- H: The hole location, which terminates an episode
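As a minimal sketch of getting started (assuming the classic gym package and the FrozenLake-v0 environment referenced above), you can create the environment and render the grid to see these tiles:

```python
import gym

# Create the 4 x 4 FrozenLake environment
env = gym.make('FrozenLake-v0')

# Move to the starting state and print the grid as text;
# S, G, F, and H mark the four tile types listed above
env.reset()
env.render()
```

Rendering prints the grid with the agent's current tile highlighted, which is handy for following an episode step by step.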
There are four actions: moving left (0), moving down (1), moving right (2), and moving up (3). The reward is +1 if the agent successfully reaches the goal location, and 0 otherwise. Also, for the four by four grid, the observation space is discrete with 16 possible states, one integer per tile, and the action space consists of the 4 actions just listed.
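As a quick check (a sketch assuming the same FrozenLake-v0 environment as above), these sizes can be read directly off the environment's space objects:

```python
import gym

env = gym.make('FrozenLake-v0')

# Discrete(16): one integer state per tile in the 4 x 4 grid
print(env.observation_space)
# Discrete(4): left (0), down (1), right (2), up (3)
print(env.action_space)
```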
What is tricky in this environment is that, as the ice surface is slippery, the agent won't always move in the direction it intends. For example, it may move to the left or to the right when it intends to move down.
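To see the slipperiness in action, here is a small sketch (again assuming FrozenLake-v0; note that env.env.P, which exposes the transition model of the classic toy-text environments, is an implementation detail rather than a public API): taking the same action from the same state repeatedly lands in different successor states.

```python
import gym

env = gym.make('FrozenLake-v0')

# Intend to move down (action 1) from the start state several times;
# because the ice is slippery, the successor state varies from run to run
for episode in range(5):
    state = env.reset()
    new_state, reward, done, info = env.step(1)
    print('Intended down from state {}, ended up in state {}'.format(state, new_state))

# Peek at the transition model for action 1 in state 0: each entry is
# (probability, next_state, reward, done). With the default slippery
# setting, the intended move and its two perpendicular neighbors each
# occur with probability 1/3.
print(env.env.P[0][1])
```

This stochasticity is exactly why the optimal policy is not obvious here, and why the upcoming recipes will have to compute it rather than read it off the grid.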