- Python Reinforcement Learning Projects
- Sean Saito Yang Wenzhuo Rajalingappaa Shanmugamani
Running an environment
Any gym environment can be initialized and run through the same simple interface:
- First, import the gym library:
import gym
- Next, create an environment by passing an argument to gym.make. In the following code, CartPole is used as an example:
environment = gym.make('CartPole-v0')
- Next, reset the environment:
environment.reset()
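Note that reset() also returns the environment's starting observation. The tuple check below is an assumption made to cover both the classic gym API (which returns the observation directly) and newer releases (which return an (observation, info) pair):

```python
import gym

environment = gym.make('CartPole-v0')
result = environment.reset()
# Newer gym releases return (observation, info); the classic API returns the observation alone
observation = result[0] if isinstance(result, tuple) else result
# CartPole's observation has four entries:
# [cart position, cart velocity, pole angle, pole angular velocity]
print(observation)
environment.close()
```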
- Then, start an iteration and render the environment, as follows:
for dummy in range(100):
    environment.render()
    environment.step(environment.action_space.sample())
At every step, a random action is sampled from the action space, which makes the CartPole move. Running the preceding program should produce a visualization. The scene should start as follows:

The preceding image shows the CartPole environment. The CartPole is made up of a cart that can move horizontally and a pole that is pivoted to the cart and can rotate about that pivot. After some time, you will notice that the pole falls to one side, as shown in the following image:

After a few more iterations, the pole swings back, as shown in the following image. All of the movements are constrained by the laws of physics, and the actions are chosen at random:

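The steps above can be put together into one minimal sketch of the same random-action loop. Rendering is omitted here so that the script also runs on headless machines, and the tuple handling is an assumption made to cover both the classic gym API and newer releases, which return five values from step:

```python
import gym

environment = gym.make('CartPole-v0')
environment.reset()

total_reward = 0.0
for dummy in range(100):
    # Sample a random action: for CartPole, 0 pushes the cart left, 1 pushes it right
    action = environment.action_space.sample()
    result = environment.step(action)
    # Classic gym returns (observation, reward, done, info);
    # newer releases return (observation, reward, terminated, truncated, info)
    observation, reward, done = result[0], result[1], result[2]
    total_reward += reward
    if done:
        # The pole fell or the cart left the track; start a new episode
        environment.reset()
environment.close()
```

Because CartPole yields a reward of 1.0 per step, total_reward here simply counts the steps survived across the random episodes.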
Other environments can be explored in the same way, by changing the argument passed to gym.make, for example to MsPacman-v0 or MountainCar-v0. Some environments require additional dependencies or licenses. Next, we will go through the rest of the environments.
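As a quick illustration of swapping the environment ID, the following sketch creates MountainCar-v0 and inspects its spaces; the comments describe that environment's documented layout (three discrete actions, a two-dimensional observation):

```python
import gym

environment = gym.make('MountainCar-v0')
# Every gym environment exposes its action and observation spaces
print(environment.action_space)       # Discrete(3): push left, no push, push right
print(environment.observation_space)  # 2D Box: car position and velocity
environment.close()
```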