Basic simulations

Let's see how to simulate a basic cart pole environment:

  1. First, let's import the library:
import gym
  2. Next, let's create a simulation instance using the make function:
env = gym.make('CartPole-v0')
  3. Then, we initialize the environment using the reset method:
env.reset()
  4. Finally, we loop for some time steps, rendering the environment and taking a random action at each step:
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())

The complete code is as follows:

import gym

env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())
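
Note that step also returns useful information. As a minimal sketch (assuming the classic Gym API, in which step returns an observation, a reward, a done flag, and an info dictionary), we can unpack these values and reset the environment whenever an episode ends:

import gym

env = gym.make('CartPole-v0')
observation = env.reset()
for _ in range(1000):
    env.render()
    # Take a random action and unpack the result (classic four-value Gym API)
    observation, reward, done, info = env.step(env.action_space.sample())
    if done:
        # The pole fell or the cart moved out of bounds, so start a new episode
        observation = env.reset()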

If you run the preceding program, you will see the cart pole environment being rendered in a window.

OpenAI Gym provides many simulation environments for training, evaluating, and building our agents. We can find the available environments either on the Gym website or by running the following code, which lists them:

from gym import envs
print(envs.registry.all())
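
The registry returns a collection of environment specifications rather than plain names. As a minimal sketch (assuming the older gym registry API used above, where each entry is an EnvSpec with an id attribute), we can collect just the IDs and filter them:

from gym import envs

# Collect the IDs of all registered environments (older gym registry API)
all_ids = [spec.id for spec in envs.registry.all()]

# Print only the environments whose ID mentions CartPole
print([env_id for env_id in all_ids if 'CartPole' in env_id])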

Since Gym provides many other interesting environments, let's simulate a car racing environment, as follows:

import gym

env = gym.make('CarRacing-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())

You will see the car racing environment being rendered.
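
Before training an agent on an environment like this, it is often useful to inspect its observation and action spaces. A minimal sketch, assuming the same Gym API as above:

import gym

env = gym.make('CarRacing-v0')
env.reset()

# The observation is an image and the action is a continuous vector;
# printing the spaces shows their shapes and bounds
print(env.observation_space)
print(env.action_space)

env.close()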
