
The random CartPole agent

Although the environment is much more complex than our first example in The anatomy of the agent section, the code of the agent is much shorter. This is the power of reusability, abstractions, and third-party libraries!

So, here is the code (you can find it in Chapter02/02_cartpole_random.py):

import gym

if __name__ == "__main__":
    env = gym.make("CartPole-v0")
    total_reward = 0.0
    total_steps = 0
    obs = env.reset()

Here, we create the environment and initialize the step counter and the reward accumulator. On the last line, we reset the environment to obtain the first observation (which we won't use, as our agent acts randomly):

    while True:
        action = env.action_space.sample()
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        total_steps += 1
        if done:
            break

    print("Episode done in %d steps, total reward %.2f" % (total_steps, total_reward))

In this loop, we sample a random action, then ask the environment to execute it and return to us the next observation (obs), the reward, and the done flag. If the episode is over, we stop the loop and show how many steps we took and how much reward was accumulated. If you start this example, you'll see something like this (not exactly, due to the agent's randomness):

rl_book_samples/Chapter02$ python 02_cartpole_random.py
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Episode done in 12 steps, total reward 12.00
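Because a single episode is noisy, it is worth averaging over many runs before judging the agent. Here is a small sketch (not from the book; the function name and episode count are my own choices) that repeats the same random rollout loop and reports the mean episode reward, using the same old 4-tuple `env.step()` API as the listing above:

```python
import gym

def average_random_reward(env_name="CartPole-v0", episodes=100):
    # Run `episodes` random rollouts and return the mean total reward.
    env = gym.make(env_name)
    total = 0.0
    for _ in range(episodes):
        env.reset()
        done = False
        while not done:
            # Old Gym API: step() returns (obs, reward, done, info)
            _, reward, done, _ = env.step(env.action_space.sample())
            total += reward
    return total / episodes

if __name__ == "__main__":
    print("Average reward: %.2f" % average_random_reward())
```

With CartPole's reward of 1.0 per step, the average reward equals the average episode length, so this prints a number in the low twenties for a purely random policy.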

As with the interactive session, the warning is not related to our code, but to Gym's internals. On average, our random agent makes 12–15 steps before the pole falls and the episode ends. Most of the environments in Gym have a "reward boundary," which is the average reward that the agent should gain during 100 consecutive episodes to "solve" the environment. For CartPole, this boundary is 195, which means that on average, the agent must hold the stick during 195-time steps or longer. Using this perspective, our random agent's performance looks poor. However, don't be disappointed too early, because we are just at the beginning, and soon we will solve CartPole and many other much more interesting and challenging environments.
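The reward boundary is part of the environment's registration metadata, so you don't have to memorize it; you can query it through the environment's spec. A minimal sketch:

```python
import gym

env = gym.make("CartPole-v0")
# reward_threshold is the average-reward boundary registered for
# this environment version; for CartPole-v0 it is 195.0
print(env.spec.reward_threshold)
```

This prints `195.0`, matching the boundary discussed above.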
