- Deep Reinforcement Learning Hands-On
- Maxim Lapan
The random CartPole agent
Although the environment is much more complex than our first example in The anatomy of the agent section, the code of the agent is much shorter. This is the power of reusability, abstractions, and third-party libraries!
So, here is the code (you can find it in Chapter02/02_cartpole_random.py):
import gym

if __name__ == "__main__":
    env = gym.make("CartPole-v0")
    total_reward = 0.0
    total_steps = 0
    obs = env.reset()
Here, we create the environment and initialize the counter of steps and the reward accumulator. On the last line, we reset the environment to obtain the first observation (which we'll not use, as our agent is stochastic):
    while True:
        action = env.action_space.sample()
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        total_steps += 1
        if done:
            break

    print("Episode done in %d steps, total reward %.2f" % (
        total_steps, total_reward))
In this loop, we sample a random action, then ask the environment to execute it and return to us the next observation (obs), the reward, and the done flag. If the episode is over, we stop the loop and show how many steps we have taken and how much reward has been accumulated. If you start this example, you will see something like this (not exactly, due to the agent's randomness):
rl_book_samples/Chapter02$ python 02_cartpole_random.py
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Episode done in 12 steps, total reward 12.00
As with the interactive session, the warning is not related to our code, but to Gym's internals. On average, our random agent takes 12–15 steps before the pole falls and the episode ends. Most of the environments in Gym have a "reward boundary," which is the average reward that the agent should gain over 100 consecutive episodes to "solve" the environment. For CartPole, this boundary is 195, which means that, on average, the agent must keep the pole up for 195 time steps or longer. From this perspective, our random agent's performance looks poor. However, don't be disappointed too early; we are just at the beginning, and soon we will solve CartPole and many other much more interesting and challenging environments.
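To see how far a random policy is from that 195 threshold, we can average the episode reward over many runs instead of looking at a single episode. The following is a minimal sketch of such an averaging loop. To keep it self-contained, it uses a hypothetical stand-in environment (RandomLengthEnv, not part of Gym) that mimics a random agent lasting 10–20 steps; the same average_reward loop would work unchanged with an environment created by gym.make("CartPole-v0").

```python
import random


class RandomLengthEnv:
    """Hypothetical stand-in for CartPole: +1 reward per step,
    episode ends after a random 10-20 steps (roughly what an
    untrained random agent achieves on CartPole)."""

    class action_space:
        @staticmethod
        def sample():
            return random.randint(0, 1)  # two discrete actions

    def reset(self):
        self._steps_left = random.randint(10, 20)
        return 0.0  # dummy observation

    def step(self, action):
        self._steps_left -= 1
        done = self._steps_left <= 0
        # (observation, reward, done, info), mirroring Gym's step API
        return 0.0, 1.0, done, {}


def average_reward(env, episodes=100):
    """Run full episodes with random actions and return the
    mean total reward per episode."""
    total = 0.0
    for _ in range(episodes):
        env.reset()
        done = False
        while not done:
            _, reward, done, _ = env.step(env.action_space.sample())
            total += reward
    return total / episodes


if __name__ == "__main__":
    avg = average_reward(RandomLengthEnv(), episodes=100)
    print("Average reward over 100 episodes: %.2f" % avg)
    print("CartPole-v0 'solved' threshold:    195.00")
```

Averaging over 100 episodes is exactly the measurement the "reward boundary" refers to, so this loop gives a fair baseline number to beat with the learned agents developed later.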