- PyTorch 1.x Reinforcement Learning Cookbook
- Yuxi (Hayden) Liu
There's more...
We can also plot the total reward for every episode in the training phase:
>>> import matplotlib.pyplot as plt
>>> plt.plot(total_rewards)
>>> plt.xlabel('Episode')
>>> plt.ylabel('Reward')
>>> plt.show()
This will generate the following plot:

[Plot: total reward per episode over the training run]
If you have not installed matplotlib, you can do so via the following command:
conda install matplotlib
We can see that the reward obtained in each episode is pretty random, and that there is no trend of improvement as we go through the episodes. This is what we expected, since the weight is sampled independently at random in every episode.
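If we want to back up this observation numerically, a quick check is the correlation between the episode index and the reward, which should be close to zero for a trendless sequence. The following is a minimal sketch, assuming the total_rewards list from the training loop is still in scope:
>>> import numpy as np
>>> episodes = np.arange(1, len(total_rewards) + 1)
>>> # A coefficient near zero indicates no improvement trend
>>> corr = np.corrcoef(episodes, total_rewards)[0, 1]
>>> print('Correlation between episode index and reward: {:.3f}'.format(corr))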
In the plot of reward versus episodes, we can also see that there are some episodes where the reward reaches 200, which is the maximum obtainable reward in this environment. We can end the training phase whenever this occurs, since there is no room for further improvement. Incorporating this change, we now have the following for the training phase:
>>> n_episode = 1000
>>> best_total_reward = 0
>>> best_weight = None
>>> total_rewards = []
>>> for episode in range(n_episode):
...     weight = torch.rand(n_state, n_action)
...     total_reward = run_episode(env, weight)
...     print('Episode {}: {}'.format(episode + 1, total_reward))
...     if total_reward > best_total_reward:
...         best_weight = weight
...         best_total_reward = total_reward
...     total_rewards.append(total_reward)
...     if best_total_reward == 200:
...         break
Episode 1: 9.0
Episode 2: 8.0
Episode 3: 10.0
Episode 4: 10.0
Episode 5: 10.0
Episode 6: 9.0
Episode 7: 17.0
Episode 8: 10.0
Episode 9: 43.0
Episode 10: 10.0
Episode 11: 10.0
Episode 12: 106.0
Episode 13: 8.0
Episode 14: 32.0
Episode 15: 98.0
Episode 16: 10.0
Episode 17: 200.0
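Note that a weight that reaches 200 in a single episode is not guaranteed to score 200 every time, since each episode starts from a random initial state. As an optional sanity check, here is a minimal sketch (reusing env and run_episode from the main recipe; n_eval and eval_rewards are illustrative names) that replays the best weight and averages the reward:
>>> n_eval = 100
>>> eval_rewards = []
>>> for _ in range(n_eval):
...     # Replay the winning weight from a fresh random initial state
...     eval_rewards.append(run_episode(env, best_weight))
>>> print('Average reward over {} episodes: {}'.format(
...       n_eval, sum(eval_rewards) / n_eval))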
In this run, the policy achieving the maximal reward was found in episode 17. Again, this number may vary a lot from run to run because the weights are generated randomly in each episode. To estimate the expected number of training episodes needed, we can repeat the preceding training process 1,000 times and take the average of the episode counts:
>>> n_training = 1000
>>> n_episode_training = []
>>> for _ in range(n_training):
...     for episode in range(n_episode):
...         weight = torch.rand(n_state, n_action)
...         total_reward = run_episode(env, weight)
...         if total_reward == 200:
...             n_episode_training.append(episode + 1)
...             break
>>> print('Expectation of training episodes needed: ',
...       sum(n_episode_training) / n_training)
Expectation of training episodes needed: 13.442
On average, it takes around 13 episodes of random search to find a weight that achieves the maximum reward of 200.
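This average also tells us something about the search itself. If each search episode, with its freshly sampled weight, reaches 200 with some fixed probability p, then the number of episodes until the first success follows a geometric distribution with mean 1/p; an average of about 13.4 therefore suggests p is roughly 1/13.4, or about 7.4%. We could estimate this fraction directly with a sketch like the following (n_sample and n_success are illustrative names):
>>> n_sample = 1000
>>> n_success = 0
>>> for _ in range(n_sample):
...     # Draw a fresh random weight and test it for one episode
...     weight = torch.rand(n_state, n_action)
...     if run_episode(env, weight) == 200:
...         n_success += 1
>>> print('Fraction of winning weights: {:.1%}'.format(n_success / n_sample))
If the geometric reasoning holds, this fraction should come out close to 1/13.4.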