Here, we convert the state array to a float tensor because we need to compute the matrix multiplication of the state and the weight tensor, torch.matmul(state, weight), for the linear mapping. The action with the higher value is then selected using the torch.argmax() operation. Remember to extract the value of the resulting action tensor with .item(), since it is a one-element tensor.
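In case it helps to see those pieces together, here is a minimal sketch of an episode-running helper built around that linear mapping. The function name run_episode and the use of the classic Gym API (where reset() returns only the state and step() returns four values) are assumptions for illustration:
>>> import gym
>>> import torch
>>> env = gym.make('CartPole-v0')
>>> n_state = env.observation_space.shape[0]  # 4 state dimensions
>>> n_action = env.action_space.n  # 2 possible actions
>>> def run_episode(env, weight):
...     state = env.reset()
...     total_reward = 0
...     is_done = False
...     while not is_done:
...         # Convert the state array to a float tensor for matmul
...         state = torch.from_numpy(state).float()
...         # Linear mapping: score each action, then pick the best one
...         action = torch.argmax(torch.matmul(state, weight))
...         # .item() extracts the integer action from the one-element tensor
...         state, reward, is_done, _ = env.step(action.item())
...         total_reward += reward
...     return total_reward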
Specify the number of episodes:
>>> n_episode = 1000
We need to keep track of the best total reward on the fly, as well as the corresponding weight. So, we specify their starting values:
>>> best_total_reward = 0
>>> best_weight = None
We will also record the total reward for every episode:
>>> total_rewards = []
Now, we can run n_episode random search episodes. For each episode, we do the following (see the sketch after this list):
Randomly pick the weight
Let the agent take actions according to the linear mapping
An episode terminates and returns the total reward
Update the best total reward and the best weight if necessary
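A sketch of that random search loop might look as follows, assuming the run_episode helper and the n_state and n_action dimensions introduced above:
>>> for episode in range(n_episode):
...     # Randomly pick a weight matrix mapping states to action scores
...     weight = torch.rand(n_state, n_action)
...     # Run one full episode with this weight and collect its total reward
...     total_reward = run_episode(env, weight)
...     # Keep the weight that has produced the best total reward so far
...     if total_reward > best_total_reward:
...         best_weight = weight
...         best_total_reward = total_reward
...     # Record the total reward for this episode
...     total_rewards.append(total_reward)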
We have obtained the best policy through 1,000 random searches. The best policy is parameterized by best_weight.
Before we test out the best policy in the testing episodes, we can calculate the average total reward achieved by random linear mapping:
>>> print('Average total reward over {} episodes: {}'.format(
...     n_episode, sum(total_rewards) / n_episode))
Average total reward over 1000 episodes: 47.197
This is more than twice what we got from the random action policy (22.25).
Now, let's see how the learned policy performs on 100 new episodes:
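One way to run that evaluation is sketched below; the variable names n_episode_eval and total_rewards_eval are illustrative, and we simply reuse run_episode with best_weight:
>>> n_episode_eval = 100
>>> total_rewards_eval = []
>>> for episode in range(n_episode_eval):
...     # Reuse the best weight found by random search for every test episode
...     total_reward = run_episode(env, best_weight)
...     total_rewards_eval.append(total_reward)
>>> print('Average total reward over {} episodes: {}'.format(
...     n_episode_eval, sum(total_rewards_eval) / n_episode_eval))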
Surprisingly, the average reward for the testing episodes is close to the maximum of 200 steps with the learned policy. Be aware that this value may vary a lot; it could be anywhere from 160 to 200.