
Setting up a bandit problem

A straightforward MABP involves encountering a slot machine with n arms (alternatively, a row of n one-armed machines). We have a set amount of money to put into these machines, and we want to maximize our payout. We keep track of which machines pay out on each play and build up a probability distribution of each machine's payout.

When we start playing, we don't know which of these arms will pay out more than others; the only way we can find that out is by playing each one and observing long-term how often they pay out. What strategy should we use to decide which arms to pull, when to pull them, and when to prioritize one arm over others? 

For simplicity, let's assume that each time you pull an arm, you get a reward of either $1 or $0 (a bandit with a payout of either 1 or 0 is called a Bernoulli bandit).
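A single pull of a Bernoulli arm is just a biased coin flip: it pays $1 with some fixed win probability and $0 otherwise, and that probability is hidden from the player. As a rough sketch in Python, with a made-up win probability chosen purely for illustration:

import random

def pull_bernoulli_arm(win_probability):
    # One pull of a Bernoulli arm: reward of $1 with the arm's win
    # probability, $0 otherwise.
    return 1 if random.random() < win_probability else 0

# A hypothetical arm that pays out 40% of the time; the player never sees this number.
print(pull_bernoulli_arm(0.4))  # prints 1 or 0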

With a particular 4-armed bandit, a short trial might play out as follows. We start out in exploration mode (meaning that we have a high epsilon value), so we begin by trying each of the four arms. We get no reward from arms 1 and 3, but we do get rewards from arms 2 and 4. We pull arm 2 again, hoping for another reward, but we don't get one. We pull arm 4 again and get a reward; now it looks like arm 4 is a good arm, so we pull it again and get another reward.

After a few more pulls, by the end of the trial we have the following results:

  • Arm 1: 2 pulls, 1 win, and 50% success
  • Arm 2: 3 pulls, 1 win, and 33% success
  • Arm 3: 1 pull, 0 wins, and 0% success
  • Arm 4: 3 pulls, 3 wins, and 100% success
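The bookkeeping behind these numbers is nothing more than a running count of pulls and wins per arm, with the success rate computed as wins divided by pulls. A minimal sketch in Python, hard-coding the tallies from the trial above:

pulls = {1: 2, 2: 3, 3: 1, 4: 3}  # pulls per arm at the end of the trial
wins = {1: 1, 2: 1, 3: 0, 4: 3}   # $1 rewards per arm at the end of the trial

for arm in pulls:
    success_rate = wins[arm] / pulls[arm]
    print(f"Arm {arm}: {pulls[arm]} pulls, {wins[arm]} wins, {success_rate:.0%} success")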

Based on these results, we now have some reason to believe that arm 4 is a good arm to pull and that arm 2 is not necessarily very good. Arm 3 has an observed success rate of 0%; however, because we have only pulled it once, we should pull it more to gather information about it before making a decision.

Similarly, arm 1 has an observed success rate of 50%, but since we have only pulled it twice, we probably don't have enough information yet to decide whether it is a good arm to pull. We will need to play for more rounds before we can make useful predictions about the arms we don't yet know enough about.

As we continue to play, we keep track of our results, and eventually we build up a probability distribution of wins for each arm with enough observations behind it that we can rely on the estimates. This becomes the exploitation side of our strategy: we want to play the arms that we know are likely to win as often as possible, even while we continue to explore the other arms.
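One simple way to combine the two sides is the epsilon-greedy rule hinted at earlier: with probability epsilon we explore a random arm, and otherwise we exploit the arm with the best observed success rate. The sketch below is only illustrative; the four win probabilities are hypothetical values that the player never sees, and a real run would tune epsilon (or decay it from a high value to a low one) rather than fix it:

import random

true_probs = [0.30, 0.45, 0.10, 0.80]  # hypothetical win probabilities, hidden from the player
n_arms = len(true_probs)
pulls = [0] * n_arms  # how many times each arm has been pulled
wins = [0] * n_arms   # how many of those pulls paid out $1

def pull(arm):
    # One Bernoulli pull: $1 with the arm's true probability, otherwise $0.
    return 1 if random.random() < true_probs[arm] else 0

def choose_arm(epsilon):
    # Explore a random arm with probability epsilon; otherwise exploit the
    # arm with the best observed success rate so far.
    if random.random() < epsilon or sum(pulls) == 0:
        return random.randrange(n_arms)
    rates = [wins[a] / pulls[a] if pulls[a] else 0.0 for a in range(n_arms)]
    return rates.index(max(rates))

epsilon = 0.1  # mostly exploit, but keep a small amount of exploration
for _ in range(1000):
    arm = choose_arm(epsilon)
    reward = pull(arm)
    pulls[arm] += 1
    wins[arm] += reward

for a in range(n_arms):
    rate = wins[a] / pulls[a] if pulls[a] else 0.0
    print(f"Arm {a + 1}: {pulls[a]} pulls, {wins[a]} wins, {rate:.0%} success")

Over enough rounds, most of the pulls end up on whichever arm currently looks best, while the occasional random pull keeps refining our estimates for the arms we know less about.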
