
Setting up a bandit problem

A straightforward MABP involves a slot machine with n arms (or, equivalently, a row of n one-armed machines). We have a fixed amount of money to put into these machines and we want to maximize our payout. Each time we play, we record which machines pay out, and we maintain a probability distribution of each machine's payout.

When we start playing, we don't know which of these arms will pay out more than the others; the only way to find out is to play each one and observe, over the long term, how often it pays out. What strategy should we use to decide which arms to pull, when to pull them, and when to prioritize one arm over the others?

For simplicity, let's assume that each time we pull an arm, we get a reward of either $1 or $0 (a bandit whose payout is either 1 or 0 is called a Bernoulli bandit). With a particular 4-armed bandit, we might get results like the following.
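As a minimal sketch of how such a trial can be simulated, the following Python snippet plays a 4-armed Bernoulli bandit with a simple epsilon-greedy rule. The payout probabilities, the epsilon value, and the number of rounds are illustrative assumptions, not values taken from the walkthrough below.

import random

# Hypothetical payout probabilities, unknown to the player.
# These values are illustrative assumptions, not from the text.
TRUE_PROBS = [0.3, 0.4, 0.2, 0.8]

def pull(arm):
    """Pull one arm of the Bernoulli bandit: the reward is $1 or $0."""
    return 1 if random.random() < TRUE_PROBS[arm] else 0

def run_trial(rounds=9, epsilon=0.5, seed=42):
    """Run a short epsilon-greedy trial and print a pull-by-pull log."""
    random.seed(seed)
    pulls = [0, 0, 0, 0]   # how many times each arm has been pulled
    wins = [0, 0, 0, 0]    # how many $1 rewards each arm has returned
    for t in range(rounds):
        if random.random() < epsilon or sum(pulls) == 0:
            arm = random.randrange(4)          # explore: pick a random arm
        else:
            rates = [wins[a] / pulls[a] if pulls[a] else 0.0 for a in range(4)]
            arm = rates.index(max(rates))      # exploit: pick the best-looking arm
        reward = pull(arm)
        pulls[arm] += 1
        wins[arm] += reward
        print(f"round {t + 1}: pulled arm {arm + 1}, reward = ${reward}")
    return pulls, wins

if __name__ == "__main__":
    run_trial()

Each round, the player either explores (pulls a random arm) or exploits (pulls the arm with the best observed success rate so far), which is exactly the trade-off the rest of this section walks through.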

In the preceding simulation, we start out in exploration mode (meaning that we have a high epsilon value), so we try each of the four arms once. We get no reward from arms 1 and 3, but we do get rewards from 2 and 4. We pull 2 again hoping for another reward, but we don't get one. We pull 4 again and get a reward, and now it looks like 4 is a good arm, so we pull it again and get another reward.

By the end of the trial, we have the following results:

  • Arm 1: 2 pulls, 1 win, and 50% success
  • Arm 2: 3 pulls, 1 win, and 33% success
  • Arm 3: 1 pull, 0 wins, and 0% success
  • Arm 4: 3 pulls, 3 wins, and 100% success
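
The success rates above are simply wins divided by pulls; here is a quick sketch of that bookkeeping, using the tallies from the trial:

# Tallies from the trial above, as (pulls, wins) per arm.
tallies = {1: (2, 1), 2: (3, 1), 3: (1, 0), 4: (3, 3)}

for arm, (pulls, wins) in tallies.items():
    print(f"Arm {arm}: {pulls} pulls, {wins} wins, and {wins / pulls:.0%} success")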

Based on these results, we now have some reason to believe that 4 is a good arm to pull and that 2 is not necessarily very good. Arm 3 has an observed success rate of 0%; however, because we have only pulled it once, we should pull it more to gather information before writing it off.

Similarly, arm 1 has an observed success rate of 50%, but since we have only pulled it twice, we probably don't have enough information yet to decide whether it is a good arm to pull. We will need to run the game for more rounds to make useful predictions about the arms we don't yet know enough about.

As we continue to play, we keep tracking our results, and eventually we accumulate enough observations to build a reliable estimate of each arm's win probability. This feeds the exploitation side of our strategy: we want to play the arms we believe are most likely to win as often as possible, even while we continue to explore the other arms.
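As a sketch of what that looks like over a longer run, the snippet below keeps the same bookkeeping for many rounds with a small, fixed epsilon, so most pulls exploit the best-looking arm while a fraction keep exploring. The payout probabilities are the same illustrative assumptions as in the earlier sketch; untried arms are given an optimistic initial estimate of 1.0 so that each arm gets tried at least once.

import random

# Same illustrative payout probabilities as before (assumed, not from the text).
TRUE_PROBS = [0.3, 0.4, 0.2, 0.8]

def epsilon_greedy(rounds=1000, epsilon=0.1, seed=0):
    """Long epsilon-greedy run: mostly exploit the best-looking arm, explore occasionally."""
    random.seed(seed)
    pulls = [0] * 4
    wins = [0] * 4
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.randrange(4)                  # explore
        else:
            rates = [wins[a] / pulls[a] if pulls[a] else 1.0 for a in range(4)]
            arm = rates.index(max(rates))              # exploit (untried arms default to 1.0)
        reward = 1 if random.random() < TRUE_PROBS[arm] else 0
        pulls[arm] += 1
        wins[arm] += reward
    for a in range(4):
        estimate = wins[a] / pulls[a] if pulls[a] else float("nan")
        print(f"Arm {a + 1}: {pulls[a]} pulls, estimated win rate {estimate:.2f} "
              f"(true {TRUE_PROBS[a]:.2f})")

if __name__ == "__main__":
    epsilon_greedy()

With enough pulls, the estimated win rates settle close to the underlying probabilities, which is what lets us rely on them when exploiting.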
