
The Markov property

A Markov chain has the following characteristic, called the Markov property:

P(St+1 | St) = P(St+1 | S1, S2, ..., St)

This states, mathematically, that the probability distribution of the next state depends only on the current state and not on any previous states. Given our knowledge of the current state, St, the probability of reaching St+1 is the same as it would be if we also knew every state that came before St.

To illustrate this further, let's talk about a different stochastic system, one where the Markov property doesn't apply. Suppose we are working on a job site and have three pieces of equipment that will be assigned to us at random, one per day, over the course of three days. The equipment is assigned without replacement from the original pool. There are two pieces of functioning equipment and one piece that is non-functioning:

[Chart: the possible equipment assignments over the three days, drawn without replacement]

If we're assigned the non-functioning equipment on Day 1, we know for sure that on Day 2 we will be assigned functioning equipment, since only the two functioning pieces remain in the pool.

On the other hand, if we come onto the job site starting on Day 2 and are assigned functioning equipment, with no knowledge of what happened on Day 1, we have a 50% probability of getting either functioning or non-functioning equipment on Day 3. If we did have knowledge of what happened on Day 1 (that is, whether functioning or non-functioning equipment was assigned), we would know for sure what we would receive on Day 3.

Because our prediction of each outcome changes with how much of the system's history we know, this system does not have the Markov property. Knowing information about the past changes our prediction of the future.
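
To make this concrete, here is a minimal sketch (not from the original text; the helper function and variable names are illustrative) that enumerates every possible assignment order for the pool of two functioning (F) pieces and one non-functioning (NF) piece, and compares the Day 3 prediction with and without knowledge of Day 1:

```python
from itertools import permutations
from fractions import Fraction

# Pool of equipment: two functioning pieces and one non-functioning piece,
# assigned one per day without replacement.
pool = ["F", "F", "NF"]
orderings = list(permutations(pool))  # every equally likely Day 1 -> Day 3 order

def prob_nf_day3(condition):
    """P(NF on Day 3 | condition), where condition filters the orderings."""
    matching = [o for o in orderings if condition(o)]
    hits = [o for o in matching if o[2] == "NF"]
    return Fraction(len(hits), len(matching))

# Knowing only that Day 2 was functioning: Day 3 is a 50/50 split.
only_day2 = prob_nf_day3(lambda o: o[1] == "F")

# Also knowing Day 1 was functioning: Day 3 is certain to be non-functioning.
day1_and_day2 = prob_nf_day3(lambda o: o[0] == "F" and o[1] == "F")

print("P(NF on Day 3 | F on Day 2)             =", only_day2)      # 1/2
print("P(NF on Day 3 | F on Day 1, F on Day 2) =", day1_and_day2)  # 1
```

The two conditional probabilities disagree, which is exactly what it means for the system to lack the Markov property.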

You can think of a system having the Markov property as memoryless. Having more information about the past will not change our prediction of the future. If we change the system we just described so that each piece of equipment is replaced in the pool after it is assigned, the system will have the Markov property. There are now many outcomes available to us that weren't before:

[Chart: the possible equipment assignments over the three days, drawn with replacement]

In this case, if the only information we have is that we were assigned functioning equipment on Day 2, then on Day 3, we know we have a 50% chance of getting functioning equipment or non-functioning equipment.

Note that this probability calculation does not depend on the specific examples that we've chosen for the preceding chart! Think about flipping a fair coin 100 times; even if you get heads every single time, your odds of getting tails the next time are still 50%, if you're really dealing with a fair coin. Similarly, even if we are assigned non-functioning equipment every single day, our probability of getting functioning equipment the next day will still be 50%.

We can neatly model our new system as follows:

If we are in state F (functioning) today, we have a 50% chance of staying in state F or moving to state NF (non-functioning), and vice versa. Notice that this is true no matter how much information we include in our probability calculation. Previous events do not affect the probabilities of future events.
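
As a minimal sketch of this model (not from the original text; the transition table, function name, and simulation length are illustrative assumptions), the with-replacement system can be written as a two-state Markov chain whose next state is sampled from the current state alone:

```python
import random

# Transition probabilities for the with-replacement system: each row
# depends only on the current state, which is exactly the Markov property.
transitions = {
    "F":  {"F": 0.5, "NF": 0.5},
    "NF": {"F": 0.5, "NF": 0.5},
}

def next_state(current):
    """Sample tomorrow's state using only today's state."""
    states, weights = zip(*transitions[current].items())
    return random.choices(states, weights=weights)[0]

# Simulate many days: the fraction of functioning-equipment days stays near
# 0.5 no matter what sequence of states the chain has already visited.
state, f_days, days = "F", 0, 10_000
for _ in range(days):
    state = next_state(state)
    f_days += state == "F"

print(f"Fraction of functioning-equipment days: {f_days / days:.3f}")
```

Read as a matrix, the transitions table has rows that each sum to 1, with every entry equal to 0.5; it is simply the two-state model described above in tabular form.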
