
The Markov chain and Markov process

Before going into MDP, let us understand the Markov chain and Markov process, which form the foundation of MDP.

The Markov property states that the future depends only on the present and not on the past. A Markov chain is a probabilistic model that depends solely on the current state to predict the next state, not on the previous states; that is, the future is conditionally independent of the past given the present. The Markov chain strictly follows the Markov property.

For example, if we know that the current state is cloudy, we can predict that the next state could be rainy. We reached this conclusion only by considering the current state (cloudy) and not the past states, which might have been sunny, windy, and so on. However, the Markov property does not hold true for all processes. For example, when rolling a die, the next outcome has no dependency on the number currently showing on the die (the current state).

Moving from one state to another is called a transition, and its probability is called the transition probability. We can formulate the transition probabilities in the form of a table, called a Markov table, as shown next. It shows, given the current state, the probability of moving to each next state:

[Markov table: transition probabilities from each current state to the next state]
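To make this concrete, here is a minimal sketch in Python of how such a transition table can be stored and sampled. The state names and probabilities below are illustrative assumptions, not the values from the table; the only requirement is that each row of the table sums to 1:

```python
import numpy as np

# A minimal sketch of a Markov chain as a transition table.
# The states and probabilities are illustrative assumptions,
# not the actual values from the Markov table in the text.
states = ["sunny", "cloudy", "rainy", "windy"]

# transition_probs[i][j] = probability of moving from states[i] to states[j];
# each row sums to 1.
transition_probs = np.array([
    [0.5, 0.3, 0.1, 0.1],  # from sunny
    [0.2, 0.3, 0.4, 0.1],  # from cloudy
    [0.1, 0.4, 0.4, 0.1],  # from rainy
    [0.3, 0.3, 0.2, 0.2],  # from windy
])

rng = np.random.default_rng(0)

def next_state(current):
    """Sample the next state using only the current state (Markov property)."""
    i = states.index(current)
    return rng.choice(states, p=transition_probs[i])

print(next_state("cloudy"))  # for example, 'rainy'
```

Note that `next_state` looks only at the current state; nothing about the history of earlier states enters the sampling, which is exactly the Markov property.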

We can also represent the Markov chain in the form of a state diagram that shows the transition probabilities:

The preceding state diagram shows the probability of moving from one state to another. Still don't understand the Markov chain? Okay, let us talk.

Me: "What are you doing?"

You: "I'm reading about the Markov chain."

Me: "What is your plan after reading?"

You: "I'm going to sleep."

Me: "Are you sure you're going to sleep?"

You: "Probably. I'll watch TV if I'm not sleepy."

Me: "Cool; this is also a Markov chain."

You: "Eh?"

We can formulate our conversation into a Markov chain and draw a state diagram as follows:

The core concept of the Markov chain is that the future depends only on the present and not on the past. A stochastic process is called a Markov process if it follows the Markov property.
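Here is a rough sketch, again in Python, of the conversation modelled as a Markov chain. The transition probabilities (for example, 0.7 for going to sleep after reading) are assumptions made up for illustration, since the text only says "probably":

```python
import random

# A rough sketch of the conversation as a Markov chain.
# The probabilities are assumed for illustration
# ("probably sleep" becomes 0.7), not values given in the text.
transitions = {
    "reading":     [("sleeping", 0.7), ("watching TV", 0.3)],
    "watching TV": [("sleeping", 1.0)],
    "sleeping":    [("sleeping", 1.0)],  # absorbing state
}

def step(state):
    """Pick the next state using only the current state."""
    next_states, probs = zip(*transitions[state])
    return random.choices(next_states, weights=probs)[0]

# Run the chain for a few steps starting from 'reading'.
state = "reading"
trajectory = [state]
for _ in range(3):
    state = step(state)
    trajectory.append(state)
print(" -> ".join(trajectory))  # e.g. reading -> sleeping -> sleeping -> sleeping
```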
