
Parameterization of Markov chains

In the code for the Markov chain in the previous section, we used a dictionary to parameterize the Markov chain with the probability values of all the possible state transitions. Another way of representing state transitions is with a transition matrix. The transition matrix, as the name suggests, uses a tabular representation for the transition probabilities.

The following table shows the transition matrix for the Markov chain shown in Figure 1.1. Each value is the probability of the system going from the state in the row to the state in the column:

              Sunny    Rainy    Snowy
    Sunny     0.8      0.19     0.01
    Rainy     0.2      0.7      0.1
    Snowy     0.1      0.2      0.7
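If the transition probabilities from the previous section are already stored in a dictionary, they can be converted into a matrix directly. The sketch below assumes the dictionary maps each source state to an inner dictionary of target states and probabilities, as in the earlier implementation; the variable names are only illustrative:

import numpy as np

# Assumed layout of the dictionary from the previous section: outer
# keys are source states, inner keys are target states.
transition_prob = {'Sunny': {'Sunny': 0.8, 'Rainy': 0.19, 'Snowy': 0.01},
                   'Rainy': {'Sunny': 0.2, 'Rainy': 0.7, 'Snowy': 0.1},
                   'Snowy': {'Sunny': 0.1, 'Rainy': 0.2, 'Snowy': 0.7}}
states = ['Sunny', 'Rainy', 'Snowy']

# Row i of the matrix holds the probabilities of moving from
# states[i] to each state, in the same order as the states list.
transition_matrix = np.array([[transition_prob[s_from][s_to]
                               for s_to in states]
                              for s_from in states])
print(transition_matrix)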

The transition matrix represents the same information as in the dictionary, but in a more compact way. For this reason, the transition matrix is the standard way of representing Markov chains. Let's modify our MarkovChain class so that it can accept a transition matrix:

import numpy as np

class MarkovChain(object):
    def __init__(self, transition_matrix, states):
        """
        Initialize the MarkovChain instance.

        Parameters
        ----------
        transition_matrix: 2-D array
            A 2-D array representing the probabilities of change of
            state in the Markov Chain.

        states: 1-D array
            An array representing the states of the Markov Chain. It
            needs to be in the same order as transition_matrix.
        """
        self.transition_matrix = np.atleast_2d(transition_matrix)
        self.states = states
        # Map each state name to its row/column index in the matrix,
        # and keep the reverse mapping from index to state name.
        self.index_dict = {self.states[index]: index for index in
                           range(len(self.states))}
        self.state_dict = {index: self.states[index] for index in
                           range(len(self.states))}

    def next_state(self, current_state):
        """
        Returns the state of the random variable at the next time
        instance.

        Parameters
        ----------
        current_state: str
            The current state of the system.
        """
        # Sample the next state using the row of transition
        # probabilities that corresponds to the current state.
        return np.random.choice(
            self.states,
            p=self.transition_matrix[self.index_dict[current_state], :])

    def generate_states(self, current_state, no=10):
        """
        Generates the next states of the system.

        Parameters
        ----------
        current_state: str
            The state of the current random variable.

        no: int
            The number of future states to generate.
        """
        # Repeatedly sample the next state, feeding it back in as the
        # current state for the following step.
        future_states = []
        for i in range(no):
            next_state = self.next_state(current_state)
            future_states.append(next_state)
            current_state = next_state
        return future_states

Running this code should give results similar to those in the previous section. Using a transition matrix might not seem like a good idea at first, because it requires us to create extra variables to store the state indices. But when we have hundreds of states, a transition matrix is much more efficient than the simple dictionary implementation. In the next_state method, we can look up the probability values with a single NumPy indexing operation, whereas the previous implementation looped over all the state names:

>>> transition_matrix = [[0.8, 0.19, 0.01],
                         [0.2, 0.7, 0.1],
                         [0.1, 0.2, 0.7]]
>>> weather_chain = MarkovChain(transition_matrix=transition_matrix,
                                states=['Sunny', 'Rainy', 'Snowy'])
>>> weather_chain.next_state(current_state='Sunny')
'Sunny'
>>> weather_chain.next_state(current_state='Snowy')
'Sunny'
>>> weather_chain.generate_states(current_state='Snowy', no=10)
['Snowy', 'Rainy', 'Rainy', 'Rainy', 'Rainy', 'Rainy',
 'Rainy', 'Rainy', 'Sunny', 'Sunny']
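
Because each row of the transition matrix holds the outgoing probabilities of a single state, every row must sum to 1. The following is a minimal sanity check along these lines; it is not part of the MarkovChain class above, just a quick sketch using NumPy:

import numpy as np

transition_matrix = np.array([[0.8, 0.19, 0.01],
                              [0.2, 0.7, 0.1],
                              [0.1, 0.2, 0.7]])

# Each row contains the probabilities of leaving one state, so all
# rows should sum to 1 (up to floating-point tolerance).
row_sums = transition_matrix.sum(axis=1)
assert np.allclose(row_sums, 1.0), "Each row must sum to 1"
print(row_sums)  # [1. 1. 1.]

A check like this could also be placed in __init__ so that an invalid matrix is rejected when the chain is constructed.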
