# Markov Chain¶

class pgmpy.models.MarkovChain.MarkovChain(variables=None, card=None, start_state=None)[source]

Class to represent a Markov Chain with multiple kernels over a factored state space, along with methods to simulate a run.
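
The chain's transition factorizes over the variables: each variable carries its own transition kernel and advances independently of the others. A minimal sketch of one simulation step, in plain Python and independent of pgmpy's internals (`step` is an illustrative helper, not part of pgmpy's API):

```python
import random

def step(state, kernels):
    """Advance each variable one step under its own transition kernel.
    In a factored state space, the joint transition probability is the
    product of the per-variable kernel probabilities."""
    new_state = {}
    for var, current in state.items():
        dist = kernels[var][current]           # P(next | current) for this variable
        values, weights = zip(*dist.items())
        new_state[var] = random.choices(values, weights=weights)[0]
    return new_state

# the same kernels used in the examples below
kernels = {
    'intel': {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}},
    'diff': {0: {0: 0.1, 1: 0.5, 2: 0.4},
             1: {0: 0.2, 1: 0.2, 2: 0.6},
             2: {0: 0.7, 1: 0.15, 2: 0.15}},
}
state = {'intel': 0, 'diff': 2}
for _ in range(5):
    state = step(state, kernels)
```

Each call to `step` draws the next value of every variable from that variable's own row of its kernel, which is exactly the structure the class's per-variable transition models encode.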

Examples

Create an empty Markov Chain:

```>>> from pgmpy.models import MarkovChain as MC
>>> model = MC()
```

And then add variables to it

```>>> model.add_variables_from(['intel', 'diff'], [2, 3])
```

Or directly create a Markov Chain from a list of variables and their cardinalities

```>>> model = MC(['intel', 'diff'], [2, 3])
```

Then add transition models for the variables

```>>> intel_tm = {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}}
>>> diff_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}}
>>> model.add_transition_model('intel', intel_tm)
>>> model.add_transition_model('diff', diff_tm)
```

Set a start state

```>>> from pgmpy.factors.discrete import State
>>> model.set_start_state([State('intel', 0), State('diff', 2)])
```

Sample from it

```>>> model.sample(size=5)
intel  diff
0      0     2
1      1     0
2      0     1
3      1     0
4      0     2
```

add_transition_model(variable, transition_model)[source]

Adds a transition model for a particular variable.

Parameters:
• variable (any hashable python object) – must be an existing variable of the model.

• transition_model (dict or 2d array) – A dict of dicts giving valid transition probabilities for every possible state of the variable, or a 2d array representing a square matrix in which every row sums to 1 and array[i, j] gives the transition probability from state i to state j.
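
Since the two accepted formats describe the same object, a row-stochastic matrix can be converted into the nested-dict form with a few lines of plain Python (a sketch independent of pgmpy; `matrix_to_dict` is not part of its API):

```python
import numpy as np

def matrix_to_dict(tm):
    """Convert a square row-stochastic matrix into the nested-dict
    transition format {from_state: {to_state: prob}}."""
    n = tm.shape[0]
    # every row must sum to 1 (within floating-point tolerance)
    assert np.allclose(tm.sum(axis=1), 1.0)
    return {i: {j: float(tm[i, j]) for j in range(n)} for i in range(n)}

grade_tm_matrix = np.array([[0.1, 0.5, 0.4],
                            [0.2, 0.2, 0.6],
                            [0.7, 0.15, 0.15]])
grade_tm = matrix_to_dict(grade_tm_matrix)
```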

Examples

```>>> from pgmpy.models import MarkovChain as MC
>>> import numpy as np
>>> model = MC()
>>> model.add_variable('grade', 3)
>>> grade_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}}
>>> grade_tm_matrix = np.array([[0.1, 0.5, 0.4], [0.2, 0.2, 0.6], [0.7, 0.15, 0.15]])
>>> model.add_transition_model('grade', grade_tm)
>>> model.add_transition_model('grade', grade_tm_matrix)
```

add_variable(variable, card=0)[source]

Add a variable to the model.

Parameters:
• variable (any hashable python object) – Variable to be added to the model.

• card (int) – Representing the cardinality of the variable to be added.

Examples

```>>> from pgmpy.models import MarkovChain as MC
>>> model = MC()
>>> model.add_variable('x', 4)
```

add_variables_from(variables, cards)[source]

Add several variables to the model at once.

Parameters:
• variables (array-like iterable object) – List of variables to be added.

• cards (array-like iterable object) – List of cardinalities of the variables to be added.

Examples

```>>> from pgmpy.models import MarkovChain as MC
>>> model = MC()
>>> model.add_variables_from(['x', 'y'], [3, 4])
```
copy()[source]

Returns a copy of the Markov Chain model.

Returns:

Copy of the Markov Chain model.

Return type:

MarkovChain

Examples

```>>> from pgmpy.models import MarkovChain
>>> from pgmpy.factors.discrete import State
>>> model = MarkovChain()
>>> model.add_variables_from(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}}
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}
>>> model.add_transition_model('intel', intel_tm)
>>> model.add_transition_model('diff', diff_tm)
>>> model.set_start_state([State('intel', 0), State('diff', 1)])
>>> model_copy = model.copy()
>>> model_copy.transition_models
{'intel': {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}},
 'diff': {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}}
```
generate_sample(start_state=None, size=1, seed=None)[source]

Generator version of self.sample.

Return type:

List of State namedtuples, representing the assignment to all variables of the model.

Examples

```>>> from pgmpy.models.MarkovChain import MarkovChain
>>> from pgmpy.factors.discrete import State
>>> model = MarkovChain()
>>> model.add_variables_from(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}}
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}
>>> model.add_transition_model('intel', intel_tm)
>>> model.add_transition_model('diff', diff_tm)
>>> gen = model.generate_sample([State('intel', 0), State('diff', 0)], 2)
>>> [sample for sample in gen]
[[State(var='intel', state=2), State(var='diff', state=1)],
 [State(var='intel', state=2), State(var='diff', state=0)]]
```
is_stationarity(tolerance=0.2, sample=None)[source]

Checks if the given Markov chain is stationary, i.e. whether the sampled state frequencies are consistent with the chain's steady-state probability values.

Parameters:
• tolerance (float) – The maximum allowed difference between the actual steady-state value and the computed value.

• sample ([State(i, j)]) – The list of states the Markov chain has sampled.

Returns:

True, if the Markov chain converges to the steady-state distribution within the given tolerance; False otherwise.

Return type:

Boolean
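
For intuition about what this check verifies: a chain is at its steady state when the distribution π satisfies π = πP for the transition matrix P. Outside pgmpy, π for a single variable's kernel can be approximated by power iteration (a minimal sketch of the mathematical fixed point; pgmpy's check roughly compares frequencies from a sampled run against this steady state):

```python
import numpy as np

def steady_state(tm, iters=1000):
    """Approximate the stationary distribution pi (with pi @ tm == pi)
    of a row-stochastic matrix by power iteration."""
    pi = np.full(tm.shape[0], 1.0 / tm.shape[0])  # start uniform
    for _ in range(iters):
        pi = pi @ tm
    return pi

# the 'intel' kernel from the examples above, in matrix form
intel_tm = np.array([[0.25, 0.75],
                     [0.5, 0.5]])
pi = steady_state(intel_tm)   # → approximately [0.4, 0.6]
assert np.allclose(pi @ intel_tm, pi)  # pi is a fixed point of the chain
```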

Examples

```>>> from pgmpy.models.MarkovChain import MarkovChain
>>> from pgmpy.factors.discrete import State
>>> model = MarkovChain()
>>> model.add_variables_from(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}}
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}
>>> model.add_transition_model('intel', intel_tm)
>>> model.add_transition_model('diff', diff_tm)
>>> model.is_stationarity()
True
```
prob_from_sample(state, sample=None, window_size=None)[source]

Given an instantiation (partial or complete) of the variables of the model, compute the probability of observing it over multiple windows in a given sample.

If ‘sample’ is not passed as an argument, generate the statistic by sampling from the Markov Chain, starting with a random initial state.
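
The window statistic itself is easy to illustrate outside pgmpy: split a sampled trajectory into fixed-size windows and take the fraction of matches in each window (a sketch of the idea only; `windowed_prob` is not part of pgmpy's API):

```python
def windowed_prob(sample, state, window_size):
    """For each consecutive window of `window_size` samples, return the
    fraction of samples in that window equal to `state`."""
    probs = []
    for start in range(0, len(sample) - window_size + 1, window_size):
        window = sample[start:start + window_size]
        probs.append(sum(s == state for s in window) / window_size)
    return probs

# a toy single-variable trajectory
trajectory = [0, 1, 0, 0, 2, 1, 0, 1, 1, 0, 0, 0]
windowed_prob(trajectory, 0, 4)  # → [0.75, 0.25, 0.75]
```

Each entry of the returned array is an estimate of the state's probability from one window; if the chain has mixed, the entries should cluster around the steady-state value.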

Examples

```>>> from pgmpy.models.MarkovChain import MarkovChain as MC
>>> from pgmpy.factors.discrete import State
>>> model = MC(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0, 1: 0.5, 2: 0.5}}
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}
>>> model.add_transition_model('intel', intel_tm)
>>> model.add_transition_model('diff', diff_tm)
>>> model.prob_from_sample([State('diff', 0)])
array([ 0.27,  0.4 ,  0.18,  0.23, ..., 0.29])
```
random_state()[source]

Generates a random state of the Markov Chain.

Return type:

List of namedtuples, representing a random assignment to all variables of the model.

Examples

```>>> from pgmpy.models import MarkovChain as MC
>>> model = MC(['intel', 'diff'], [2, 3])
>>> model.random_state()
[State(var='diff', state=2), State(var='intel', state=1)]
```
sample(start_state=None, size=1, seed=None)[source]

Sample from the Markov Chain.

Parameters:
• start_state (dict or array-like iterable) – Representing the starting states of the variables. If None is passed, a random start_state is chosen.

• size (int) – Number of samples to be generated.

Return type:

pandas.DataFrame

Examples

```>>> from pgmpy.models import MarkovChain as MC
>>> from pgmpy.factors.discrete import State
>>> model = MC(['intel', 'diff'], [2, 3])
>>> model.set_start_state([State('intel', 0), State('diff', 2)])
>>> intel_tm = {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}}
>>> diff_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}}
>>> model.add_transition_model('intel', intel_tm)
>>> model.add_transition_model('diff', diff_tm)
>>> model.sample(size=5)
intel  diff
0      0     2
1      1     0
2      0     1
3      1     0
4      0     2
```
set_start_state(start_state)[source]

Set the start state of the Markov Chain. If the start_state is given as an array-like iterable, its contents are reordered in the internal representation.

Parameters:

start_state (dict or array-like iterable object) – Dict (or list) of tuples representing the starting states of the variables.
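
The reordering can be pictured with a small stand-alone helper (`reorder_states` is hypothetical, not part of pgmpy's API): given the model's variable order, sort the supplied (variable, state) pairs into that order:

```python
def reorder_states(start_state, variable_order):
    """Sort (variable, state) pairs to match the model's variable order,
    so the internal representation is position-consistent."""
    index = {var: i for i, var in enumerate(variable_order)}
    return sorted(start_state, key=lambda s: index[s[0]])

# pairs given out of order are rearranged to ['a', 'b'] order
reorder_states([('b', 1), ('a', 0)], ['a', 'b'])  # → [('a', 0), ('b', 1)]
```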

Examples

```>>> from pgmpy.models import MarkovChain as MC
>>> from pgmpy.factors.discrete import State
>>> model = MC(['a', 'b'], [2, 2])
>>> model.set_start_state([State('a', 0), State('b', 1)])
```