Markov Chain

class pgmpy.models.MarkovChain.MarkovChain(variables=None, card=None, start_state=None)[source]

Class to represent a Markov Chain with multiple kernels for factored state space, along with methods to simulate a run.

Examples

Create an empty Markov Chain:

>>> from pgmpy.models import MarkovChain as MC
>>> model = MC()

And then add variables to it

>>> model.add_variables_from(['intel', 'diff'], [2, 3])

Or directly create a Markov Chain from a list of variables and their cardinalities

>>> model = MC(['intel', 'diff'], [2, 3])

Add transition models

>>> intel_tm = {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}}
>>> model.add_transition_model('intel', intel_tm)
>>> diff_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}}
>>> model.add_transition_model('diff', diff_tm)

Set a start state

>>> from pgmpy.factors.discrete import State
>>> model.set_start_state([State('intel', 0), State('diff', 2)])

Sample from it

>>> model.sample(size=5)
   intel  diff
0      0     2
1      1     0
2      0     1
3      1     0
4      0     2
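
The factored state space means each variable evolves under its own transition kernel, independently of the others. As a minimal sketch of one simulation step, independent of pgmpy and using hypothetical helper names, a weighted draw per variable looks like this:

```python
import random

def step(state, transition_models, rng=random):
    """Advance every variable one step under its own kernel.

    state: dict mapping variable name -> current state index.
    transition_models: dict mapping variable name -> {state: {next_state: prob}}.
    """
    new_state = {}
    for var, current in state.items():
        probs = transition_models[var][current]
        next_states = list(probs)
        # random.choices performs a weighted draw from the kernel row
        new_state[var] = rng.choices(next_states,
                                     weights=[probs[s] for s in next_states])[0]
    return new_state

intel_tm = {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}}
diff_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6},
           2: {0: 0.7, 1: 0.15, 2: 0.15}}
state = {'intel': 0, 'diff': 2}
state = step(state, {'intel': intel_tm, 'diff': diff_tm})
```

Each call to `step` draws the next state of every variable from the row of its kernel indexed by the variable's current state, which is exactly the factored structure the class models.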
add_transition_model(variable, transition_model)[source]

Adds a transition model for a particular variable.

Parameters:
  • variable (any hashable python object) – must be an existing variable of the model.

  • transition_model (dict or 2D array) – dict representing valid transition probabilities defined for every possible state of the variable, or a 2D array representing a square matrix in which every row sums to 1, where array[i, j] indicates the transition probability from state i to state j.

Examples

>>> from pgmpy.models import MarkovChain as MC
>>> import numpy as np
>>> model = MC()
>>> model.add_variable('grade', 3)
>>> grade_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}}
>>> grade_tm_matrix = np.array([[0.1, 0.5, 0.4], [0.2, 0.2, 0.6], [0.7, 0.15, 0.15]])
>>> model.add_transition_model('grade', grade_tm)
>>> model.add_transition_model('grade', grade_tm_matrix)
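
The two accepted forms are equivalent: the nested dict maps each state to a row of transition probabilities, and the matrix stacks those rows. A sketch of the conversion, with a hypothetical helper name, also shows the row-stochasticity requirement:

```python
import numpy as np

def tm_dict_to_matrix(tm):
    """Convert a {state: {next_state: prob}} dict to a row-stochastic matrix."""
    n = len(tm)
    matrix = np.zeros((n, n))
    for i, row in tm.items():
        for j, p in row.items():
            matrix[i, j] = p
    # every row of a valid transition model must sum to 1
    assert np.allclose(matrix.sum(axis=1), 1.0)
    return matrix

grade_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6},
            2: {0: 0.7, 1: 0.15, 2: 0.15}}
grade_tm_matrix = tm_dict_to_matrix(grade_tm)
```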
add_variable(variable, card=0)[source]

Add a variable to the model.

Parameters:
  • variable (any hashable python object)

  • card (int) – Representing the cardinality of the variable to be added.

Examples

>>> from pgmpy.models import MarkovChain as MC
>>> model = MC()
>>> model.add_variable('x', 4)
add_variables_from(variables, cards)[source]

Add several variables to the model at once.

Parameters:
  • variables (array-like iterable object) – List of variables to be added.

  • cards (array-like iterable object) – List of cardinalities of the variables to be added.

Examples

>>> from pgmpy.models import MarkovChain as MC
>>> model = MC()
>>> model.add_variables_from(['x', 'y'], [3, 4])
copy()[source]

Returns a copy of Markov Chain Model.

Returns:

Copy of the MarkovChain model.

Return type:

MarkovChain

Examples

>>> from pgmpy.models import MarkovChain
>>> from pgmpy.factors.discrete import State
>>> model = MarkovChain()
>>> model.add_variables_from(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2:0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}}
>>> model.add_transition_model('intel', intel_tm)
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1:0.75}}
>>> model.add_transition_model('diff', diff_tm)
>>> model.set_start_state([State('intel', 0), State('diff', 1)])
>>> model_copy = model.copy()
>>> model_copy.transition_models
{'intel': {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}},
 'diff': {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}}
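
A copy must be independent of the original: mutating the copy's nested transition-model dicts should not leak back. A sketch of that expected semantics, using `copy.deepcopy` on a hypothetical stand-in for the model's state:

```python
import copy

# Hypothetical stand-in for the model's internal state: copying must
# duplicate the nested dicts, not just the top-level references.
model = {'variables': ['intel', 'diff'],
         'transition_models': {'diff': {0: {0: 0.5, 1: 0.5},
                                        1: {0: 0.25, 1: 0.75}}}}
model_copy = copy.deepcopy(model)

# Editing the copy leaves the original untouched.
model_copy['transition_models']['diff'][0][0] = 0.9
```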
generate_sample(start_state=None, size=1, seed=None)[source]

Generator version of self.sample; yields one sample at a time instead of collecting them into a DataFrame.

Return type:

Generator of lists of State namedtuples, each representing an assignment to all variables of the model.

Examples

>>> from pgmpy.models.MarkovChain import MarkovChain
>>> from pgmpy.factors.discrete import State
>>> model = MarkovChain()
>>> model.add_variables_from(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2:0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}}
>>> model.add_transition_model('intel', intel_tm)
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1:0.75}}
>>> model.add_transition_model('diff', diff_tm)
>>> gen = model.generate_sample([State('intel', 0), State('diff', 0)], 2)
>>> [sample for sample in gen]
[[State(var='intel', state=2), State(var='diff', state=1)],
 [State(var='intel', state=2), State(var='diff', state=0)]]
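
The generator form is useful when samples are consumed one at a time. A minimal sketch of the idea, independent of pgmpy and with hypothetical names, yields one full assignment per step:

```python
import random

def generate_sample(start_state, transition_models, size, rng=random):
    """Yield `size` successive states, each a dict {var: state index}."""
    state = dict(start_state)
    for _ in range(size):
        # advance every variable one step under its own kernel
        state = {
            var: rng.choices(list(tm[state[var]]),
                             weights=list(tm[state[var]].values()))[0]
            for var, tm in transition_models.items()
        }
        yield dict(state)

intel_tm = {0: {0: 0.2, 1: 0.4, 2: 0.4}, 1: {0: 0, 1: 0.5, 2: 0.5},
            2: {0: 0.3, 1: 0.3, 2: 0.4}}
diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1: 0.75}}
samples = list(generate_sample({'intel': 0, 'diff': 0},
                               {'intel': intel_tm, 'diff': diff_tm}, 2))
```

Because it is a generator, nothing is computed until the caller iterates, so long runs can be processed without materialising the whole chain.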
is_stationarity(tolerance=0.2, sample=None)[source]

Checks whether the given Markov chain is stationary, i.e. whether the steady-state probability values estimated from a sample are consistent with the chain's computed steady-state distribution.

Parameters:
  • tolerance (float) – maximum allowed difference between the actual steady-state probability of a state and the value estimated from the sample.

  • sample (list of State namedtuples) – the list of states sampled from the Markov chain.

Returns:

True, if the Markov chain converges to the steady-state distribution within the given tolerance; False otherwise.

Return type:

Boolean

Examples

>>> from pgmpy.models.MarkovChain import MarkovChain
>>> from pgmpy.factors.discrete import State
>>> model = MarkovChain()
>>> model.add_variables_from(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2:0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {0: 0.3, 1: 0.3, 2: 0.4}}
>>> model.add_transition_model('intel', intel_tm)
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1:0.75}}
>>> model.add_transition_model('diff', diff_tm)
>>> model.is_stationarity()
True
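
The steady-state distribution the check compares against satisfies pi = pi P for the transition matrix P. A sketch of approximating it by power iteration, using the `intel` kernel from the example above (hypothetical helper name, not the pgmpy implementation):

```python
import numpy as np

def stationary_distribution(matrix, iters=200):
    """Approximate the stationary distribution of a row-stochastic
    matrix by repeatedly applying it to a uniform distribution."""
    pi = np.full(matrix.shape[0], 1.0 / matrix.shape[0])
    for _ in range(iters):
        pi = pi @ matrix
    return pi

# row-stochastic matrix for the intel_tm kernel above
intel_matrix = np.array([[0.2, 0.4, 0.4],
                         [0.0, 0.5, 0.5],
                         [0.3, 0.3, 0.4]])
pi = stationary_distribution(intel_matrix)
```

A stationarity check like `is_stationarity` can then compare empirical state frequencies from a sample against `pi`, flagging the chain as non-stationary when they differ by more than the tolerance.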
prob_from_sample(state, sample=None, window_size=None)[source]

Given an instantiation (partial or complete) of the variables of the model, compute the probability of observing it over multiple windows in a given sample.

If ‘sample’ is not passed as an argument, generate the statistic by sampling from the Markov Chain, starting with a random initial state.

Examples

>>> from pgmpy.models.MarkovChain import MarkovChain as MC
>>> from pgmpy.factors.discrete import State
>>> model = MC(['intel', 'diff'], [3, 2])
>>> intel_tm = {0: {0: 0.2, 1: 0.4, 2:0.4}, 1: {0: 0, 1: 0.5, 2: 0.5}, 2: {2: 0.5, 1:0.5}}
>>> model.add_transition_model('intel', intel_tm)
>>> diff_tm = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 1:0.75}}
>>> model.add_transition_model('diff', diff_tm)
>>> model.prob_from_sample([State('diff', 0)])
array([ 0.27,  0.4 ,  0.18,  0.23, ..., 0.29])
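A sketch of the windowed statistic itself, independent of pgmpy: split the sample into consecutive windows and count how often the partial assignment matches in each (names and the dict-based sample format are assumptions for illustration):

```python
def prob_from_sample(partial_state, sample, window_size):
    """Empirical probability of observing `partial_state` (a dict of
    variable -> state) in each consecutive window of the sample."""
    def matches(full):
        return all(full.get(var) == st for var, st in partial_state.items())

    probs = []
    for start in range(0, len(sample) - window_size + 1, window_size):
        window = sample[start:start + window_size]
        probs.append(sum(matches(s) for s in window) / window_size)
    return probs

sample = [{'diff': 0}, {'diff': 1}, {'diff': 0}, {'diff': 0}]
probs = prob_from_sample({'diff': 0}, sample, window_size=2)
# first window matches once out of two states, second window twice
```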
random_state()[source]

Generates a random state of the Markov Chain.

Return type:

List of namedtuples, representing a random assignment to all variables of the model.

Examples

>>> from pgmpy.models import MarkovChain as MC
>>> model = MC(['intel', 'diff'], [2, 3])
>>> model.random_state()
[State(var='diff', state=2), State(var='intel', state=1)]
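
Conceptually this is just one uniform draw per variable over its cardinality. A sketch, independent of pgmpy and with a hypothetical helper name:

```python
import random

def random_state(cardinalities, rng=random):
    """Draw one uniformly random state index per variable."""
    return {var: rng.randrange(card) for var, card in cardinalities.items()}

state = random_state({'intel': 2, 'diff': 3})
```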
sample(start_state=None, size=1, seed=None)[source]

Sample from the Markov Chain.

Parameters:
  • start_state (dict or array-like iterable) – Representing the starting states of the variables. If None is passed, a random start_state is chosen.

  • size (int) – Number of samples to be generated.

  • seed (int) – Seed for the random number generator.

Return type:

pandas.DataFrame

Examples

>>> from pgmpy.models import MarkovChain as MC
>>> from pgmpy.factors.discrete import State
>>> model = MC(['intel', 'diff'], [2, 3])
>>> model.set_start_state([State('intel', 0), State('diff', 2)])
>>> intel_tm = {0: {0: 0.25, 1: 0.75}, 1: {0: 0.5, 1: 0.5}}
>>> model.add_transition_model('intel', intel_tm)
>>> diff_tm = {0: {0: 0.1, 1: 0.5, 2: 0.4}, 1: {0: 0.2, 1: 0.2, 2: 0.6}, 2: {0: 0.7, 1: 0.15, 2: 0.15}}
>>> model.add_transition_model('diff', diff_tm)
>>> model.sample(size=5)
   intel  diff
0      0     2
1      1     0
2      0     1
3      1     0
4      0     2
set_start_state(start_state)[source]

Set the start state of the Markov Chain. If the start_state is given as an array-like iterable, its contents are reordered in the internal representation.

Parameters:

start_state (dict or array-like iterable object) – Dict (or list) of tuples representing the starting states of the variables.

Examples

>>> from pgmpy.models import MarkovChain as MC
>>> from pgmpy.factors.discrete import State
>>> model = MC(['a', 'b'], [2, 2])
>>> model.set_start_state([State('a', 0), State('b', 1)])
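
A valid start state must assign exactly one in-range value to every variable of the model. A sketch of that validation, independent of pgmpy and with hypothetical names:

```python
def check_start_state(start_state, cardinalities):
    """Validate that `start_state` (dict of variable -> state index)
    covers every variable exactly once, with each state in range."""
    if set(start_state) != set(cardinalities):
        raise ValueError("start state must cover every variable exactly once")
    for var, st in start_state.items():
        if not 0 <= st < cardinalities[var]:
            raise ValueError(f"state {st} out of range for {var!r}")
    return True

ok = check_start_state({'a': 0, 'b': 1}, {'a': 2, 'b': 2})
```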