Belief Propagation with Message Passing

class pgmpy.inference.ExactInference.BeliefPropagationWithMessagePassing(model: FactorGraph, check_model=True)

Class for performing efficient inference using the Belief Propagation method on factor graphs with no loops.

The message-passing algorithm recursively parses the factor graph to propagate the model's beliefs and infer the posterior distribution of the queried variable. The recursion stops when reaching an observed variable or an unobserved root/leaf variable.

It does not work for loopy graphs.

Parameters:

model (FactorGraph) – Model on which to run the inference.

References

Algorithm 2.1 in https://www.mbmlbook.com/LearningSkills_Testing_out_the_model.html by J Winn (Microsoft Research).

static calc_factor_node_message(factor, incoming_messages, target_var)

Returns the outgoing message for a factor node, which is the multiplication of the incoming messages with the factor function (CPT).

The variables' order in the incoming messages list must match the variable order in the CPT's dimensions.

Parameters:
• factor (str) – the factor node from which to compute the outgoing message

• incoming_messages (list) – list of messages coming to this factor node

• target_var (str) – the variable node to which the outgoing message is sent
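The factor-to-variable message described above can be sketched with NumPy as follows. This is a minimal illustration, not pgmpy's actual implementation; the helper name `factor_node_message` and the axis convention (incoming messages listed in the order of the non-target axes) are assumptions for the example.

```python
import numpy as np

# Hypothetical sketch (not pgmpy's internals): multiply the factor's values
# by each incoming message along that variable's axis, then sum out every
# variable except the target.
def factor_node_message(factor_values, incoming_messages, target_axis):
    msg = np.asarray(factor_values, dtype=float)
    other_axes = [a for a in range(msg.ndim) if a != target_axis]
    for axis, m in zip(other_axes, incoming_messages):
        shape = [1] * msg.ndim
        shape[axis] = len(m)
        msg = msg * np.asarray(m).reshape(shape)  # broadcast along `axis`
    return msg.sum(axis=tuple(other_axes))        # marginalize the rest out

# Example: a factor over ("B", "A") with shape (3, 2); message sent to "B"
phi = np.array([[0.2, 0.05], [0.3, 0.15], [0.5, 0.8]])
m_A = np.array([0.4, 0.6])
message_to_B = factor_node_message(phi, [m_A], target_axis=0)
# message_to_B == [0.11, 0.21, 0.68]
```

Note that the incoming message order must match the factor's dimension order, mirroring the requirement stated above for the CPT's dimensions.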

calc_variable_node_message(variable, incoming_messages)

The outgoing message is the element-wise product of all incoming messages.

• If there are no incoming messages, returns a uniform message.

• If there is only one incoming message, returns that message.

• Otherwise, returns the product of all incoming messages.

Parameters:
• variable (str) – the variable node from which to compute the outgoing message

• incoming_messages (list) – list of messages coming to this variable node
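The three cases above can be sketched with NumPy. This is a minimal illustration, not pgmpy's actual implementation; the helper name `variable_node_message` and the explicit `cardinality` parameter are assumptions for the example.

```python
import numpy as np

# Hypothetical sketch (not pgmpy's internals) of the variable-to-factor
# message: element-wise product of all incoming messages, with a uniform
# fallback when there are none.
def variable_node_message(cardinality, incoming_messages):
    if not incoming_messages:
        # No incoming messages: send an uninformative (uniform) message
        return np.full(cardinality, 1.0 / cardinality)
    if len(incoming_messages) == 1:
        # A single incoming message passes through unchanged
        return np.asarray(incoming_messages[0], dtype=float)
    # Otherwise, multiply all incoming messages element-wise
    msg = np.ones(cardinality)
    for m in incoming_messages:
        msg = msg * np.asarray(m, dtype=float)
    return msg

uniform = variable_node_message(2, [])                        # [0.5, 0.5]
product = variable_node_message(2, [[0.2, 0.8], [0.5, 0.5]])  # [0.1, 0.4]
```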

query(variables, evidence=None, virtual_evidence=None, get_messages=False)

Computes the posterior distribution for each queried variable, given the evidence and the virtual evidence. Optionally also returns the computed messages.

Parameters:
• variables (list) – List of variables for which you want to compute the posterior.

• evidence (dict or None (default: None)) – A dict key, value pair as {var: state_of_var_observed}. None if no evidence.

• virtual_evidence (list or None (default: None)) – A list of pgmpy.factors.discrete.TabularCPD representing the virtual evidence. Each virtual evidence becomes a virtual message that gets added to the list of computed messages incoming to the variable node. None if no virtual evidence.

Returns:

• If get_messages is False, returns a dict of variable, posterior distribution pairs: {variable: pgmpy.factors.discrete.DiscreteFactor}.

• If get_messages is True, returns:

1. A dict of variable, posterior distribution pairs: {variable: pgmpy.factors.discrete.DiscreteFactor}.

2. A dict of all messages sent from a factor to a variable node: {"{pgmpy.factors.discrete.DiscreteFactor.variables} -> variable": np.array}.

Examples

```>>> from pgmpy.factors.discrete import DiscreteFactor, TabularCPD
>>> from pgmpy.models import FactorGraph
>>> from pgmpy.inference import BeliefPropagationWithMessagePassing
>>> factor_graph = FactorGraph()
>>> factor_graph.add_nodes_from(["A", "B", "C", "D"])
>>> phi1 = DiscreteFactor(["A"], [2], [0.4, 0.6])
>>> phi2 = DiscreteFactor(
...     ["B", "A"], [3, 2], [[0.2, 0.05], [0.3, 0.15], [0.5, 0.8]]
... )
>>> phi3 = DiscreteFactor(["C", "B"], [2, 3], [[0.4, 0.5, 0.1], [0.6, 0.5, 0.9]])
>>> phi4 = DiscreteFactor(
...     ["D", "B"], [3, 3], [[0.1, 0.1, 0.2], [0.3, 0.2, 0.1], [0.6, 0.7, 0.7]]
... )
>>> factor_graph.add_factors(phi1, phi2, phi3, phi4)
>>> factor_graph.add_edges_from(
...     [
...         (phi1, "A"),
...         ("A", phi2),
...         (phi2, "B"),
...         ("B", phi3),
...         (phi3, "C"),
...         ("B", phi4),
...         (phi4, "D"),
...     ]
... )
>>> belief_propagation = BeliefPropagationWithMessagePassing(factor_graph)
>>> belief_propagation.query(variables=["B", "C"],
...                          evidence={"D": 0},
...                          virtual_evidence=[TabularCPD("A", 2, [[0.3], [0.7]])])
```