Belief Propagation¶
- class pgmpy.inference.ExactInference.BeliefPropagation(model)[source]¶
Class for performing inference using the belief propagation method.
Creates a junction tree or clique tree (JunctionTree class) for the input probabilistic graphical model and calibrates the resulting junction tree using belief propagation.
- Parameters:
model (DiscreteBayesianNetwork, DiscreteMarkovNetwork, FactorGraph, JunctionTree) – model for which inference is to be performed
- calibrate()[source]¶
Calibration using belief propagation in junction tree or clique tree.
Examples
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> from pgmpy.factors.discrete import TabularCPD
>>> from pgmpy.inference import BeliefPropagation
>>> G = DiscreteBayesianNetwork(
...     [
...         ("diff", "grade"),
...         ("intel", "grade"),
...         ("intel", "SAT"),
...         ("grade", "letter"),
...     ]
... )
>>> diff_cpd = TabularCPD("diff", 2, [[0.2], [0.8]])
>>> intel_cpd = TabularCPD("intel", 3, [[0.5], [0.3], [0.2]])
>>> grade_cpd = TabularCPD(
...     "grade",
...     3,
...     [
...         [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
...         [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
...         [0.8, 0.8, 0.8, 0.8, 0.8, 0.8],
...     ],
...     evidence=["diff", "intel"],
...     evidence_card=[2, 3],
... )
>>> sat_cpd = TabularCPD(
...     "SAT",
...     2,
...     [[0.1, 0.2, 0.7], [0.9, 0.8, 0.3]],
...     evidence=["intel"],
...     evidence_card=[3],
... )
>>> letter_cpd = TabularCPD(
...     "letter",
...     2,
...     [[0.1, 0.4, 0.8], [0.9, 0.6, 0.2]],
...     evidence=["grade"],
...     evidence_card=[3],
... )
>>> G.add_cpds(diff_cpd, intel_cpd, grade_cpd, sat_cpd, letter_cpd)
>>> bp = BeliefPropagation(G)
>>> bp.calibrate()
- get_clique_beliefs()[source]¶
Returns clique beliefs. Should be called after the clique tree (or junction tree) is calibrated.
- get_sepset_beliefs()[source]¶
Returns sepset beliefs. Should be called after the clique tree (or junction tree) is calibrated.
- map_query(variables=None, evidence=None, virtual_evidence=None, show_progress=True)[source]¶
MAP query method using belief propagation. Returns the most probable state in the joint distribution of the given variables.
- Parameters:
variables (list) – list of variables for which you want to compute the probability
virtual_evidence (list (default: None)) – A list of pgmpy.factors.discrete.TabularCPD objects representing the virtual evidence.
evidence (dict) – A dict of observed states as {var: state_of_var_observed}; None if no evidence.
show_progress (boolean) – If True, shows a progress bar.
Examples
>>> from pgmpy.factors.discrete import TabularCPD
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> from pgmpy.inference import BeliefPropagation
>>> bayesian_model = DiscreteBayesianNetwork(
...     [("A", "J"), ("R", "J"), ("J", "Q"), ("J", "L"), ("G", "L")]
... )
>>> cpd_a = TabularCPD("A", 2, [[0.2], [0.8]])
>>> cpd_r = TabularCPD("R", 2, [[0.4], [0.6]])
>>> cpd_j = TabularCPD(
...     "J", 2, [[0.9, 0.6, 0.7, 0.1], [0.1, 0.4, 0.3, 0.9]], ["R", "A"], [2, 2]
... )
>>> cpd_q = TabularCPD("Q", 2, [[0.9, 0.2], [0.1, 0.8]], ["J"], [2])
>>> cpd_l = TabularCPD(
...     "L",
...     2,
...     [[0.9, 0.45, 0.8, 0.1], [0.1, 0.55, 0.2, 0.9]],
...     ["G", "J"],
...     [2, 2],
... )
>>> cpd_g = TabularCPD("G", 2, [[0.6], [0.4]])
>>> bayesian_model.add_cpds(cpd_a, cpd_r, cpd_j, cpd_q, cpd_l, cpd_g)
>>> belief_propagation = BeliefPropagation(bayesian_model)
>>> belief_propagation.map_query(
...     variables=["J", "Q"], evidence={"A": 0, "R": 0, "G": 0, "L": 1}
... )
- max_calibrate()[source]¶
Max-calibration of the junction tree using belief propagation.
Examples
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> from pgmpy.factors.discrete import TabularCPD
>>> from pgmpy.inference import BeliefPropagation
>>> G = DiscreteBayesianNetwork(
...     [
...         ("diff", "grade"),
...         ("intel", "grade"),
...         ("intel", "SAT"),
...         ("grade", "letter"),
...     ]
... )
>>> diff_cpd = TabularCPD("diff", 2, [[0.2], [0.8]])
>>> intel_cpd = TabularCPD("intel", 3, [[0.5], [0.3], [0.2]])
>>> grade_cpd = TabularCPD(
...     "grade",
...     3,
...     [
...         [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
...         [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
...         [0.8, 0.8, 0.8, 0.8, 0.8, 0.8],
...     ],
...     evidence=["diff", "intel"],
...     evidence_card=[2, 3],
... )
>>> sat_cpd = TabularCPD(
...     "SAT",
...     2,
...     [[0.1, 0.2, 0.7], [0.9, 0.8, 0.3]],
...     evidence=["intel"],
...     evidence_card=[3],
... )
>>> letter_cpd = TabularCPD(
...     "letter",
...     2,
...     [[0.1, 0.4, 0.8], [0.9, 0.6, 0.2]],
...     evidence=["grade"],
...     evidence_card=[3],
... )
>>> G.add_cpds(diff_cpd, intel_cpd, grade_cpd, sat_cpd, letter_cpd)
>>> bp = BeliefPropagation(G)
>>> bp.max_calibrate()
- query(variables, evidence=None, virtual_evidence=None, joint=True, show_progress=True)[source]¶
Query method using belief propagation.
- Parameters:
variables (list) – list of variables for which you want to compute the probability
evidence (dict) – A dict of observed states as {var: state_of_var_observed}; None if no evidence.
virtual_evidence (list (default: None)) – A list of pgmpy.factors.discrete.TabularCPD objects representing the virtual evidence.
joint (boolean) – If True, returns a single joint distribution over variables. If False, returns a dict of marginal distributions, one per variable.
show_progress (boolean) – If True, shows a progress bar.
Examples
>>> from pgmpy.factors.discrete import TabularCPD
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> from pgmpy.inference import BeliefPropagation
>>> bayesian_model = DiscreteBayesianNetwork(
...     [("A", "J"), ("R", "J"), ("J", "Q"), ("J", "L"), ("G", "L")]
... )
>>> cpd_a = TabularCPD("A", 2, [[0.2], [0.8]])
>>> cpd_r = TabularCPD("R", 2, [[0.4], [0.6]])
>>> cpd_j = TabularCPD(
...     "J", 2, [[0.9, 0.6, 0.7, 0.1], [0.1, 0.4, 0.3, 0.9]], ["R", "A"], [2, 2]
... )
>>> cpd_q = TabularCPD("Q", 2, [[0.9, 0.2], [0.1, 0.8]], ["J"], [2])
>>> cpd_l = TabularCPD(
...     "L",
...     2,
...     [[0.9, 0.45, 0.8, 0.1], [0.1, 0.55, 0.2, 0.9]],
...     ["G", "J"],
...     [2, 2],
... )
>>> cpd_g = TabularCPD("G", 2, [[0.6], [0.4]])
>>> bayesian_model.add_cpds(cpd_a, cpd_r, cpd_j, cpd_q, cpd_l, cpd_g)
>>> belief_propagation = BeliefPropagation(bayesian_model)
>>> belief_propagation.query(
...     variables=["J", "Q"], evidence={"A": 0, "R": 0, "G": 0, "L": 1}
... )