VariableElimination#
- class pgmpy.inference.VariableElimination(model)[source]#
Bases: Inference
- induced_graph(elimination_order)[source]#
Returns the induced graph formed by running Variable Elimination on the network.
- Parameters:
- elimination_order: list, array like
List of variables in the order in which they are to be eliminated.
Examples
>>> import numpy as np
>>> import pandas as pd
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> from pgmpy.inference import VariableElimination
>>> values = pd.DataFrame(
...     np.random.randint(low=0, high=2, size=(1000, 5)),
...     columns=["A", "B", "C", "D", "E"],
... )
>>> model = DiscreteBayesianNetwork(
...     [("A", "B"), ("C", "B"), ("C", "D"), ("B", "E")]
... )
>>> model.fit(values)
<pgmpy.models...DiscreteBayesianNetwork object at 0x...>
>>> inference = VariableElimination(model)
>>> inference.induced_graph(["C", "D", "A", "B", "E"])
<networkx.classes.graph.Graph object at 0x...>
- induced_width(elimination_order)[source]#
Returns the width (integer) of the induced graph formed by running Variable Elimination on the network. The width is defined as the number of nodes in the largest clique of the induced graph, minus 1.
- Parameters:
- elimination_order: list, array like
List of variables in the order in which they are to be eliminated.
Examples
>>> import numpy as np
>>> import pandas as pd
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> from pgmpy.inference import VariableElimination
>>> values = pd.DataFrame(
...     np.random.randint(low=0, high=2, size=(1000, 5)),
...     columns=["A", "B", "C", "D", "E"],
... )
>>> model = DiscreteBayesianNetwork(
...     [("A", "B"), ("C", "B"), ("C", "D"), ("B", "E")]
... )
>>> model.fit(values)
<pgmpy.models...DiscreteBayesianNetwork object at 0x...>
>>> inference = VariableElimination(model)
>>> inference.induced_width(["C", "D", "A", "B", "E"])
3
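For intuition, the width above can be reproduced with networkx alone. The sketch below hand-builds the induced graph for the elimination order ["C", "D", "A", "B", "E"]: moralization of the example network adds the edge A-C (marrying B's parents), and eliminating C first connects its neighbors, adding A-D and B-D. The fill edges here were worked out by hand for this specific example.

```python
import networkx as nx

# Induced graph for elimination order ["C", "D", "A", "B", "E"] on the
# example network A->B, C->B, C->D, B->E. Original (moralized) edges
# plus the fill edges A-D and B-D introduced when C is eliminated.
induced = nx.Graph([
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),  # moral graph
    ("A", "D"), ("B", "D"),                          # fill edges from eliminating C
    ("B", "E"),
])

# Width = (size of the largest maximal clique) - 1. Here the largest
# clique is {A, B, C, D}, so the width is 3, matching induced_width above.
width = max(len(clique) for clique in nx.find_cliques(induced)) - 1
print(width)  # 3
```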
- map_query(variables=None, evidence=None, virtual_evidence=None, elimination_order='MinFill', show_progress=True)[source]#
Computes the MAP query over the variables given the evidence. Returns the most probable state assignment in the joint distribution of the variables.
- Parameters:
- variables: list
List of variables over which to compute the MAP query.
- evidence: dict
A dict of the form {variable: observed_state}; None if there is no evidence.
- virtual_evidence: list (default:None)
A list of pgmpy.factors.discrete.TabularCPD objects representing the virtual evidence.
- elimination_order: list
Order in which the variables are to be eliminated. If not provided, the order is computed automatically.
- show_progress: boolean
If True, shows a progress bar.
Examples
>>> from pgmpy.inference import VariableElimination
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> import numpy as np
>>> import pandas as pd
>>> values = pd.DataFrame(
...     np.random.randint(low=0, high=2, size=(1000, 5)),
...     columns=["A", "B", "C", "D", "E"],
... )
>>> model = DiscreteBayesianNetwork(
...     [("A", "B"), ("C", "B"), ("C", "D"), ("B", "E")]
... )
>>> model.fit(values)
<pgmpy.models...DiscreteBayesianNetwork object at 0x...>
>>> inference = VariableElimination(model)
>>> phi_query = inference.map_query(["A", "B"])
- max_marginal(variables=None, evidence=None, elimination_order='MinFill', show_progress=True)[source]#
Computes the max-marginal over the variables given the evidence.
- Parameters:
- variables: list
List of variables over which to compute the max-marginal.
- evidence: dict
A dict of the form {variable: observed_state}; None if there is no evidence.
- elimination_order: list
Order in which the variables are to be eliminated. If not provided, the order is computed automatically.
Examples
>>> import numpy as np
>>> import pandas as pd
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> from pgmpy.inference import VariableElimination
>>> values = pd.DataFrame(
...     np.random.randint(low=0, high=2, size=(1000, 5)),
...     columns=["A", "B", "C", "D", "E"],
... )
>>> model = DiscreteBayesianNetwork(
...     [("A", "B"), ("C", "B"), ("C", "D"), ("B", "E")]
... )
>>> model.fit(values)
<pgmpy.models...DiscreteBayesianNetwork object at 0x...>
>>> inference = VariableElimination(model)
>>> phi_query = inference.max_marginal(["A", "B"])
- query(variables: list[Hashable], evidence: dict[Hashable, int] | None = None, virtual_evidence: list | None = None, elimination_order='greedy', joint=True, show_progress=True)[source]#
Computes the probability distribution of the query variables, given the evidence.
- Parameters:
- variables: list
List of variables for which to compute the probability distribution.
- evidence: dict
A dict of the form {variable: observed_state}; None if there is no evidence.
- virtual_evidence: list (default:None)
A list of pgmpy.factors.discrete.TabularCPD objects representing the virtual evidence.
- elimination_order: str or list (default='greedy')
Order in which to eliminate the variables in the algorithm. If a list is provided, it should contain all variables in the model except the ones in variables. str options are: greedy, WeightedMinFill, MinNeighbors, MinWeight, MinFill. See https://pgmpy.org/exact_infer/ve.html#module-pgmpy.inference.EliminationOrder for details.
- joint: boolean (default: True)
If True, returns a Joint Distribution over variables. If False, returns a dict of distributions over each of the variables.
- show_progress: boolean
If True, shows a progress bar.
Examples
>>> from pgmpy.inference import VariableElimination
>>> from pgmpy.models import DiscreteBayesianNetwork
>>> import numpy as np
>>> import pandas as pd
>>> values = pd.DataFrame(
...     np.random.randint(low=0, high=2, size=(1000, 5)),
...     columns=["A", "B", "C", "D", "E"],
... )
>>> model = DiscreteBayesianNetwork(
...     [("A", "B"), ("C", "B"), ("C", "D"), ("B", "E")]
... )
>>> model.fit(values)
<pgmpy.models...DiscreteBayesianNetwork object at 0x...>
>>> inference = VariableElimination(model)
>>> phi_query = inference.query(["A", "B"])