<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Junction_tree_algorithm</id>
	<title>Junction tree algorithm - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Junction_tree_algorithm"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Junction_tree_algorithm&amp;action=history"/>
	<updated>2026-05-01T00:40:43Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Junction_tree_algorithm&amp;diff=2767306&amp;oldid=prev</id>
		<title>imported&gt;Citation bot: Add: isbn, pages. | Use this bot. Report bugs. | Suggested by Dominic3203 | Category:Graph algorithms | #UCB_Category 17/132</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Junction_tree_algorithm&amp;diff=2767306&amp;oldid=prev"/>
		<updated>2024-10-25T14:22:38Z</updated>

		<summary type="html">&lt;p&gt;Add: isbn, pages. | &lt;a href=&quot;/wiki143/index.php?title=En:WP:UCB&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:UCB (page does not exist)&quot;&gt;Use this bot&lt;/a&gt;. &lt;a href=&quot;/wiki143/index.php?title=En:WP:DBUG&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:DBUG (page does not exist)&quot;&gt;Report bugs&lt;/a&gt;. | Suggested by Dominic3203 | &lt;a href=&quot;/wiki143/index.php?title=Category:Graph_algorithms&quot; title=&quot;Category:Graph algorithms&quot;&gt;Category:Graph algorithms&lt;/a&gt; | #UCB_Category 17/132&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;[[File:Junction-tree-example.gif|300px|thumb|Example of a junction tree]]&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;junction tree algorithm&amp;#039;&amp;#039;&amp;#039; (also known as the &amp;#039;clique tree&amp;#039; algorithm)  is a method used in [[machine learning]] to compute [[marginal distribution]]s in general [[Graph (discrete mathematics)|graph]]s.  In essence, it entails performing [[belief propagation]] on a modified graph called a [[junction tree]]. The modified graph is a tree whose [[Vertex (graph theory)|nodes]] are clusters of variables from the original graph.&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;{{Cite web|url=https://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf|title=A Short Course on Graphical Models|last=Paskin|first=Mark|website=Stanford}}&amp;lt;/ref&amp;gt; The basic premise is to eliminate [[cycle (graph theory)|cycle]]s by clustering their variables into single nodes; once the tree is built, many marginal queries can be answered together from the same run of message passing.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; There are different [[algorithm]]s for different needs, depending on what must be calculated. [[Inference network|Inference algorithms]] update the computed quantities as new information about the data arrives.&amp;lt;ref&amp;gt;{{Cite web|url=http://www.dfki.de/~neumann/publications/diss/node58.html|title=The Inference Algorithm|website=www.dfki.de|access-date=2018-10-25}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Junction tree algorithm==&lt;br /&gt;
&lt;br /&gt;
===Hugin algorithm===&lt;br /&gt;
&lt;br /&gt;
* If the graph is directed then [[Moral graph|moralize]] it to make it undirected.&lt;br /&gt;
* Introduce the evidence.&lt;br /&gt;
* [[Chordal graph|Triangulate]] the graph to make it chordal.&lt;br /&gt;
* Construct a [[junction tree]] from the triangulated graph (we will call the vertices of the junction tree &amp;quot;[[Supernode (circuit)|supernodes]]&amp;quot;).&lt;br /&gt;
* Propagate the probabilities along the junction tree (via [[belief propagation]]).&lt;br /&gt;
&lt;br /&gt;
Note that this last step is inefficient for graphs of large [[treewidth]]. Computing the messages to pass between supernodes involves doing exact marginalization over the variables in both supernodes. Running this algorithm on a graph of treewidth k therefore involves at least one computation that takes time exponential in k. It is a [[belief propagation|message passing]] algorithm.&amp;lt;ref name=&amp;quot;:2&amp;quot;&amp;gt;{{Cite web|url=http://www.gatsby.ucl.ac.uk/teaching/courses/ml1-2007/lect5-handout.pdf|title=Recap on Graphical Models}}&amp;lt;/ref&amp;gt; The Hugin algorithm requires fewer [[computation]]s to find a solution than Shafer-Shenoy.&lt;br /&gt;
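The message computation in the propagation step can be sketched in Python. The toy potentials below are illustrative, not taken from any particular model; the sketch shows a single Hugin-style update from one supernode to a neighbour through their separator.&lt;br /&gt;

```python
from itertools import product

# Toy binary potentials for clique {A,B}, clique {B,C} and separator {B}.
# The numbers are illustrative, not from any specific model.
psi_ab = {(0, 0): 0.3, (0, 1): 0.7, (1, 0): 0.6, (1, 1): 0.4}
psi_bc = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 0.8}
phi_b = {0: 1.0, 1: 1.0}  # separator potential, initialised to ones

def pass_message():
    """One Hugin update from clique {A,B} to clique {B,C} via separator {B}."""
    # Exact marginalization: sum A out of psi_ab to get the new separator.
    new_phi = {b: sum(psi_ab[(a, b)] for a in (0, 1)) for b in (0, 1)}
    # Rescale the receiving clique by the ratio of new to old separator.
    for b, c in product((0, 1), repeat=2):
        psi_bc[(b, c)] *= new_phi[b] / phi_b[b]
    phi_b.update(new_phi)

pass_message()
```

The marginalization over the sending clique is what costs time exponential in the clique size, and hence in the treewidth.&lt;br /&gt;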
&lt;br /&gt;
===Shafer-Shenoy algorithm===&lt;br /&gt;
&lt;br /&gt;
* Messages are computed recursively&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt;&lt;br /&gt;
* Multiple recursions of the Shafer-Shenoy algorithm result in the Hugin algorithm&amp;lt;ref name=&amp;quot;:3&amp;quot;&amp;gt;{{Cite web|url=https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-438-algorithms-for-inference-fall-2014/lecture-notes/MIT6_438F14_Lec14.pdf|title=Algorithms|date=2014|website=Massachusetts Institute of Technology}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Messages are found by the [[Message passing in computer clusters|message passing]] equation&amp;lt;ref name=&amp;quot;:3&amp;quot; /&amp;gt;&lt;br /&gt;
* [[Vertex separator|Separator]] potentials are not stored&amp;lt;ref&amp;gt;{{Cite web|url=https://cs.nyu.edu/~roweis/csc412-2004/notes/lec20x.pdf|title=Hugin Inference Algorithm|last=Roweis|first=Sam|date=2004|website=NYU}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Shafer-Shenoy [[algorithm]] is the [[Sum-product algorithm|sum-product algorithm]] applied to a junction tree.&amp;lt;ref&amp;gt;{{Cite web|url=https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-438-algorithms-for-inference-fall-2014/recitations/MIT6_438F14_rec8.pdf|title=Algorithms for Inference|date=2014|website=Massachusetts Institute of Technology}}&amp;lt;/ref&amp;gt; Because separator potentials are not stored, it can answer queries with less memory than the Hugin algorithm. The algorithm makes it possible to compute conditionals for [[belief functions]].&amp;lt;ref name=&amp;quot;:02&amp;quot;&amp;gt;{{cite arXiv|last=Kłopotek|first=Mieczysław A.|date=2018-06-06|title=Dempsterian-Shaferian Belief Network From Data|eprint=1806.02373|class=cs.AI}}&amp;lt;/ref&amp;gt; [[Joint distributions]] are needed for the local computations.&amp;lt;ref name=&amp;quot;:02&amp;quot; /&amp;gt;&lt;br /&gt;
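In standard notation, writing \(\psi_i\) for the potential of clique \(C_i\) and \(S_{ij}\) for the separator shared by neighbouring cliques \(i\) and \(j\), the message-passing equation referred to above is the sum-product update:&lt;br /&gt;

```latex
\mu_{i \to j}(S_{ij}) \;=\; \sum_{C_i \setminus S_{ij}} \psi_i(C_i) \prod_{k \in \mathrm{ne}(i) \setminus \{j\}} \mu_{k \to i}(S_{ik})
```

Once all messages have been exchanged, the clique marginal is recovered as \( p(C_i) \propto \psi_i(C_i) \prod_{k \in \mathrm{ne}(i)} \mu_{k \to i}(S_{ik}) \).&lt;br /&gt;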
&lt;br /&gt;
===Underlying theory===&lt;br /&gt;
[[File:Hmm temporal bayesian net.svg|thumb|Example of a Dynamic Bayesian network]]&lt;br /&gt;
The first step concerns only [[Bayesian networks]]: it is a procedure to turn a directed graph into an [[undirected]] one ([[Moral graph|moralization]]). We do this so that the remaining steps of the algorithm apply universally, regardless of edge direction.&lt;br /&gt;
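Moralization can be sketched as follows; the DAG encoding (a parents dictionary) and the function name are illustrative, not a standard API.&lt;br /&gt;

```python
from itertools import combinations

def moralize(parents):
    """parents: dict mapping each node to the set of its parents.
    Returns the undirected edge set of the moral graph."""
    edges = set()
    for child, ps in parents.items():
        for p in ps:                      # keep every original edge
            edges.add(frozenset((p, child)))
        for u, v in combinations(sorted(ps), 2):
            edges.add(frozenset((u, v)))  # "marry" co-parents of child
    return edges

# A -> C and B -> C: moralisation adds the undirected edge A-B.
moral = moralize({"A": set(), "B": set(), "C": {"A", "B"}})
```

Connecting ("marrying") the parents of each node before dropping directions preserves each conditional's family as a clique in the undirected graph.&lt;br /&gt;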
&lt;br /&gt;
The second step is setting variables to their observed value. This is usually needed when we want to calculate conditional probabilities, so we fix the value of the [[random variable]]s we condition on. Those variables are also said to be clamped to their particular value.&lt;br /&gt;
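Clamping can be sketched by zeroing every potential entry that disagrees with the observed value; the table encoding here is illustrative.&lt;br /&gt;

```python
# Clamp a variable in a potential table to its observed value.
def clamp(potential, var_index, observed):
    """potential: dict from assignment tuples to weights.
    Entries inconsistent with the evidence get weight zero."""
    return {assign: (w if assign[var_index] == observed else 0.0)
            for assign, w in potential.items()}

psi = {(0, 0): 0.3, (0, 1): 0.7, (1, 0): 0.6, (1, 1): 0.4}
clamped = clamp(psi, 0, 1)   # observe the first variable equal to 1
```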
[[File:Chordal-graph.svg|thumb|Example of a chordal graph]]&lt;br /&gt;
The third step is to ensure that graphs are made [[Chordal graph|chordal]] if they aren&amp;#039;t already chordal. This is the first essential step of the algorithm. It makes use of the following theorem:&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;{{cite web |url=https://people.eecs.berkeley.edu/~wainwrig/Talks/Wainwright_PartI.pdf |title= Graphical models, message-passing algorithms, and variational methods: Part I|last= Wainwright|first=Martin |date= 31 March 2008|website=Berkeley EECS |access-date=16 November 2016}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Theorem:&amp;#039;&amp;#039;&amp;#039; For an [[Graph (discrete mathematics)|undirected]] graph, G, the following properties are equivalent:&lt;br /&gt;
&lt;br /&gt;
* Graph G is triangulated.&lt;br /&gt;
* The clique graph of G has a junction tree.&lt;br /&gt;
* There is an elimination ordering for G that does not lead to any added edges.&lt;br /&gt;
&lt;br /&gt;
Thus, by [[Triangulation (geometry)|triangulating]] a graph, we make sure that the corresponding junction tree exists. A usual way to do this is to decide an elimination order for its nodes and then run the [[Variable elimination]] algorithm; note that [[variable elimination]] must be run anew for each different query.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; This results in adding more edges to the initial graph, in such a way that the output is a [[chordal graph]].&lt;br /&gt;
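Elimination-based triangulation can be sketched as follows; the adjacency encoding and function name are illustrative. Eliminating each node in order connects its remaining neighbours, and those added fill-in edges make the output chordal.&lt;br /&gt;

```python
def triangulate(adj, order):
    """adj: dict node -> set of neighbours (undirected).
    Returns the chordalised adjacency; the input is left untouched."""
    g = {v: set(ns) for v, ns in adj.items()}    # working copy
    out = {v: set(ns) for v, ns in adj.items()}  # result
    for v in order:
        nbrs = list(g[v])
        for i, u in enumerate(nbrs):       # connect remaining neighbours
            for w in nbrs[i + 1:]:
                g[u].add(w); g[w].add(u)
                out[u].add(w); out[w].add(u)
        for u in nbrs:                     # eliminate v from the working copy
            g[u].discard(v)
        del g[v]
    return out

# 4-cycle A-B-C-D: eliminating A first adds the chord B-D.
cycle = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"A", "C"}}
chordal = triangulate(cycle, ["A", "B", "C", "D"])
```

Different elimination orders can add different numbers of fill-in edges; finding an order that minimises them is itself a hard problem, which is why heuristics are used in practice.&lt;br /&gt;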
All chordal graphs have a junction tree.&amp;lt;ref name=&amp;quot;:3&amp;quot; /&amp;gt; The next step is to construct the [[junction tree]]. To do so, we use the graph from the previous step, and form its corresponding [[clique graph]].&amp;lt;ref&amp;gt;{{cite web |url=http://mathworld.wolfram.com/CliqueGraph.html |title=Clique Graph|access-date=16 November 2016}}&amp;lt;/ref&amp;gt; Now the next theorem gives us a way to find a junction tree:&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Theorem:&amp;#039;&amp;#039;&amp;#039; Given a triangulated graph, weight each edge of the clique graph by the cardinality |A∩B| of the intersection of the adjacent cliques A and B. Then any maximum-weight spanning tree of the clique graph is a junction tree.&lt;br /&gt;
&lt;br /&gt;
So, to construct a junction tree we just have to extract a maximum weight spanning tree out of the clique graph. This can be efficiently done by, for example, modifying [[Kruskal&amp;#039;s algorithm]].&lt;br /&gt;
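The spanning-tree construction can be sketched with a Kruskal-style greedy pass over the clique graph, processing candidate edges from largest intersection to smallest; the clique list and helper names are illustrative.&lt;br /&gt;

```python
def junction_tree(cliques):
    """cliques: list of variable sets. Returns tree edges as index pairs,
    chosen greedily by descending separator size (Kruskal on negated weights)."""
    parent = list(range(len(cliques)))   # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    candidates = []
    for i in range(len(cliques)):
        for j in range(i + 1, len(cliques)):
            w = len(cliques[i].intersection(cliques[j]))
            if w > 0:                     # only overlapping cliques
                candidates.append((w, i, j))

    tree = []
    for w, i, j in sorted(candidates, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:                      # adding the edge keeps it a tree
            parent[ri] = rj
            tree.append((i, j))
    return tree

cliques = [{"A", "B"}, {"B", "C"}, {"C", "D"}]
edges = junction_tree(cliques)
```

By the theorem above, maximising the total separator weight is what guarantees the running-intersection property of the resulting tree.&lt;br /&gt;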
The last step is to apply [[belief propagation]] to the obtained junction tree.&amp;lt;ref&amp;gt;{{cite web |url=https://www.cs.helsinki.fi/u/bmmalone/probabilistic-models-spring-2014/JunctionTreeBarber.pdf |title= Probabilistic Modelling and Reasoning, The Junction Tree Algorithm |last= Barber|first=David |date= 28 January 2014|website= University of Helsinki |access-date=16 November 2016}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Usage:&amp;#039;&amp;#039;&amp;#039; A junction tree is used to visualize the probabilities of the problem. The tree can be converted into a binary tree to simplify the actual construction of the tree.&amp;lt;ref&amp;gt;{{cite conference&lt;br /&gt;
 | last1 = Ramirez | first1 = Julio C.&lt;br /&gt;
 | last2 = Munoz | first2 = Guillermina&lt;br /&gt;
 | last3 = Gutierrez | first3 = Ludivina&lt;br /&gt;
 | contribution = Fault Diagnosis in an Industrial Process Using Bayesian Networks: Application of the Junction Tree Algorithm&lt;br /&gt;
 | date = September 2009&lt;br /&gt;
 | doi = 10.1109/cerma.2009.28&lt;br /&gt;
 | publisher = IEEE&lt;br /&gt;
 | title = 2009 Electronics, Robotics and Automotive Mechanics Conference (CERMA)| pages = 301–306&lt;br /&gt;
 | isbn = 978-0-7695-3799-3&lt;br /&gt;
 }}&amp;lt;/ref&amp;gt; A specific use can be found in junction tree variational [[Autoencoder|autoencoders]], which combine the junction tree with a message-passing network to generate molecular graphs automatically.&amp;lt;ref&amp;gt;{{Cite journal|last=Jin|first=Wengong|date=Feb 2018|title=Junction Tree Variational Autoencoder for Molecular Graph Generation|journal=Cornell University|arxiv=1802.04364|bibcode=2018arXiv180204364J}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Inference Algorithms ===&lt;br /&gt;
[[File:Cutset-4.svg|thumb|Cutset conditioning]]&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Loopy belief propagation:&amp;#039;&amp;#039;&amp;#039; A different method for handling complex graphs. [[Loopy belief propagation]] is used when an approximate solution suffices in place of the [[Exact solutions|exact solution]].&amp;lt;ref&amp;gt;{{Cite book|title=CERMA 2009 : proceedings : 2009 Electronics, Robotics and Automotive Mechanics Conference : 22-25 September 2009 : Cuernavaca, Morelos, Mexico|date=2009|publisher=IEEE Computer Society|others=Institute of Electrical and Electronics Engineers.|isbn=9780769537993|location=Los Alamitos, Calif.|oclc=613519385}}&amp;lt;/ref&amp;gt; It is a form of [[approximate inference]].&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Cutset conditioning:&amp;#039;&amp;#039;&amp;#039; Used with smaller sets of variables. [[Cutset]] conditioning allows for simpler graphs that are easier to read but are not [[Exact solutions|exact]].&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
==Further reading==&lt;br /&gt;
* {{cite journal&lt;br /&gt;
    | last = Lauritzen&lt;br /&gt;
    | first = Steffen L.&lt;br /&gt;
    |author2=Spiegelhalter, David J. |author-link2=David Spiegelhalter &lt;br /&gt;
    | title = Local Computations with Probabilities on Graphical Structures and their Application to Expert Systems&lt;br /&gt;
    | journal = Journal of the Royal Statistical Society. Series B (Methodological)&lt;br /&gt;
    | volume = 50 |issue=2&lt;br /&gt;
    | pages = 157–224&lt;br /&gt;
    | date = 1988&lt;br /&gt;
    | mr = 0964177&lt;br /&gt;
    | jstor = 2345762 | doi = 10.1111/j.2517-6161.1988.tb01721.x&lt;br /&gt;
 }}&lt;br /&gt;
* {{cite journal&lt;br /&gt;
    | last = Dawid&lt;br /&gt;
    | first = A. P.&lt;br /&gt;
    | author-link = Philip Dawid&lt;br /&gt;
    | title = Applications of a general propagation algorithm for probabilistic expert systems&lt;br /&gt;
    | journal = Statistics and Computing&lt;br /&gt;
    | volume = 2&lt;br /&gt;
    | issue = 1&lt;br /&gt;
    | pages = 25–26&lt;br /&gt;
    | date = 1992&lt;br /&gt;
    | doi = 10.1007/BF01890546| s2cid = 61247712&lt;br /&gt;
 }}&lt;br /&gt;
* {{cite journal&lt;br /&gt;
    | last = Huang&lt;br /&gt;
    | first = Cecil&lt;br /&gt;
    |author2=Darwiche, Adnan&lt;br /&gt;
     | title = Inference in Belief Networks: A Procedural Guide&lt;br /&gt;
    | journal = International Journal of Approximate Reasoning&lt;br /&gt;
    | volume = 15&lt;br /&gt;
    | issue = 3&lt;br /&gt;
    | pages = 225–263&lt;br /&gt;
    | date = 1996&lt;br /&gt;
    | url = http://citeseer.ist.psu.edu/huang94inference.html&lt;br /&gt;
    | doi = 10.1016/S0888-613X(96)00069-2| citeseerx = 10.1.1.47.3279&lt;br /&gt;
 }}&lt;br /&gt;
*Lepar, V., Shenoy, P. (1998). &amp;quot;A Comparison of Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer Architectures for Computing Marginals of Probability Distributions.&amp;quot; https://arxiv.org/ftp/arxiv/papers/1301/1301.7394.pdf&lt;br /&gt;
&lt;br /&gt;
{{DEFAULTSORT:Junction Tree Algorithm}}&lt;br /&gt;
[[Category:Bayesian networks]]&lt;br /&gt;
[[Category:Graph algorithms]]&lt;/div&gt;</summary>
		<author><name>imported&gt;Citation bot</name></author>
	</entry>
</feed>