<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Approximation_algorithm</id>
	<title>Approximation algorithm - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Approximation_algorithm"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Approximation_algorithm&amp;action=history"/>
	<updated>2026-04-22T16:43:32Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Approximation_algorithm&amp;diff=5037291&amp;oldid=prev</id>
		<title>imported&gt;Cmuravi: Added TSP approximation as example of multiplicative approximation</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Approximation_algorithm&amp;diff=5037291&amp;oldid=prev"/>
		<updated>2025-12-18T09:26:09Z</updated>

		<summary type="html">&lt;p&gt;Added TSP approximation as example of multiplicative approximation&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Previous revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 09:26, 18 December 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{Short description|Class of algorithms that find approximate solutions to optimization problems}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{Short description|Class of algorithms that find approximate solutions to optimization problems}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In [[computer science]] and [[operations research]], &#039;&#039;&#039;approximation algorithms&#039;&#039;&#039; are [[Time complexity#Polynomial time|efficient]] [[algorithm]]s that find [[approximation|approximate]] solutions to [[optimization problem]]s (in particular [[NP-hardness|NP-hard]] problems) with provable guarantees on the distance of the returned solution to the optimal one.&amp;lt;ref name=&quot;Bernard. 2011&quot;&amp;gt;{{Cite book|title=The design of approximation algorithms|last=Bernard.|first=Shmoys, David|date=2011|publisher=Cambridge University Press|isbn=9780521195270|oclc=671709856|url=http://www.designofapproxalgs.com/}}&amp;lt;/ref&amp;gt; Approximation algorithms naturally arise in the field of [[theoretical computer science]] as a consequence of the widely believed [[P versus NP problem|P ≠ NP]] conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in [[Time complexity|polynomial time]]. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. 
A notable example of an approximation algorithm that provides &#039;&#039;both&#039;&#039; is the classic approximation algorithm of [[Jan Karel Lenstra|Lenstra]], [[David Shmoys|Shmoys]] and [[Éva Tardos|Tardos]]&amp;lt;ref&amp;gt;{{Cite journal|last1=Lenstra|first1=Jan Karel|last2=Shmoys|first2=David B.|last3=Tardos|first3=Éva|date=1990-01-01|title=Approximation algorithms for scheduling unrelated parallel machines|journal=Mathematical Programming|language=en|volume=46|issue=1–3|pages=259–271|doi=10.1007/BF01585745|issn=0025-5610|citeseerx=10.1.1.115.708|s2cid=206799898}}&amp;lt;/ref&amp;gt; for [[Scheduling (computing)|scheduling]] on unrelated parallel machines.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In [[computer science]] and [[operations research]], &#039;&#039;&#039;approximation algorithms&#039;&#039;&#039; are [[Time complexity#Polynomial time|efficient]] [[algorithm]]s that find [[approximation|approximate]] solutions to [[optimization problem]]s (in particular [[NP-hardness|NP-hard]] problems) with provable guarantees on the distance of the returned solution to the optimal one.&amp;lt;ref name=&quot;Bernard. 2011&quot;&amp;gt;{{Cite book|title=The design of approximation algorithms|last=Bernard.|first=Shmoys, David|date=2011|publisher=Cambridge University Press|isbn=9780521195270|oclc=671709856|url=http://www.designofapproxalgs.com/}}&amp;lt;/ref&amp;gt; Approximation algorithms naturally arise in the field of [[theoretical computer science]] as a consequence of the widely believed [[P versus NP problem|P ≠ NP]] conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in [[Time complexity|polynomial time]]. 
The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;. An example of an approximation algorithm with a multiplicative guarantee is the [[Christofides-Serdyukov algorithm]] for the [[Travelling salesman problem]]. It provides a travelling salesman tour in a metric space of length at most 3/2 times that of a shortest such tour. A classic example of an approximation algorithm providing an additive guarantee is the constructive proof of [[Vizing&#039;s theorem|Vizing’s theorem]]. It shows how to color the edges of an undirected graph with at most &amp;lt;math&amp;gt; \Delta + 1&amp;lt;/math&amp;gt; colors, where &amp;lt;math&amp;gt; \Delta &amp;lt;/math&amp;gt; is the maximum degree of any node. Since every edge incident on a maximum-degree node must have a different color, the constructive proof of the theorem gives a polynomial-time algorithm that uses at most one more color than the minimum needed&lt;/ins&gt;. 
A notable example of an approximation algorithm that provides &#039;&#039;both&#039;&#039; is the classic approximation algorithm of [[Jan Karel Lenstra|Lenstra]], [[David Shmoys|Shmoys]] and [[Éva Tardos|Tardos]]&amp;lt;ref&amp;gt;{{Cite journal|last1=Lenstra|first1=Jan Karel|last2=Shmoys|first2=David B.|last3=Tardos|first3=Éva|date=1990-01-01|title=Approximation algorithms for scheduling unrelated parallel machines|journal=Mathematical Programming|language=en|volume=46|issue=1–3|pages=259–271|doi=10.1007/BF01585745|issn=0025-5610|citeseerx=10.1.1.115.708|s2cid=206799898}}&amp;lt;/ref&amp;gt; for [[Scheduling (computing)|scheduling]] on unrelated parallel machines.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The design and [[mathematical analysis|analysis]] of approximation algorithms crucially involves a [[mathematical proof]] certifying the quality of the returned solutions in the worst case.&amp;lt;ref name=&amp;quot;Bernard. 2011&amp;quot;/&amp;gt; This distinguishes them from [[heuristic (computer science)|heuristics]] such as [[Simulated annealing|annealing]] or [[genetic algorithm]]s, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The design and [[mathematical analysis|analysis]] of approximation algorithms crucially involves a [[mathematical proof]] certifying the quality of the returned solutions in the worst case.&amp;lt;ref name=&amp;quot;Bernard. 2011&amp;quot;/&amp;gt; This distinguishes them from [[heuristic (computer science)|heuristics]] such as [[Simulated annealing|annealing]] or [[genetic algorithm]]s, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l16&quot;&gt;Line 16:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 16:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# [[Local search (optimization)|Local search]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# [[Local search (optimization)|Local search]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# Enumeration and [[dynamic programming]] (which is also often used for [[Parameterized approximation algorithm|parameterized approximations]])&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# Enumeration and [[dynamic programming]] (which is also often used for [[Parameterized approximation algorithm|parameterized approximations]])&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Solving &lt;/del&gt;a [[convex programming]] &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;relaxation &lt;/del&gt;to &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;get &lt;/del&gt;a &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;fractional solution&lt;/del&gt;. Then &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;converting this &lt;/del&gt;fractional solution into &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;a feasible &lt;/del&gt;solution &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;by some appropriate &lt;/del&gt;rounding&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;. The popular relaxations &lt;/del&gt;include &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;the following.&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Mathematical programming techniques. 
This involves modeling the considered problem with an appropriate mathematical programming formulation (typically &lt;/ins&gt;a [[convex programming]]&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;) such as [[Linear programming]], [[Semidefinite programming]], etc., &lt;/ins&gt;to &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;obtain &lt;/ins&gt;a &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;relaxation of solutions for the problem&lt;/ins&gt;. Then &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;further algorithmic techniques for these formulations are applied. &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;#* &lt;/del&gt;[[&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Linear programming&lt;/del&gt;]] &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;relaxations&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;#* Rounding-based methods. This involves solving the considered formulation for a good &lt;/ins&gt;fractional solution &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;and then converting it &lt;/ins&gt;into &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;an integer &lt;/ins&gt;solution&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;. 
Typical &lt;/ins&gt;rounding &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;techniques &lt;/ins&gt;include &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;simple threshold rounding, &lt;/ins&gt;[[&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Randomized rounding&lt;/ins&gt;]]&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;, &lt;/ins&gt;[[&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Iterative rounding&lt;/ins&gt;]]&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;, etc.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;#* &lt;/del&gt;[[&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Semidefinite programming&lt;/del&gt;]] &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;relaxations&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;#&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* Dual-fitting methods. This involves interpreting a combinatorial algorithm (typically a greedy one) as the process of computing a feasible solution for the dual program of the considered formulation.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# Primal-dual methods&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;#* &lt;/ins&gt;Primal-dual methods&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;# Dual fitting&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# Embedding the problem in some metric and then solving the problem on the metric. This is also known as metric embedding.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# Embedding the problem in some metric and then solving the problem on the metric. This is also known as metric embedding.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# Random sampling and the use of randomness in general in conjunction with the methods above.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# Random sampling and the use of randomness in general in conjunction with the methods above.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l36&quot;&gt;Line 36:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 35:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Not all approximation algorithms are suitable for direct practical applications. Some involve solving non-trivial [[Linear programming relaxation|linear programming]]/[[Semidefinite programming|semidefinite]] relaxations (which may themselves invoke the [[Ellipsoid method|ellipsoid algorithm]]), complex data structures, or sophisticated algorithmic techniques, leading to difficult implementation issues or improved running time performance (over exact algorithms) only on impractically large inputs. Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. Despite their inability to be used &amp;quot;out of the box&amp;quot; in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Not all approximation algorithms are suitable for direct practical applications. 
Some involve solving non-trivial [[Linear programming relaxation|linear programming]]/[[Semidefinite programming|semidefinite]] relaxations (which may themselves invoke the [[Ellipsoid method|ellipsoid algorithm]]), complex data structures, or sophisticated algorithmic techniques, leading to difficult implementation issues or improved running time performance (over exact algorithms) only on impractically large inputs. Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. Despite their inability to be used &amp;quot;out of the box&amp;quot; in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. One such example is the initial PTAS for [[Euclidean traveling salesman problem|Euclidean TSP]] by [[Sanjeev Arora]] (and independently by [[Joseph S. B. Mitchell|Joseph Mitchell]]) which had a prohibitive running time of &amp;lt;math&amp;gt;n^{O(1/\epsilon)}&amp;lt;/math&amp;gt; for a &amp;lt;math&amp;gt;1+\epsilon&amp;lt;/math&amp;gt; approximation.&amp;lt;ref&amp;gt;{{Cite book|last=Arora|first=S.|title=Proceedings of 37th Conference on Foundations of Computer Science |chapter=Polynomial time approximation schemes for Euclidean TSP and other geometric problems |date=October 1996|pages=2–11|doi=10.1109/SFCS.1996.548458|isbn=978-0-8186-7594-2|citeseerx=10.1.1.32.3376|s2cid=1499391}}&amp;lt;/ref&amp;gt; Yet, within a year these ideas were incorporated into a near-linear time &amp;lt;math&amp;gt;O(n\log n)&amp;lt;/math&amp;gt; algorithm for any constant &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;.&amp;lt;ref&amp;gt;{{Cite book|last=Arora|first=S.|title=Proceedings 38th Annual Symposium on Foundations of Computer Science |date=October 1997|chapter=Nearly linear time approximation schemes for Euclidean TSP and other geometric problems|pages=554–563|doi=10.1109/SFCS.1997.646145|isbn=978-0-8186-8197-4|s2cid=10656723}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; 
vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. One such example is the initial PTAS for [[Euclidean traveling salesman problem|Euclidean TSP]] by [[&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Sanjeev Arora (computer scientist)|&lt;/ins&gt;Sanjeev Arora]] (and independently by [[Joseph S. B. Mitchell|Joseph Mitchell]]) which had a prohibitive running time of &amp;lt;math&amp;gt;n^{O(1/\epsilon)}&amp;lt;/math&amp;gt; for a &amp;lt;math&amp;gt;1+\epsilon&amp;lt;/math&amp;gt; approximation.&amp;lt;ref&amp;gt;{{Cite book|last=Arora|first=S.|title=Proceedings of 37th Conference on Foundations of Computer Science |chapter=Polynomial time approximation schemes for Euclidean TSP and other geometric problems |date=October 1996|pages=2–11|doi=10.1109/SFCS.1996.548458|isbn=978-0-8186-7594-2|citeseerx=10.1.1.32.3376|s2cid=1499391}}&amp;lt;/ref&amp;gt; Yet, within a year these ideas were incorporated into a near-linear time &amp;lt;math&amp;gt;O(n\log n)&amp;lt;/math&amp;gt; algorithm for any constant &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;.&amp;lt;ref&amp;gt;{{Cite book|last=Arora|first=S.|title=Proceedings 38th Annual Symposium on Foundations of Computer Science |date=October 1997|chapter=Nearly linear time approximation schemes for Euclidean TSP and other geometric problems|pages=554–563|doi=10.1109/SFCS.1997.646145|isbn=978-0-8186-8197-4|s2cid=10656723}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Structure of approximation algorithms ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Structure of approximation algorithms ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Cmuravi</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Approximation_algorithm&amp;diff=343336&amp;oldid=prev</id>
		<title>2001:638:708:306:290E:2B4C:E37E:90EA: /* Performance guarantees */ fixed a typo</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Approximation_algorithm&amp;diff=343336&amp;oldid=prev"/>
		<updated>2025-04-25T12:31:44Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Performance guarantees: &lt;/span&gt; fixed a typo&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Class of algorithms that find approximate solutions to optimization problems}}&lt;br /&gt;
In [[computer science]] and [[operations research]], &amp;#039;&amp;#039;&amp;#039;approximation algorithms&amp;#039;&amp;#039;&amp;#039; are [[Time complexity#Polynomial time|efficient]] [[algorithm]]s that find [[approximation|approximate]] solutions to [[optimization problem]]s (in particular [[NP-hardness|NP-hard]] problems) with provable guarantees on the distance of the returned solution to the optimal one.&amp;lt;ref name=&amp;quot;Bernard. 2011&amp;quot;&amp;gt;{{Cite book|title=The design of approximation algorithms|last=Bernard.|first=Shmoys, David|date=2011|publisher=Cambridge University Press|isbn=9780521195270|oclc=671709856|url=http://www.designofapproxalgs.com/}}&amp;lt;/ref&amp;gt; Approximation algorithms naturally arise in the field of [[theoretical computer science]] as a consequence of the widely believed [[P versus NP problem|P ≠ NP]] conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in [[Time complexity|polynomial time]]. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. 
A notable example of an approximation algorithm that provides &amp;#039;&amp;#039;both&amp;#039;&amp;#039; is the classic approximation algorithm of [[Jan Karel Lenstra|Lenstra]], [[David Shmoys|Shmoys]] and [[Éva Tardos|Tardos]]&amp;lt;ref&amp;gt;{{Cite journal|last1=Lenstra|first1=Jan Karel|last2=Shmoys|first2=David B.|last3=Tardos|first3=Éva|date=1990-01-01|title=Approximation algorithms for scheduling unrelated parallel machines|journal=Mathematical Programming|language=en|volume=46|issue=1–3|pages=259–271|doi=10.1007/BF01585745|issn=0025-5610|citeseerx=10.1.1.115.708|s2cid=206799898}}&amp;lt;/ref&amp;gt; for [[Scheduling (computing)|scheduling]] on unrelated parallel machines.&lt;br /&gt;
&lt;br /&gt;
The design and [[mathematical analysis|analysis]] of approximation algorithms crucially involves a [[mathematical proof]] certifying the quality of the returned solutions in the worst case.&amp;lt;ref name=&amp;quot;Bernard. 2011&amp;quot;/&amp;gt; This distinguishes them from [[heuristic (computer science)|heuristics]] such as [[Simulated annealing|annealing]] or [[genetic algorithm]]s, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail.&lt;br /&gt;
&lt;br /&gt;
There is widespread interest in [[theoretical computer science]] to better understand the limits to which we can approximate certain famous optimization problems. For example, one of the long-standing open questions in computer science is to determine whether there is an algorithm that outperforms the 2-approximation for the Steiner Forest problem by Agrawal et al.&amp;lt;ref&amp;gt;{{Cite book |last1=Agrawal |first1=Ajit |last2=Klein |first2=Philip |last3=Ravi |first3=R. |title=Proceedings of the twenty-third annual ACM symposium on Theory of computing - STOC &amp;#039;91 |chapter=When trees collide |date=1991 |chapter-url=http://portal.acm.org/citation.cfm?doid=103418.103437 |language=en |location=New Orleans, Louisiana, United States |publisher=ACM Press |pages=134–144 |doi=10.1145/103418.103437 |isbn=978-0-89791-397-3|s2cid=1245448 }}&amp;lt;/ref&amp;gt; The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. One well-known example of the former is the [[Semidefinite programming#Example 3 (Goemans–Williamson max cut approximation algorithm)|Goemans–Williamson algorithm]] for [[maximum cut]], which solves a graph theoretic problem using high dimensional geometry.&amp;lt;ref&amp;gt;{{Cite journal|last1=Goemans|first1=Michel X.|last2=Williamson|first2=David P.|date=November 1995|title=Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming|journal=J. ACM|volume=42|issue=6|pages=1115–1145|doi=10.1145/227683.227684|issn=0004-5411|citeseerx=10.1.1.34.8500|s2cid=15794408}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
A simple example of an approximation algorithm is one for the [[Vertex cover|minimum vertex cover]] problem, where the goal is to choose the smallest set of vertices such that every edge in the input graph contains at least one chosen vertex. One way to find a [[vertex cover]] is to repeat the following process: find an uncovered edge, add both its endpoints to the cover, and remove all edges incident to either vertex from the graph. As any vertex cover of the input graph must use a distinct vertex to cover each edge that was considered in the process (since it forms a [[Matching (graph theory)|matching]]), the vertex cover produced, therefore, is at most twice as large as the optimal one. In other words, this is a [[constant-factor approximation algorithm]] with an approximation factor of 2. Under the recent [[unique games conjecture]], this factor is even the best possible one.&amp;lt;ref&amp;gt;{{Cite journal|last1=Khot|first1=Subhash|last2=Regev|first2=Oded|date=2008-05-01|title=Vertex cover might be hard to approximate to within 2−ε|journal=Journal of Computer and System Sciences|series=Computational Complexity 2003|volume=74|issue=3|pages=335–349|doi=10.1016/j.jcss.2007.06.019|doi-access=free}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
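The matching-based procedure just described can be sketched in a few lines of Python (an illustrative sketch; the function name and edge-list encoding are our own):

```python
def vertex_cover_2approx(edges):
    # Repeatedly pick an uncovered edge and add both endpoints.
    # The picked edges form a matching, and any vertex cover must use
    # a distinct vertex per matched edge, so the result is at most
    # twice the size of an optimal cover.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path a-b-c-d: an optimal cover is {b, c}; the sketch may return all four.
print(sorted(vertex_cover_2approx([("a", "b"), ("b", "c"), ("c", "d")])))
# prints ['a', 'b', 'c', 'd']
```

On this path the sketch returns four vertices against an optimum of two, so the factor-2 bound is met with equality.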
&lt;br /&gt;
NP-hard problems vary greatly in their approximability; some, such as the [[knapsack problem]], can be approximated within a multiplicative factor &amp;lt;math&amp;gt;1 + \epsilon&amp;lt;/math&amp;gt;, for any fixed &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, and therefore produce solutions arbitrarily close to the optimum (such a family of approximation algorithms is called a [[polynomial-time approximation scheme]] or PTAS). Others are impossible to approximate within any constant, or even polynomial, factor unless [[P = NP]], as in the case of the [[Clique problem|maximum clique problem]]. Therefore, an important benefit of studying approximation algorithms is a fine-grained classification of the difficulty of various NP-hard problems beyond the one afforded by the [[NP-completeness|theory of NP-completeness]]. In other words, although NP-complete problems may be equivalent (under polynomial-time reductions) to each other from the perspective of exact solutions, the corresponding optimization problems behave very differently from the perspective of approximate solutions.&lt;br /&gt;
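To make the knapsack example concrete, the standard scaling-plus-dynamic-programming approximation scheme can be sketched as follows (an illustrative sketch: the function name and test instance are our own, and it returns a value at least (1 - eps) times the optimum rather than the optimal item set):

```python
def knapsack_fptas(values, weights, capacity, eps):
    # Scale values down by roughly eps * max(values) / n, then run the
    # exact dynamic program on the scaled instance.  The returned value
    # is at least (1 - eps) times the optimum.
    n = len(values)
    scale = max(1, int(eps * max(values) / n))
    v = [val // scale for val in values]
    inf = float("inf")
    total = sum(v)
    # min_weight[p] = minimum weight of a subset with scaled value p
    min_weight = [0] + [inf] * total
    for i in range(n):
        for p in range(total, v[i] - 1, -1):
            min_weight[p] = min(min_weight[p], min_weight[p - v[i]] + weights[i])
    # best scaled value whose minimum weight fits in the knapsack
    # (min(w, capacity) == w holds exactly when w fits)
    best = max(p for p, w in enumerate(min_weight) if min(w, capacity) == w)
    return best * scale

# Classic three-item instance: the optimum is 220 (values 100 and 120).
print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.5))
# prints 220
```

Smaller eps tightens the guarantee at the cost of a larger dynamic-programming table, which is the PTAS trade-off described above.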
&lt;br /&gt;
==Algorithm design techniques==&lt;br /&gt;
By now there are several established techniques for designing approximation algorithms. These include the following.&lt;br /&gt;
# [[Greedy algorithm]]&lt;br /&gt;
# [[Local search (optimization)|Local search]]&lt;br /&gt;
# Enumeration and [[dynamic programming]] (which is also often used for [[Parameterized approximation algorithm|parameterized approximations]])&lt;br /&gt;
# Solving a [[convex programming]] relaxation to get a fractional solution, then converting this fractional solution into a feasible solution by an appropriate rounding. Popular relaxations include the following.&lt;br /&gt;
#* [[Linear programming]] relaxations&lt;br /&gt;
#* [[Semidefinite programming]] relaxations&lt;br /&gt;
# Primal-dual methods&lt;br /&gt;
# Dual fitting&lt;br /&gt;
# Embedding the problem into some metric space and then solving the problem on the metric; this is also known as metric embedding.&lt;br /&gt;
# Random sampling and the use of randomness in general in conjunction with the methods above.&lt;br /&gt;
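As an illustration of the last technique, assigning every vertex to one of two sides uniformly at random cuts each edge with probability 1/2, so a single random assignment already achieves, in expectation, half the optimum for [[maximum cut]]. A minimal Python sketch (function name and input encoding are our own) repeats the sampling and keeps the best cut found:

```python
import random

def max_cut_random(vertices, edges, trials=100, seed=0):
    # Each vertex joins side 0 or 1 with probability 1/2, so every edge
    # is cut with probability 1/2 and the expected cut value of one
    # trial is half the number of edges, hence at least half of OPT.
    rng = random.Random(seed)
    best = 0
    for _ in range(trials):
        side = {v: rng.choice((0, 1)) for v in vertices}
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best

# Triangle graph: the maximum cut has 2 of the 3 edges.
print(max_cut_random("abc", [("a", "b"), ("b", "c"), ("a", "c")]))
```

On the triangle, the best of 100 random assignments finds the optimal cut of 2 edges; with a fixed seed the run is reproducible.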
&lt;br /&gt;
== A posteriori guarantees ==&lt;br /&gt;
While approximation algorithms always provide an a priori worst case guarantee (be it additive or multiplicative), in some cases they also provide an a posteriori guarantee that is often much better. This is often the case for algorithms that work by solving a [[Convex programming|convex relaxation]] of the optimization problem on the given input. For example, there is a different approximation algorithm for minimum vertex cover that solves a [[linear programming relaxation]] to find a vertex cover that is at most twice the value of the relaxation. Since the value of the relaxation is never larger than the size of the optimal vertex cover, this yields another 2-approximation algorithm. While this is similar to the a priori guarantee of the previous approximation algorithm, the guarantee of the latter can be much better (indeed when the value of the LP relaxation is far from the size of the optimal vertex cover).&lt;br /&gt;
&lt;br /&gt;
== Hardness of approximation ==&lt;br /&gt;
Approximation algorithms as a research area is closely related to and informed by [[Hardness of approximation|inapproximability theory]] where the non-existence of efficient algorithms with certain approximation ratios is proved (conditioned on widely believed hypotheses such as the P ≠ NP conjecture) by means of [[Reduction (complexity)|reductions]]. In the case of the metric traveling salesman problem, the best known inapproximability result rules out algorithms with an approximation ratio less than 123/122 ≈ 1.008196 unless P = NP, due to Karpinski, Lampis, and Schmied.&amp;lt;ref&amp;gt;{{Cite journal|last1=Karpinski|first1=Marek|last2=Lampis|first2=Michael|last3=Schmied|first3=Richard|date=2015-12-01|title=New inapproximability bounds for TSP|journal=Journal of Computer and System Sciences|volume=81|issue=8|pages=1665–1677|doi=10.1016/j.jcss.2015.06.003|arxiv=1303.6437}}&amp;lt;/ref&amp;gt; Coupled with the knowledge of the existence of Christofides&amp;#039; 1.5 approximation algorithm, this tells us that the threshold of approximability for metric traveling salesman (if it exists) is somewhere between 123/122 and 1.5.&lt;br /&gt;
&lt;br /&gt;
While inapproximability results have been proved since the 1970s, such results were obtained by ad hoc means and no systematic understanding was available at the time. It is only since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of [[Independent set (graph theory)|Independent Set]]&amp;lt;ref&amp;gt;{{Cite journal|last1=Feige|first1=Uriel|last2=Goldwasser|first2=Shafi|last3=Lovász|first3=Laszlo|last4=Safra|first4=Shmuel|last5=Szegedy|first5=Mario|date=March 1996|title=Interactive Proofs and the Hardness of Approximating Cliques|journal=J. ACM|volume=43|issue=2|pages=268–292|doi=10.1145/226643.226652|issn=0004-5411|doi-access=free}}&amp;lt;/ref&amp;gt; and the famous [[PCP theorem]],&amp;lt;ref&amp;gt;{{Cite journal|last1=Arora|first1=Sanjeev|last2=Safra|first2=Shmuel|date=January 1998|title=Probabilistic Checking of Proofs: A New Characterization of NP|journal=J. ACM|volume=45|issue=1|pages=70–122|doi=10.1145/273865.273901|s2cid=751563|issn=0004-5411|doi-access=free}}&amp;lt;/ref&amp;gt; that modern tools for proving inapproximability results were uncovered. The PCP theorem, for example, shows that [[David S. Johnson|Johnson&amp;#039;s]] 1974 approximation algorithms for [[Maximum satisfiability problem|Max SAT]], [[Set cover problem|set cover]], [[Independent set (graph theory)|independent set]] and [[Graph coloring|coloring]] all achieve the optimal approximation ratio, assuming P ≠ NP.&amp;lt;ref&amp;gt;{{Cite journal|last=Johnson|first=David S.|date=1974-12-01|title=Approximation algorithms for combinatorial problems|journal=Journal of Computer and System Sciences|volume=9|issue=3|pages=256–278|doi=10.1016/S0022-0000(74)80044-9|doi-access=free}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Practicality ==&lt;br /&gt;
{{seealso|Galactic algorithm}}&lt;br /&gt;
Not all approximation algorithms are suitable for direct practical applications. Some involve solving non-trivial [[Linear programming relaxation|linear programming]]/[[Semidefinite programming|semidefinite]] relaxations (which may themselves invoke the [[Ellipsoid method|ellipsoid algorithm]]), complex data structures, or sophisticated algorithmic techniques, leading to difficult implementation issues or improved running time performance (over exact algorithms) only on impractically large inputs. Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. Despite their inability to be used &amp;quot;out of the box&amp;quot; in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights.&lt;br /&gt;
&lt;br /&gt;
In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. One such example is the initial PTAS for [[Euclidean traveling salesman problem|Euclidean TSP]] by [[Sanjeev Arora]] (and independently by [[Joseph S. B. Mitchell|Joseph Mitchell]]) which had a prohibitive running time of &amp;lt;math&amp;gt;n^{O(1/\epsilon)}&amp;lt;/math&amp;gt; for a &amp;lt;math&amp;gt;1+\epsilon&amp;lt;/math&amp;gt; approximation.&amp;lt;ref&amp;gt;{{Cite book|last=Arora|first=S.|title=Proceedings of 37th Conference on Foundations of Computer Science |chapter=Polynomial time approximation schemes for Euclidean TSP and other geometric problems |date=October 1996|pages=2–11|doi=10.1109/SFCS.1996.548458|isbn=978-0-8186-7594-2|citeseerx=10.1.1.32.3376|s2cid=1499391}}&amp;lt;/ref&amp;gt; Yet, within a year these ideas were incorporated into a near-linear time &amp;lt;math&amp;gt;O(n\log n)&amp;lt;/math&amp;gt; algorithm for any constant &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;.&amp;lt;ref&amp;gt;{{Cite book|last=Arora|first=S.|title=Proceedings 38th Annual Symposium on Foundations of Computer Science |date=October 1997|chapter=Nearly linear time approximation schemes for Euclidean TSP and other geometric problems|pages=554–563|doi=10.1109/SFCS.1997.646145|isbn=978-0-8186-8197-4|s2cid=10656723}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Structure of approximation algorithms ==&lt;br /&gt;
Given an optimization problem:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pi \subseteq I \times S&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\Pi&amp;lt;/math&amp;gt; is an optimization problem, &amp;lt;math&amp;gt;I&amp;lt;/math&amp;gt; the set of inputs and &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; the set of solutions, we can define the cost function:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c: S \rightarrow \mathbb{R}^+&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the set of feasible solutions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\forall i \in I, \quad S(i) = \{ s \in S : (i, s) \in \Pi \}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The goal is to find the best solution &amp;lt;math&amp;gt;s^*&amp;lt;/math&amp;gt; for a minimization (or maximization) problem:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s^* \in S(i), \quad c(s^*) = \min_{s \in S(i)} c(s)&amp;lt;/math&amp;gt; (respectively &amp;lt;math&amp;gt;\max_{s \in S(i)} c(s)&amp;lt;/math&amp;gt; for a maximization problem)&lt;br /&gt;
&lt;br /&gt;
Given a feasible solution &amp;lt;math&amp;gt;s \in S(i)&amp;lt;/math&amp;gt;, with &amp;lt;math&amp;gt;s \neq s^*&amp;lt;/math&amp;gt;, we would like a guarantee on the quality of the solution, i.e., a bound on how far &amp;lt;math&amp;gt;c(s)&amp;lt;/math&amp;gt; can be from &amp;lt;math&amp;gt;c(s^*)&amp;lt;/math&amp;gt; (the approximation factor).&lt;br /&gt;
&lt;br /&gt;
Specifically, if the algorithm &amp;lt;math&amp;gt;A_{\Pi}&amp;lt;/math&amp;gt; returns a feasible solution &amp;lt;math&amp;gt;A_{\Pi}(i) \in S(i)&amp;lt;/math&amp;gt;, it has an &amp;#039;&amp;#039;&amp;#039;approximation factor&amp;#039;&amp;#039;&amp;#039; (or &amp;#039;&amp;#039;&amp;#039;approximation ratio&amp;#039;&amp;#039;&amp;#039;) of &amp;lt;math&amp;gt;\rho(n)&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;\forall i \in I \text{ s.t. } |i| = n&amp;lt;/math&amp;gt;, we have:&lt;br /&gt;
&lt;br /&gt;
* for a &amp;#039;&amp;#039;minimization&amp;#039;&amp;#039; problem: &amp;lt;math&amp;gt;\frac{c(A_{\Pi}(i))}{c(s^*(i))}\leq\rho(n)&amp;lt;/math&amp;gt;, i.e., the cost of the algorithm&amp;#039;s solution is at most &amp;lt;math&amp;gt;\rho(n)&amp;lt;/math&amp;gt; times the optimal cost;&lt;br /&gt;
* for a &amp;#039;&amp;#039;maximization&amp;#039;&amp;#039; problem: &amp;lt;math&amp;gt;\frac{c(s^*(i))}{c(A_{\Pi}(i))}\leq\rho(n)&amp;lt;/math&amp;gt;, i.e., the optimal value is at most &amp;lt;math&amp;gt;\rho(n)&amp;lt;/math&amp;gt; times the value of the algorithm&amp;#039;s solution.&lt;br /&gt;
&lt;br /&gt;
The approximation can be proven &amp;#039;&amp;#039;tight&amp;#039;&amp;#039; (a &amp;#039;&amp;#039;tight approximation&amp;#039;&amp;#039;) by demonstrating that there exist instances on which the algorithm performs exactly at the approximation limit. In this case, it is enough to construct an input instance designed to force the algorithm into its worst-case scenario.&lt;br /&gt;
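As a concrete example, the matching-based vertex cover algorithm from the introduction is tight: on a graph consisting of a single edge, the optimum uses one vertex while the algorithm always picks two (a minimal sketch; the function name is our own):

```python
def vertex_cover_2approx(edges):
    # Matching-based 2-approximation from the introduction: pick an
    # uncovered edge and add both endpoints.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Worst-case instance: a single edge.  OPT = 1, the algorithm returns 2,
# so the ratio 2 is attained and the factor-2 analysis is tight.
print(len(vertex_cover_2approx([("u", "v")])))  # prints 2
```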
&lt;br /&gt;
== Performance guarantees ==&lt;br /&gt;
For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a &amp;#039;&amp;#039;&amp;#039;&amp;#039;&amp;#039;ρ&amp;#039;&amp;#039;-approximation algorithm&amp;#039;&amp;#039;&amp;#039; &amp;#039;&amp;#039;A&amp;#039;&amp;#039; is defined to be an algorithm for which it has been proven that the value/cost, &amp;#039;&amp;#039;f&amp;#039;&amp;#039;(&amp;#039;&amp;#039;x&amp;#039;&amp;#039;), of the approximate solution &amp;#039;&amp;#039;A&amp;#039;&amp;#039;(&amp;#039;&amp;#039;x&amp;#039;&amp;#039;) to an instance &amp;#039;&amp;#039;x&amp;#039;&amp;#039; will not be more (or less, depending on the situation) than a factor &amp;#039;&amp;#039;ρ&amp;#039;&amp;#039; times the value, OPT, of an optimum solution.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{cases}\mathrm{OPT} \leq f(x) \leq \rho \mathrm{OPT},\qquad\mbox{if } \rho &amp;gt; 1; \\ \rho \mathrm{OPT} \leq f(x) \leq \mathrm{OPT},\qquad\mbox{if } \rho &amp;lt; 1.\end{cases}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The factor &amp;#039;&amp;#039;ρ&amp;#039;&amp;#039; is called the &amp;#039;&amp;#039;relative performance guarantee&amp;#039;&amp;#039;. An approximation algorithm has an &amp;#039;&amp;#039;absolute performance guarantee&amp;#039;&amp;#039; or &amp;#039;&amp;#039;bounded error&amp;#039;&amp;#039; &amp;#039;&amp;#039;c&amp;#039;&amp;#039;, if it has been proven for every instance &amp;#039;&amp;#039;x&amp;#039;&amp;#039; that&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; (\mathrm{OPT} - c) \leq f(x) \leq (\mathrm{OPT} + c).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the &amp;#039;&amp;#039;performance guarantee&amp;#039;&amp;#039;, &amp;#039;&amp;#039;R&amp;#039;&amp;#039;(&amp;#039;&amp;#039;x,y&amp;#039;&amp;#039;), of a solution &amp;#039;&amp;#039;y&amp;#039;&amp;#039; to an instance &amp;#039;&amp;#039;x&amp;#039;&amp;#039; is defined as&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;R(x,y) = \max \left( \frac{\mathrm{OPT}}{f(y)}, \frac{f(y)}{\mathrm{OPT}} \right),&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;#039;&amp;#039;f&amp;#039;&amp;#039;(&amp;#039;&amp;#039;y&amp;#039;&amp;#039;) is the value/cost of the solution &amp;#039;&amp;#039;y&amp;#039;&amp;#039; for the instance &amp;#039;&amp;#039;x&amp;#039;&amp;#039;. Clearly, the performance guarantee is greater than or equal to 1 and equal to 1 if and only if &amp;#039;&amp;#039;y&amp;#039;&amp;#039; is an optimal solution. If an algorithm &amp;#039;&amp;#039;A&amp;#039;&amp;#039; guarantees to return solutions with a performance guarantee of at most &amp;#039;&amp;#039;r&amp;#039;&amp;#039;(&amp;#039;&amp;#039;n&amp;#039;&amp;#039;), then &amp;#039;&amp;#039;A&amp;#039;&amp;#039; is said to be an &amp;#039;&amp;#039;r&amp;#039;&amp;#039;(&amp;#039;&amp;#039;n&amp;#039;&amp;#039;)-approximation algorithm and has an &amp;#039;&amp;#039;approximation ratio&amp;#039;&amp;#039; of &amp;#039;&amp;#039;r&amp;#039;&amp;#039;(&amp;#039;&amp;#039;n&amp;#039;&amp;#039;). Likewise, a problem with an &amp;#039;&amp;#039;r&amp;#039;&amp;#039;(&amp;#039;&amp;#039;n&amp;#039;&amp;#039;)-approximation algorithm is said to be &amp;#039;&amp;#039;r&amp;#039;&amp;#039;(&amp;#039;&amp;#039;n&amp;#039;&amp;#039;)-approximable or have an approximation ratio of &amp;#039;&amp;#039;r&amp;#039;&amp;#039;(&amp;#039;&amp;#039;n&amp;#039;&amp;#039;).&amp;lt;ref name=ausiello99complexity&amp;gt;{{cite book|title=Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties|year=1999|author1=G. Ausiello |author2=P. Crescenzi |author3=G. Gambosi |author4=V. Kann |author5=A. Marchetti-Spaccamela |author6=M. Protasi }}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;kann92onthe&amp;quot;&amp;gt;{{cite book|title=On the Approximability of NP-complete Optimization Problems|author=Viggo Kann|year=1992|url=https://www.csc.kth.se/~viggo/papers/phdthesis.pdf}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
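The definition of &amp;#039;&amp;#039;R&amp;#039;&amp;#039;(&amp;#039;&amp;#039;x&amp;#039;&amp;#039;,&amp;#039;&amp;#039;y&amp;#039;&amp;#039;) is easy to state in code (a minimal sketch; OPT and &amp;#039;&amp;#039;f&amp;#039;&amp;#039;(&amp;#039;&amp;#039;y&amp;#039;&amp;#039;) are passed in as plain numbers):

```python
def performance_guarantee(opt, value):
    # R(x, y) = max(OPT / f(y), f(y) / OPT): at least 1, and exactly 1
    # if and only if the solution y is optimal.  The same formula covers
    # minimization (value above OPT) and maximization (value below OPT).
    return max(opt / value, value / opt)

print(performance_guarantee(10, 10))  # prints 1.0 (optimal solution)
print(performance_guarantee(10, 25))  # prints 2.5 (minimization: 2.5x too costly)
print(performance_guarantee(10, 4))   # prints 2.5 (maximization: 2.5x below OPT)
```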
&lt;br /&gt;
For minimization problems, the two different guarantees provide the same result, while for maximization problems a relative performance guarantee of ρ is equivalent to a performance guarantee of &amp;lt;math&amp;gt;r = \rho^{-1}&amp;lt;/math&amp;gt;. In the literature, both definitions are common, but it is clear which definition is used since, for maximization problems, ρ ≤ 1 while r ≥ 1.&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;absolute performance guarantee&amp;#039;&amp;#039; &amp;lt;math&amp;gt;\Rho_A&amp;lt;/math&amp;gt; of some approximation algorithm &amp;#039;&amp;#039;A&amp;#039;&amp;#039;, where &amp;#039;&amp;#039;x&amp;#039;&amp;#039; refers to an instance of a problem, and where &amp;lt;math&amp;gt;R_A(x)&amp;lt;/math&amp;gt; is the performance guarantee of &amp;#039;&amp;#039;A&amp;#039;&amp;#039; on &amp;#039;&amp;#039;x&amp;#039;&amp;#039; (i.e. ρ for problem instance &amp;#039;&amp;#039;x&amp;#039;&amp;#039;) is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \Rho_A = \inf \{ r \geq 1 \mid R_A(x) \leq r, \forall x \}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is to say that &amp;lt;math&amp;gt;\Rho_A&amp;lt;/math&amp;gt; is the largest bound on the approximation ratio, &amp;#039;&amp;#039;r&amp;#039;&amp;#039;, that one sees over all possible instances of the problem. Likewise, the &amp;#039;&amp;#039;asymptotic performance ratio&amp;#039;&amp;#039; &amp;lt;math&amp;gt;R_A^\infty&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; R_A^\infty = \inf \{ r \geq 1 \mid \exists n \in \mathbb{Z}^+, R_A(x) \leq r, \forall x, |x| \geq n\}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is to say that it is the same as the &amp;#039;&amp;#039;absolute performance ratio&amp;#039;&amp;#039;, with a lower bound &amp;#039;&amp;#039;n&amp;#039;&amp;#039; on the size of problem instances. These two types of ratios are used because there exist algorithms where the difference between these two is significant.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Performance guarantees&lt;br /&gt;
|-&lt;br /&gt;
!  !! &amp;#039;&amp;#039;r&amp;#039;&amp;#039;-approx&amp;lt;ref name=&amp;quot;ausiello99complexity&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;kann92onthe&amp;quot;/&amp;gt; !! &amp;#039;&amp;#039;ρ&amp;#039;&amp;#039;-approx !! rel. error&amp;lt;ref name=&amp;quot;kann92onthe&amp;quot;/&amp;gt; !! rel. error&amp;lt;ref name=&amp;quot;ausiello99complexity&amp;quot;/&amp;gt; !! norm. rel. error&amp;lt;ref name=&amp;quot;ausiello99complexity&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;kann92onthe&amp;quot;/&amp;gt; !! abs. error&amp;lt;ref name=&amp;quot;ausiello99complexity&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;kann92onthe&amp;quot;/&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! max: &amp;lt;math&amp;gt;f(x) \geq&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;r^{-1} \mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\rho \mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;(1-c)\mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;(1-c)\mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;(1-c)\mathrm{OPT} + c\mathrm{WORST}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\mathrm{OPT} - c&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! min: &amp;lt;math&amp;gt;f(x) \leq&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;r \mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\rho \mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;(1+c)\mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;(1-c)^{-1}\mathrm{OPT}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;(1-c)^{-1} \mathrm{OPT} + c\mathrm{WORST}&amp;lt;/math&amp;gt; || &amp;lt;math&amp;gt;\mathrm{OPT} + c&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Epsilon terms ==&lt;br /&gt;
In the literature, an approximation ratio for a maximization (minimization) problem of &amp;#039;&amp;#039;c&amp;#039;&amp;#039; - ϵ (min: &amp;#039;&amp;#039;c&amp;#039;&amp;#039; + ϵ) means that the algorithm has an approximation ratio of &amp;#039;&amp;#039;c&amp;#039;&amp;#039; ∓ ϵ for arbitrary ϵ &amp;gt; 0 but that the ratio has not been (or cannot be) shown for ϵ = 0. An example of this is the optimal inapproximability ratio (that is, the nonexistence of any better approximation) of 7 / 8 + ϵ for satisfiable [[MAX-3SAT]] instances due to [[Johan Håstad]].&amp;lt;ref name=&amp;quot;hastad99someoptimal&amp;quot;&amp;gt;{{cite journal|title=Some Optimal Inapproximability Results|journal=Journal of the ACM|volume=48|issue=4|pages=798–859|year=1999|url=http://www.nada.kth.se/~johanh/optimalinap.ps|author=Johan Håstad|doi=10.1145/502090.502098|citeseerx=10.1.1.638.2808|s2cid=5120748|author-link=Johan Håstad}}&amp;lt;/ref&amp;gt; As mentioned previously, when &amp;#039;&amp;#039;c&amp;#039;&amp;#039; = 1, the problem is said to have a [[polynomial-time approximation scheme]].&lt;br /&gt;
&lt;br /&gt;
An ϵ-term may appear when an approximation algorithm introduces a multiplicative error and a constant additive error while the minimum optimum of instances of size &amp;#039;&amp;#039;n&amp;#039;&amp;#039; goes to infinity as &amp;#039;&amp;#039;n&amp;#039;&amp;#039; does. In this case, the approximation ratio is &amp;#039;&amp;#039;c&amp;#039;&amp;#039; ∓ &amp;#039;&amp;#039;k&amp;#039;&amp;#039; / OPT = &amp;#039;&amp;#039;c&amp;#039;&amp;#039; ∓ o(1) for some constants &amp;#039;&amp;#039;c&amp;#039;&amp;#039; and &amp;#039;&amp;#039;k&amp;#039;&amp;#039;. Given arbitrary ϵ &amp;gt; 0, one can choose a large enough &amp;#039;&amp;#039;N&amp;#039;&amp;#039; such that the term &amp;#039;&amp;#039;k&amp;#039;&amp;#039; / OPT &amp;lt; ϵ for every &amp;#039;&amp;#039;n ≥ N&amp;#039;&amp;#039;. For every fixed ϵ, the remaining instances of size &amp;#039;&amp;#039;n &amp;lt; N&amp;#039;&amp;#039; can be solved by brute force, thereby showing that an approximation ratio of &amp;#039;&amp;#039;c&amp;#039;&amp;#039; ∓ ϵ is achievable for every ϵ &amp;gt; 0.&lt;br /&gt;
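The choice of &amp;#039;&amp;#039;N&amp;#039;&amp;#039; above can be checked numerically (with hypothetical constants c = 2 and k = 5, chosen only for illustration):

```python
c, k = 2, 5        # hypothetical ratio and additive-error constants
eps = 0.01
# Pick N so that k / OPT drops below eps whenever OPT is at least N;
# instances with a smaller optimum are handled by brute force.
N = int(k / eps) + 1
worst = max(c + k / opt for opt in range(N, N + 1000))
print(worst - c)   # the residual additive term, strictly below eps
```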
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Domination analysis]] considers guarantees in terms of the rank of the computed solution.&lt;br /&gt;
* [[Polynomial-time approximation scheme|PTAS]] - a type of approximation algorithm that takes the approximation ratio as a parameter&lt;br /&gt;
* [[Parameterized approximation algorithm]] - a type of approximation algorithm that runs in [[Fixed-parameter algorithm|FPT]] time&lt;br /&gt;
* [[APX]] is the class of problems with some constant-factor approximation algorithm&lt;br /&gt;
* [[Approximation-preserving reduction]]&lt;br /&gt;
* [[Exact algorithm]]&lt;br /&gt;
&lt;br /&gt;
==Citations==&lt;br /&gt;
{{More footnotes|date=April 2009}}&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* {{cite book&lt;br /&gt;
  | last = Vazirani&lt;br /&gt;
  | first = Vijay V.&lt;br /&gt;
  | author-link = Vijay Vazirani&lt;br /&gt;
  | title = Approximation Algorithms&lt;br /&gt;
  | publisher = Springer&lt;br /&gt;
  | year = 2003&lt;br /&gt;
  | location = Berlin&lt;br /&gt;
  | isbn = 978-3-540-65367-7 }}&lt;br /&gt;
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. &amp;#039;&amp;#039;[[Introduction to Algorithms]]&amp;#039;&amp;#039;, Second Edition. MIT Press and McGraw-Hill, 2001. {{ISBN|0-262-03293-7}}. Chapter 35: Approximation Algorithms, pp.&amp;amp;nbsp;1022&amp;amp;ndash;1056.&lt;br /&gt;
* [[Dorit S. Hochbaum]], ed. &amp;#039;&amp;#039;[[Approximation Algorithms for NP-Hard problems]]&amp;#039;&amp;#039;, PWS Publishing Company, 1997. {{ISBN|0-534-94968-1}}. Chapter 9: Various Notions of Approximations: Good, Better, Best, and More&lt;br /&gt;
*{{Citation|last1=Williamson|first1=David P.|author1-link=David P. Williamson|last2=Shmoys|first2=David B.|author-link2=David Shmoys|date=April 26, 2011|title=The Design of Approximation Algorithms|publisher=[[Cambridge University Press]]|isbn=978-0521195270}}&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
*Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, [[Marek Karpinski]] and [[Gerhard J. Woeginger|Gerhard Woeginger]], [https://www.csc.kth.se/tcs/compendium/ &amp;#039;&amp;#039;A compendium of NP optimization problems&amp;#039;&amp;#039;].&lt;br /&gt;
&lt;br /&gt;
{{optimization algorithms|combinatorial|state=expanded}}&lt;br /&gt;
&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Computational complexity theory]]&lt;br /&gt;
[[Category:Approximation algorithms| ]]&lt;/div&gt;</summary>
		<author><name>2001:638:708:306:290E:2B4C:E37E:90EA</name></author>
	</entry>
</feed>