<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Remez_algorithm</id>
	<title>Remez algorithm - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Remez_algorithm"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Remez_algorithm&amp;action=history"/>
	<updated>2026-05-04T16:37:25Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Remez_algorithm&amp;diff=2775843&amp;oldid=prev</id>
		<title>imported&gt;OAbot: Open access bot: url-access updated in citation with #oabot.</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Remez_algorithm&amp;diff=2775843&amp;oldid=prev"/>
		<updated>2025-06-19T22:07:10Z</updated>

		<summary type="html">&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/OABOT&quot; class=&quot;extiw&quot; title=&quot;wikipedia:OABOT&quot;&gt;Open access bot&lt;/a&gt;: url-access updated in citation with #oabot.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Algorithm to approximate functions}}&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;Remez algorithm&amp;#039;&amp;#039;&amp;#039; or &amp;#039;&amp;#039;&amp;#039;Remez exchange algorithm&amp;#039;&amp;#039;&amp;#039;, published by [[Evgeny Yakovlevich Remez]] in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a [[Chebyshev space]] that are the best in the [[uniform norm]] &amp;#039;&amp;#039;L&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;∞&amp;lt;/sub&amp;gt; sense.&amp;lt;ref&amp;gt;{{cite journal |author-link=Evgeny Yakovlevich Remez |first=E. Ya. |last=Remez |title=Sur la détermination des polynômes d&amp;#039;approximation de degré donnée |journal=Comm. Soc. Math. Kharkov |volume=10 |pages=41 |date=1934 }}&amp;lt;br/&amp;gt;{{cite journal |author-mask=1 |first=E. |last=Remes (Remez) |title=Sur un procédé convergent d&amp;#039;approximations successives pour déterminer les polynômes d&amp;#039;approximation |journal=Compt. Rend. Acad. Sci. |volume=198 |pages=2063–5 |language=fr |date=1934 |url=https://gallica.bnf.fr/ark:/12148/bpt6k31506/f2063.item}}&amp;lt;br/&amp;gt;{{cite journal |author-mask=1 |first=E. |last=Remes (Remez) |title=Sur le calcul effectif des polynomes d&amp;#039;approximation de Tschebyschef |journal=Compt. Rend. Acad. Sci. |volume=199 |issue= |pages=337–340 |language=fr |date=1934 |url=https://gallica.bnf.fr/ark:/12148/bpt6k3151h/f337.item}}&amp;lt;/ref&amp;gt; It is sometimes referred to as &amp;#039;&amp;#039;&amp;#039;Remes algorithm&amp;#039;&amp;#039;&amp;#039; or &amp;#039;&amp;#039;&amp;#039;Reme algorithm&amp;#039;&amp;#039;&amp;#039;.&amp;lt;ref&amp;gt;{{Cite journal |last=Chiang |first=Yi-Ling F. |date=November 1988 |title=A Modified Remes Algorithm |url=https://epubs.siam.org/doi/10.1137/0909072 |journal=SIAM Journal on Scientific and Statistical Computing |volume=9 |issue=6 |pages=1058–1072 |doi=10.1137/0909072 |issn=0196-5204|url-access=subscription }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A typical example of a Chebyshev space is the subspace of [[Chebyshev polynomials]] of order &amp;#039;&amp;#039;n&amp;#039;&amp;#039; in the [[Vector space|space]] of real [[continuous function]]s on an [[interval (mathematics)|interval]], &amp;#039;&amp;#039;C&amp;#039;&amp;#039;[&amp;#039;&amp;#039;a&amp;#039;&amp;#039;, &amp;#039;&amp;#039;b&amp;#039;&amp;#039;]. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum [[absolute difference]] between the polynomial and the function. In this case, the form of the solution is characterized by the [[equioscillation theorem]].&lt;br /&gt;
&lt;br /&gt;
==Procedure==&lt;br /&gt;
The Remez algorithm starts with the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; to be approximated and a set &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;n + 2&amp;lt;/math&amp;gt; sample points &amp;lt;math&amp;gt; x_1, x_2, ...,x_{n+2}&amp;lt;/math&amp;gt; in the approximation interval, usually the extrema of the Chebyshev polynomial, linearly mapped to the interval. The steps are:&lt;br /&gt;
&lt;br /&gt;
* Solve the linear system of equations&lt;br /&gt;
:&amp;lt;math&amp;gt; b_0 + b_1 x_i+ ... +b_n x_i ^ n + (-1)^ i E = f(x_i) &amp;lt;/math&amp;gt; (where &amp;lt;math&amp;gt; i=1, 2, ... n+2 &amp;lt;/math&amp;gt;),&lt;br /&gt;
:for the unknowns &amp;lt;math&amp;gt;b_0, b_1...b_n&amp;lt;/math&amp;gt; and &amp;#039;&amp;#039;E&amp;#039;&amp;#039;.&lt;br /&gt;
* Use the &amp;lt;math&amp;gt; b_i &amp;lt;/math&amp;gt; as coefficients to form a polynomial &amp;lt;math&amp;gt;P_n&amp;lt;/math&amp;gt;.&lt;br /&gt;
* Find the set &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; of points of local maximum error &amp;lt;math&amp;gt;|P_n(x) - f(x)| &amp;lt;/math&amp;gt;.&lt;br /&gt;
* If the errors at every &amp;lt;math&amp;gt; m \in M &amp;lt;/math&amp;gt; are of equal magnitude and alternate in sign, then &amp;lt;math&amp;gt;P_n&amp;lt;/math&amp;gt; is the minimax approximation polynomial. If not, replace &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; and repeat the steps above.&lt;br /&gt;
&lt;br /&gt;
The result is called the polynomial of best approximation or the [[minimax approximation algorithm|minimax approximation]].&lt;br /&gt;
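&lt;br /&gt;
The iteration above can be sketched in a few dozen lines. The following plain-Python sketch is illustrative only: it hand-rolls Gaussian elimination for the small linear system and locates the error extrema by a crude dense-grid search, choices made purely for self-containedness, not efficiency.&lt;br /&gt;

```python
import math

def gauss_solve(A, rhs):
    # Plain Gaussian elimination with partial pivoting; O(n^3), but the
    # levelled-error system here is only (n+2) x (n+2), so this is fine.
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def remez(f, n, a=-1.0, b=1.0, iterations=10, grid_size=4000):
    # Initial reference: Chebyshev extrema mapped linearly to [a, b].
    X = sorted((a + b) / 2 + (b - a) / 2 * math.cos(math.pi * k / (n + 1))
               for k in range(n + 2))
    for _ in range(iterations):
        # Solve the linear system for b_0..b_n and the levelled error E.
        A = [[x ** j for j in range(n + 1)] + [(-1.0) ** i]
             for i, x in enumerate(X)]
        sol = gauss_solve(A, [f(x) for x in X])
        coef, E = sol[:n + 1], sol[n + 1]
        p = lambda t: sum(c * t ** j for j, c in enumerate(coef))
        # Find the local extrema of the error on a dense grid (crude search:
        # endpoints plus the interior points where the slope changes sign).
        g = [a + (b - a) * k / grid_size for k in range(grid_size + 1)]
        e = [p(t) - f(t) for t in g]
        M = [g[0]] + [g[k] for k in range(1, grid_size)
                      if (e[k] - e[k - 1]) * (e[k + 1] - e[k]) < 0] + [g[-1]]
        if len(M) != n + 2:   # degenerate search result; keep the old reference
            break
        X = M                 # exchange all reference points at once
    return coef, abs(E)
```

For example, `remez(math.exp, 2)` yields the degree-2 minimax approximation to the exponential on [-1, 1], whose equioscillating error is roughly 0.045.&lt;br /&gt;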
&lt;br /&gt;
A review of technicalities in implementing the Remez algorithm is given by W. Fraser.&amp;lt;ref&amp;gt;{{cite journal |doi=10.1145/321281.321282 |first=W. |last=Fraser |title=A Survey of Methods of Computing Minimax and Near-Minimax Polynomial Approximations for Functions of a Single Independent Variable |journal=J. ACM |volume=12 |pages=295–314 |year=1965 |issue=3 |s2cid=2736060 |doi-access=free }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Choice of initialization===&lt;br /&gt;
The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation. For the initialization of the optimization problem for function &amp;#039;&amp;#039;f&amp;#039;&amp;#039; by the Lagrange interpolant &amp;#039;&amp;#039;L&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;n&amp;lt;/sub&amp;gt;(&amp;#039;&amp;#039;f&amp;#039;&amp;#039;), it can be shown that this initial approximation is bounded by&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\lVert f - L_n(f)\rVert_\infty \le (1 + \lVert L_n\rVert_\infty) \inf_{p \in P_n} \lVert f - p\rVert&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with the norm or [[Lebesgue constant]] of the Lagrange interpolation operator &amp;#039;&amp;#039;L&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;n&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt; of the nodes (&amp;#039;&amp;#039;t&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;1&amp;lt;/sub&amp;gt;, ..., &amp;#039;&amp;#039;t&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;n&amp;#039;&amp;#039;&amp;amp;nbsp;+&amp;amp;nbsp;1&amp;lt;/sub&amp;gt;) being&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\lVert L_n\rVert_\infty = \overline{\Lambda}_n(T) = \max_{-1 \le x \le 1} \lambda_n(T; x),&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;T&amp;#039;&amp;#039; being the zeros of the Chebyshev polynomials, and the Lebesgue function being&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\lambda_n(T; x) = \sum_{j = 1}^{n + 1} \left| l_j(x) \right|, \quad l_j(x) = \prod_{\stackrel{i = 1}{i \ne j}}^{n + 1} \frac{(x - t_i)}{(t_j - t_i)}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Theodore A. Kilgore,&amp;lt;ref&amp;gt;{{cite journal |doi=10.1016/0021-9045(78)90013-8 |first=T. A. |last=Kilgore |title=A characterization of the Lagrange interpolating projection with minimal Tchebycheff norm |journal=J. Approx. Theory |volume=24 |pages=273–288 |year=1978 |issue=4 |doi-access= }}&amp;lt;/ref&amp;gt; Carl de Boor, and Allan Pinkus&amp;lt;ref&amp;gt;{{cite journal |doi=10.1016/0021-9045(78)90014-X |first1=C. |last1=de Boor |first2=A. |last2=Pinkus |title=Proof of the conjectures of Bernstein and Erdös concerning the optimal nodes for polynomial interpolation |journal=[[Journal of Approximation Theory]] |volume=24 |pages=289–303 |year=1978 |issue=4 |doi-access=free }}&amp;lt;/ref&amp;gt; proved that there exists a unique &amp;#039;&amp;#039;t&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;i&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt; for each &amp;#039;&amp;#039;L&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;n&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt;, although not known explicitly for (ordinary) polynomials. Similarly, &amp;lt;math&amp;gt;\underline{\Lambda}_n(T) = \min_{-1 \le x \le 1} \lambda_n(T; x)&amp;lt;/math&amp;gt;, and the optimality of a choice of nodes can be expressed as &amp;lt;math&amp;gt;\overline{\Lambda}_n - \underline{\Lambda}_n \ge 0.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Chebyshev nodes, which provide a suboptimal but analytically explicit choice, the asymptotic behavior is known to be&amp;lt;ref&amp;gt;{{cite journal |first1=F. W. |last1=Luttmann |first2=T. J. |last2=Rivlin |title=Some numerical experiments in the theory of polynomial interpolation |journal=IBM J. Res. Dev. |volume=9 |pages=187–191 |year=1965 |issue=3 |doi= 10.1147/rd.93.0187}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\overline{\Lambda}_n(T) = \frac{2}{\pi} \log(n + 1) + \frac{2}{\pi}\left(\gamma + \log\frac{8}{\pi}\right) + \alpha_{n + 1}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
({{math|&amp;#039;&amp;#039;γ&amp;#039;&amp;#039;}} being the [[Euler–Mascheroni constant]]) with&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;0 &amp;lt; \alpha_n &amp;lt; \frac{\pi}{72 n^2}&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n \ge 1,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and upper bound&amp;lt;ref&amp;gt;{{cite book |first=T.J. |last=Rivlin |chapter=The lebesgue constants for polynomial interpolation |chapter-url=https://link.springer.com/chapter/10.1007/BFb0063594 |doi=10.1007/BFb0063594 |series=Lecture Notes in Mathematics |volume=399 |editor-last=Garnir |editor-first=H.G. |editor2-last=Unni |editor2-first=K.R. |editor3-last=Williamson |editor3-first=J.H. |title=Functional Analysis and its Applications |publisher=Springer |date=1974 |isbn=978-3-540-37827-3 |pages=422–437 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\overline{\Lambda}_n(T) \le \frac{2}{\pi} \log(n + 1) + 1&amp;lt;/math&amp;gt;&lt;br /&gt;
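&lt;br /&gt;
These estimates are easy to probe numerically. The following illustrative Python script computes the Lebesgue constant for interpolation at the Chebyshev zeros and compares it with the asymptotic expression and the upper bound above; the maximization over &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is approximated on a dense grid, so the computed value slightly underestimates the true maximum.&lt;br /&gt;

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def chebyshev_lebesgue_constant(n, grid_size=4000):
    # Lebesgue constant for interpolation at the n+1 Chebyshev zeros,
    # maximizing the Lebesgue function lambda_n(T; x) on a dense grid.
    t = [math.cos((2 * j + 1) * math.pi / (2 * (n + 1))) for j in range(n + 1)]
    def lebesgue_function(x):
        total = 0.0
        for j in range(n + 1):
            term = 1.0
            for i in range(n + 1):
                if i != j:
                    term *= (x - t[i]) / (t[j] - t[i])
            total += abs(term)
        return total
    return max(lebesgue_function(-1 + 2 * k / grid_size)
               for k in range(grid_size + 1))

def asymptotic(n):
    # Leading terms of the Luttmann-Rivlin expansion (alpha_{n+1} dropped,
    # which costs less than pi / (72 (n+1)^2)).
    return (2 / math.pi) * math.log(n + 1) \
        + (2 / math.pi) * (EULER_GAMMA + math.log(8 / math.pi))

def upper_bound(n):
    return (2 / math.pi) * math.log(n + 1) + 1
```

Running `chebyshev_lebesgue_constant(n)` for moderate &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; confirms that the computed constant sits just above the truncated asymptotic value and below the upper bound.&lt;br /&gt;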
&lt;br /&gt;
Lev Brutman&amp;lt;ref&amp;gt;{{cite journal |doi=10.1137/0715046 |first=L. |last=Brutman |title=On the Lebesgue Function for Polynomial Interpolation |journal=SIAM J. Numer. Anal. |volume=15 |pages=694–704 |year=1978 |issue=4 |bibcode=1978SJNA...15..694B }}&amp;lt;/ref&amp;gt; obtained the following bound for &amp;lt;math&amp;gt;n \ge 3&amp;lt;/math&amp;gt;, with &amp;lt;math&amp;gt;\hat{T}&amp;lt;/math&amp;gt; being the zeros of the expanded Chebyshev polynomials:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) &amp;lt; \overline{\Lambda}_3 - \frac{1}{6} \cot \frac{\pi}{8} + \frac{\pi}{64} \frac{1}{\sin^2(3\pi/16)} - \frac{2}{\pi}(\gamma - \log\pi)\approx 0.201.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Rüdiger Günttner&amp;lt;ref&amp;gt;{{cite journal |doi=10.1137/0717043 |first=R. |last=Günttner |title=Evaluation of Lebesgue Constants |journal=SIAM J. Numer. Anal. |volume=17 |pages=512–520 |year=1980 |issue=4 |bibcode=1980SJNA...17..512G }}&amp;lt;/ref&amp;gt; obtained from a sharper estimate the following bound for &amp;lt;math&amp;gt;n \ge 40&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) &amp;lt; 0.0196.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Detailed discussion==&lt;br /&gt;
This section provides more information on the steps outlined above. In this section, the index &amp;#039;&amp;#039;i&amp;#039;&amp;#039; runs from 0 to &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+1.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 1:&amp;#039;&amp;#039;&amp;#039; Given &amp;lt;math&amp;gt;x_0, x_1, ... x_{n+1}&amp;lt;/math&amp;gt;, solve the linear system of &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+2 equations&lt;br /&gt;
:&amp;lt;math&amp;gt; b_0 + b_1 x_i+ ... +b_n x_i ^ n + (-1) ^ i E = f(x_i) &amp;lt;/math&amp;gt; (where &amp;lt;math&amp;gt; i=0, 1, ... n+1 &amp;lt;/math&amp;gt;),&lt;br /&gt;
:for the unknowns &amp;lt;math&amp;gt;b_0, b_1, ...b_n&amp;lt;/math&amp;gt; and &amp;#039;&amp;#039;E&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
The term &amp;lt;math&amp;gt;(-1)^i E&amp;lt;/math&amp;gt; in this equation makes sense only if the nodes &amp;lt;math&amp;gt;x_0, ...,x_{n+1}&amp;lt;/math&amp;gt; are &amp;#039;&amp;#039;ordered&amp;#039;&amp;#039;, either strictly increasing or strictly decreasing. Under this assumption, the linear system has a unique solution. (This is not automatic: not every linear system has a solution.) Moreover, the solution can be obtained with only &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; arithmetic operations, while a standard library solver would take &amp;lt;math&amp;gt;O(n^3)&amp;lt;/math&amp;gt; operations. Here is a simple proof:&lt;br /&gt;
&lt;br /&gt;
Compute the standard &amp;#039;&amp;#039;n&amp;#039;&amp;#039;-th degree interpolant &amp;lt;math&amp;gt;p_1(x)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;f(x)&amp;lt;/math&amp;gt; at the first &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+1 nodes and also the standard &amp;#039;&amp;#039;n&amp;#039;&amp;#039;-th degree interpolant&lt;br /&gt;
&amp;lt;math&amp;gt;p_2(x)&amp;lt;/math&amp;gt; to the ordinates &amp;lt;math&amp;gt;(-1)^i&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;p_1(x_i) = f(x_i), p_2(x_i) = (-1)^i, i = 0, ..., n.&amp;lt;/math&amp;gt;&lt;br /&gt;
To this end, use each time Newton&amp;#039;s interpolation formula with the divided&lt;br /&gt;
differences of order &amp;lt;math&amp;gt;0, ...,n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; arithmetic operations.&lt;br /&gt;
&lt;br /&gt;
The polynomial &amp;lt;math&amp;gt;p_2(x)&amp;lt;/math&amp;gt; has its &amp;#039;&amp;#039;i&amp;#039;&amp;#039;-th zero between &amp;lt;math&amp;gt;x_{i-1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i,\ i=1, ...,n&amp;lt;/math&amp;gt;, and thus no further zeroes between &amp;lt;math&amp;gt;x_n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_{n+1}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;p_2(x_n)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p_2(x_{n+1})&amp;lt;/math&amp;gt; have the same sign &amp;lt;math&amp;gt;(-1)^n&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The linear combination&lt;br /&gt;
&amp;lt;math&amp;gt;p(x) := p_1 (x) - p_2(x)\!\cdot\!E&amp;lt;/math&amp;gt; is also a polynomial of degree &amp;#039;&amp;#039;n&amp;#039;&amp;#039; and&lt;br /&gt;
:&amp;lt;math&amp;gt;p(x_i) = p_1(x_i) - p_2(x_i)\!\cdot\! E \ = \ f(x_i) - (-1)^i E,\ \ \ \  i =0, \ldots, n.&amp;lt;/math&amp;gt;&lt;br /&gt;
This is the same as the equation above for &amp;lt;math&amp;gt;i = 0, ... ,n&amp;lt;/math&amp;gt; and for any choice of &amp;#039;&amp;#039;E&amp;#039;&amp;#039;.&lt;br /&gt;
The same equation for &amp;#039;&amp;#039;i&amp;#039;&amp;#039; = &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+1 is&lt;br /&gt;
:&amp;lt;math&amp;gt;p(x_{n+1}) \ = \ p_1(x_{n+1}) - p_2(x_{n+1})\!\cdot\!E \ = \ f(x_{n+1}) - (-1)^{n+1} E&amp;lt;/math&amp;gt; and needs special reasoning:  solved for the variable &amp;#039;&amp;#039;E&amp;#039;&amp;#039;, it is the &amp;#039;&amp;#039;definition&amp;#039;&amp;#039; of &amp;#039;&amp;#039;E&amp;#039;&amp;#039;:&lt;br /&gt;
:&amp;lt;math&amp;gt;E \ := \ \frac{p_1(x_{n+1}) - f(x_{n+1})}{p_2(x_{n+1}) + (-1)^n}.&amp;lt;/math&amp;gt;&lt;br /&gt;
As mentioned above, the two terms in the denominator have the same sign:&lt;br /&gt;
&amp;#039;&amp;#039;E&amp;#039;&amp;#039; and thus &amp;lt;math&amp;gt;p(x) \equiv b_0 + b_1x + \ldots + b_nx^n&amp;lt;/math&amp;gt; are always well-defined.&lt;br /&gt;
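&lt;br /&gt;
The &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; construction just described can be written out directly. A sketch in Python (illustrative only; the function and the ordered node list are assumed given by the caller):&lt;br /&gt;

```python
def newton_coeffs(xs, ys):
    # Divided differences of order 0..n, computed in place in O(n^2) operations.
    c = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(c, xs, x):
    # Horner-style evaluation of the Newton interpolation form.
    v = c[-1]
    for i in range(len(c) - 2, -1, -1):
        v = v * (x - xs[i]) + c[i]
    return v

def levelled_error(f, X):
    # X: the n+2 ordered nodes x_0..x_{n+1}.
    # p1 interpolates f and p2 interpolates (-1)^i, both at x_0..x_n;
    # E then follows from the closed-form expression in the text, and
    # p(x) = p1(x) - E * p2(x) satisfies p(x_i) - f(x_i) = -(-1)^i E.
    xs, x_last = X[:-1], X[-1]
    n = len(xs) - 1
    c1 = newton_coeffs(xs, [f(x) for x in xs])
    c2 = newton_coeffs(xs, [(-1.0) ** i for i in range(n + 1)])
    E = (newton_eval(c1, xs, x_last) - f(x_last)) / (
        newton_eval(c2, xs, x_last) + (-1.0) ** n)
    p = lambda x: newton_eval(c1, xs, x) - E * newton_eval(c2, xs, x)
    return p, E
```

The resulting error alternates in sign at all &amp;lt;math&amp;gt;n+2&amp;lt;/math&amp;gt; nodes, which is exactly the property used in the next paragraph.&lt;br /&gt;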
&lt;br /&gt;
The error at the given &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+2 ordered nodes is positive and negative in turn because&lt;br /&gt;
:&amp;lt;math&amp;gt;p(x_i) - f(x_i) \ = \ -(-1)^i E,\ \ i = 0, ... , n\!+\!1. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The theorem of [[Charles Jean de la Vallée Poussin|de La Vallée Poussin]] states that under this condition no polynomial of degree &amp;#039;&amp;#039;n&amp;#039;&amp;#039; exists with error less than &amp;#039;&amp;#039;E&amp;#039;&amp;#039;. Indeed, if such a polynomial existed, call it &amp;lt;math&amp;gt;\tilde p(x)&amp;lt;/math&amp;gt;, then the difference&lt;br /&gt;
&amp;lt;math&amp;gt;p(x)-\tilde p(x) = (p(x) - f(x)) - (\tilde p(x) - f(x))&amp;lt;/math&amp;gt; would still be positive/negative at the &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+2 nodes &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; and therefore have at least &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+1 zeros which is impossible for a polynomial of degree &amp;#039;&amp;#039;n&amp;#039;&amp;#039;.&lt;br /&gt;
Thus, this &amp;#039;&amp;#039;E&amp;#039;&amp;#039; is a lower bound for the minimum error which can be achieved with polynomials of degree &amp;#039;&amp;#039;n&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 2&amp;#039;&amp;#039;&amp;#039; changes the notation from&lt;br /&gt;
&amp;lt;math&amp;gt;b_0 + b_1x + ... + b_nx^n&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;p(x)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 3&amp;#039;&amp;#039;&amp;#039; improves upon the input nodes &amp;lt;math&amp;gt;x_0, ..., x_{n+1}&amp;lt;/math&amp;gt; and their errors &amp;lt;math&amp;gt;\pm E&amp;lt;/math&amp;gt; as follows.&lt;br /&gt;
&lt;br /&gt;
In each P-region, the current node &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; is replaced with the local maximizer &amp;lt;math&amp;gt;\bar{x}_i&amp;lt;/math&amp;gt; and in each N-region &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; is replaced with the local minimizer.  (Expect &amp;lt;math&amp;gt;\bar{x}_0&amp;lt;/math&amp;gt; at &amp;#039;&amp;#039;A&amp;#039;&amp;#039;, the &amp;lt;math&amp;gt;\bar {x}_i&amp;lt;/math&amp;gt; near &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\bar{x}_{n+1}&amp;lt;/math&amp;gt; at &amp;#039;&amp;#039;B&amp;#039;&amp;#039;.) No high precision is required here,&lt;br /&gt;
the standard &amp;#039;&amp;#039;line search&amp;#039;&amp;#039; with a couple of &amp;#039;&amp;#039;quadratic fits&amp;#039;&amp;#039;  should suffice. (See &amp;lt;ref&amp;gt;{{cite book |last1=Luenberger |first1=D.G. |last2=Ye |first2=Y. |chapter=Basic Descent Methods |chapter-url=https://link.springer.com/chapter/10.1007/978-0-387-74503-9_8 |title=Linear and Nonlinear Programming |publisher=Springer |edition=3rd |series=International Series in Operations Research &amp;amp; Management Science |volume=116 |date=2008 |isbn=978-0-387-74503-9 |pages=215–262 |doi=10.1007/978-0-387-74503-9_8}}&amp;lt;/ref&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;z_i := p(\bar{x}_i) - f(\bar{x}_i)&amp;lt;/math&amp;gt;. Each amplitude &amp;lt;math&amp;gt;|z_i|&amp;lt;/math&amp;gt; is greater than or equal to &amp;#039;&amp;#039;E&amp;#039;&amp;#039;. The Theorem of &amp;#039;&amp;#039;de La Vallée Poussin&amp;#039;&amp;#039; and its proof also&lt;br /&gt;
apply to &amp;lt;math&amp;gt;z_0, ... ,z_{n+1}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\min\{|z_i|\} \geq E&amp;lt;/math&amp;gt; as the new&lt;br /&gt;
lower bound for the best error possible with polynomials of degree &amp;#039;&amp;#039;n&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
Moreover, &amp;lt;math&amp;gt;\max\{|z_i|\}&amp;lt;/math&amp;gt; comes in handy as an obvious upper bound for that best possible error.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 4:&amp;#039;&amp;#039;&amp;#039; With &amp;lt;math&amp;gt;\min\,\{|z_i|\}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\max\,\{|z_i|\}&amp;lt;/math&amp;gt; as lower and upper bound for the best possible approximation error, one has a reliable stopping criterion: repeat the steps until &amp;lt;math&amp;gt;\max\{|z_i|\} - \min\{|z_i|\}&amp;lt;/math&amp;gt; is sufficiently small or no longer decreases. These bounds indicate the progress.&lt;br /&gt;
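&lt;br /&gt;
As a small illustration of this stopping criterion (the function names here are ours, not standard):&lt;br /&gt;

```python
def vallee_poussin_bounds(z):
    # z: the signed errors p(x_i) - f(x_i) at the exchanged extrema.
    # min|z_i| and max|z_i| bracket the best attainable error.
    lo = min(abs(v) for v in z)
    hi = max(abs(v) for v in z)
    return lo, hi

def converged(z, rel_tol=1e-9):
    # Stop once the bracket has (almost) collapsed.
    lo, hi = vallee_poussin_bounds(z)
    return hi - lo <= rel_tol * hi
```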
&lt;br /&gt;
==Variants==&lt;br /&gt;
Several modifications of the algorithm have been proposed in the literature.&amp;lt;ref&amp;gt;{{Citation |last1=Egidi |first1=Nadaniela |title=A New Remez-Type Algorithm for Best Polynomial Approximation |date=2020 |url=http://link.springer.com/10.1007/978-3-030-39081-5_7 |work=Numerical Computations: Theory and Algorithms |volume=11973 |pages=56–69 |editor-last=Sergeyev |editor-first=Yaroslav D. |place=Cham |publisher=Springer |doi=10.1007/978-3-030-39081-5_7 |isbn=978-3-030-39080-8 |last2=Fatone |first2=Lorella |last3=Misici |first3=Luciano |s2cid=211159177 |editor2-last=Kvasov |editor2-first=Dmitri E.|url-access=subscription }}&amp;lt;/ref&amp;gt; These include:&lt;br /&gt;
&lt;br /&gt;
* Replacing more than one sample point with the locations of nearby maximum absolute differences.{{Citation needed|date=March 2022}}&lt;br /&gt;
* Replacing all of the sample points in a single iteration with the locations of all of the alternating-sign maximum differences.&amp;lt;ref name=&amp;quot;toobs&amp;quot;&amp;gt;{{cite journal |last1=Temes |first1=G.C. |last2=Barcilon |first2=V. |last3=Marshall |first3=F.C. |title=The optimization of bandlimited systems |journal=Proceedings of the IEEE |volume=61 |issue=2 |pages=196–234 |date=1973 |doi=10.1109/PROC.1973.9004 |issn=0018-9219}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Using the relative error to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer that uses [[floating point]] arithmetic.&lt;br /&gt;
* Including zero-error point constraints.&amp;lt;ref name=&amp;quot;toobs&amp;quot; /&amp;gt;&lt;br /&gt;
* The Fraser-Hart variant, used to determine the best rational Chebyshev approximation.&amp;lt;ref&amp;gt;{{Cite journal |last=Dunham |first=Charles B. |date=1975 |title=Convergence of the Fraser-Hart algorithm for rational Chebyshev approximation |url=https://www.ams.org/mcom/1975-29-132/S0025-5718-1975-0388732-9/ |journal=Mathematics of Computation |language=en |volume=29 |issue=132 |pages=1078–1082 |doi=10.1090/S0025-5718-1975-0388732-9 |issn=0025-5718|doi-access=free |url-access=subscription }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
{{Portal|Mathematics}}&lt;br /&gt;
* {{annotated link|Hadamard&amp;#039;s lemma}}&lt;br /&gt;
* {{annotated link|Laurent series}}&lt;br /&gt;
* {{annotated link|Padé approximant}}&lt;br /&gt;
* {{annotated link|Newton series}}&lt;br /&gt;
* {{annotated link|Approximation theory}}&lt;br /&gt;
* {{annotated link|Function approximation}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
*[https://www.boost.org/doc/libs/1_47_0/libs/math/doc/sf_and_dist/html/math_toolkit/toolkit/internals2/minimax.html Minimax Approximations and the Remez Algorithm], background chapter in the [[Boost (C++ libraries)|Boost]] Math Tools documentation, with link to an implementation in C++&lt;br /&gt;
*[http://www.bores.com/courses/intro/filters/4_equi.htm Intro to DSP]&lt;br /&gt;
*{{MathWorld|urlname=RemezAlgorithm|title=Remez Algorithm|author1-link=Ronald Aarts|author=Aarts, Ronald M.|author2=Bond, Charles|author3=Mendelsohn, Phil|author4= Weisstein, Eric W.|name-list-style=amp}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Polynomials]]&lt;br /&gt;
[[Category:Approximation theory]]&lt;br /&gt;
[[Category:Numerical analysis]]&lt;/div&gt;</summary>
		<author><name>imported&gt;OAbot</name></author>
	</entry>
</feed>