Numerical differentiation
{{Short description|Use of numerical analysis to estimate derivatives of functions}}
[[File:Derivative.svg|thumb|[[Finite difference]] estimation of derivative|230px|right]]
In [[numerical analysis]], '''numerical differentiation''' [[algorithm]]s estimate the [[derivative]] of a [[Function (mathematics)|mathematical function]] or [[Function (computer programming)|subroutine]] using values of the function. Unlike analytical differentiation, which provides exact expressions for derivatives, numerical differentiation relies on the function's values at a set of discrete points to estimate the derivative's value at those points or at intermediate points. This approach is particularly useful when dealing with data obtained from experiments, simulations, or situations where the function is defined only at specific intervals.


==Finite differences==
{{see also|Finite difference}}
The simplest method is to use finite difference approximations.

A simple two-point estimation is to compute the slope of a nearby [[secant line]] through the points {{math|(''x'', ''f''(''x''))}} and {{math|(''x'' + ''h'', ''f''(''x'' + ''h''))}}.<ref>Richard L. Burden, J. Douglas Faires (2000), ''Numerical Analysis'' (7th ed.), Brooks/Cole.</ref> Here {{mvar|h}} is a small number that represents a small change in {{mvar|x}}; it can be either positive or negative. The slope of this line is
<math display="block">\frac{f(x+h) - f(x)}{h}.</math>
This expression is [[Isaac Newton|Newton]]'s [[difference quotient]] (also known as a first-order [[divided difference]]).


To obtain an error estimate for this approximation, one can use Taylor expansion of <math>f(x)</math> about the base point <math>x</math> to give
<math display="block">
f(x+h) = f(x) + hf'(x) + \frac{h^2}{2}f''(c)
</math>
for some <math>c</math> between <math>x</math> and <math>x+h</math>. Rearranging gives
<math display="block">
f'(x) = \underbrace{\frac{f(x+h) - f(x)}{h}}_{\text{Slope of secant line}} - \underbrace{\frac{h}{2}f''(c)}_{\text{Error term}}.
</math>
The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to {{mvar|h}}. As {{mvar|h}} approaches zero, the slope of the secant line approaches the slope of the tangent line and the error term vanishes. Therefore, the true '''derivative of {{math|''f''}} at {{mvar|x}}''' is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line:
<math display="block">f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.</math>
<math display="block">f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.</math>


Since immediately substituting 0 for {{mvar|h}} results in the [[indeterminate form]] <math>\tfrac{0}{0}</math>, calculating the derivative directly can be unintuitive.

Equivalently, the slope could be estimated by employing the positions {{math|''x'' − ''h''}} and {{mvar|x}}.

Another two-point formula is to compute the slope of a nearby secant line through the points {{math|(''x'' − ''h'', ''f''(''x'' − ''h''))}} and {{math|(''x'' + ''h'', ''f''(''x'' + ''h''))}}. The slope of this line is
<math display="block">\frac{f(x+h) - f(x-h)}{2h}.</math>

This formula is known as the [[symmetric difference quotient]]. In this case the first-order errors cancel, so the slope of these secant lines differs from the slope of the tangent line by an amount that is approximately proportional to <math>h^2</math>. Hence for small values of {{mvar|h}} this is a more accurate approximation to the tangent line than the one-sided estimation. However, although the slope is being computed at {{mvar|x}}, the value of the function at {{mvar|x}} is not involved.

The estimation error is given by
<math display="block">R = \frac{-f^{(3)}(c)}{6}h^{2},</math>
where <math>c</math> is some point between <math>x - h</math> and <math>x + h</math>. This error does not include the [[rounding error]] due to numbers being represented and calculations being performed in limited precision.

The symmetric difference quotient is employed as the method of approximating the derivative in a number of calculators, including [[TI-82]], [[TI-83]], [[TI-84]], [[TI-85]], all of which use this method with {{math|''h'' {{=}} 0.001}}.

==Step size==
[[Image:AbsoluteErrorNumericalDifferentiationExample.png|thumb|300px|Example showing the difficulty of choosing {{mvar|h}} due to both rounding error and formula error]]


An important consideration in practice when the function is calculated using [[floating-point arithmetic]] of finite precision is the choice of step size, {{mvar|h}}. To illustrate, consider the two-point approximation formula with error term:
 
<math display="block">
f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{f^{(3)}(c)}{6}h^2
</math>
 
where <math>c</math> is some point between <math>x - h</math> and <math>x + h</math>. Let <math>e(x)</math> denote the roundoff error encountered when evaluating the function <math>f(x)</math> and <math>\hat{f}(x)</math> denote the computed value of <math>f(x)</math>. Therefore,
<math display="block">
f(x) = \hat{f}(x) + e(x).
</math>
The total error in the approximation is
<math display="block">
f'(x) - \frac{\hat{f}(x + h) - \hat{f}(x-h)}{2h} = \underbrace{\frac{e(x + h) - e(x-h)}{2h}}_{\text{Roundoff error}} - \underbrace{\frac{f^{(3)}(c)}{6}h^2}_{\text{Truncation error}}.
</math>
Assuming that the roundoff errors are bounded by some number <math>\epsilon > 0</math> and the [[third derivative]] of <math>f(x)</math> is bounded by some number <math>M > 0</math>, we get
<math display="block">
\left| f'(x) - \frac{\hat{f}(x + h) - \hat{f}(x-h)}{2h} \right| \le \frac{\epsilon}{h} + \frac{h^2}{6}M.
</math>
 
To reduce the truncation error <math>\frac{h^2}{6}M</math>, we must reduce {{mvar|h}}. But as {{mvar|h}} is reduced, the roundoff error <math>\frac{\epsilon}{h}</math> increases. Due to the need to divide by the small value {{mvar|h}}, all the finite-difference formulae for numerical differentiation are similarly [[ill-conditioned]].<ref name=Fornberg1>Numerical Differentiation of Analytic Functions, B Fornberg – ACM Transactions on Mathematical Software (TOMS), 1981.</ref>


To demonstrate this difficulty, consider approximating the derivative of the function
<math display="block">
f(x) = \frac{2x}{1 + \sqrt{x}}
</math>
at the point <math>x_{0} = 9</math>. In this case, we can calculate
<math display="block">
f'(x) = \frac{2 + \sqrt{x}}{(1 + \sqrt{x})^{2}}
</math>
which gives
<math display="block">
f'(9) = \frac{2 + \sqrt{9}}{(1 + \sqrt{9})^{2}} = \frac{2 + 3}{(1 + 3)^{2}} = \frac{5}{16} = 0.3125.
</math>
Using 64-bit floating-point numbers, the following approximations are generated with the two-point approximation formula and progressively smaller step sizes. The smallest absolute error is produced for a step size of <math>10^{-4}</math>, after which the absolute error steadily increases as the roundoff errors dominate the calculations.
{| class="wikitable"
|-
! Step Size (h) !! Approximation !! Absolute Error
|-
| <math>10^{-1}</math> || <math>0.31250397877426117</math> || <math>3.97877 \times 10^{-6}</math>
|-
| <math>10^{-2}</math> || <math>0.31250003978589014</math> || <math>3.97858 \times 10^{-8}</math>
|-
| <math>10^{-3}</math>|| <math>0.3125000003976197</math> || <math>3.97619 \times 10^{-10}</math>
|- style="background: var(--background-color-warning-subtle,#fdf2d5);"
| <math>10^{-4}</math>|| <math>0.31250000000593303</math> || <math>5.93303 \times 10^{-12}</math>
|-
| <math>10^{-5}</math>|| <math>0.3124999999215561</math> || <math>7.84439 \times 10^{-11}</math>
|-
| <math>10^{-6}</math>|| <math>0.31249999965510256</math> || <math>3.44897 \times 10^{-10}</math>
|-
| <math>10^{-7}</math> || <math>0.31249999921101335</math> || <math>7.88986 \times 10^{-10}</math>
|-
| <math>10^{-8}</math> || <math>0.31250002585636594</math> || <math>2.58563 \times 10^{-8}</math>
|-
| <math>10^{-9}</math> || <math>0.31250024790097086</math> || <math>2.47900 \times 10^{-7}</math>
|-
| <math>10^{-10}</math> || <math>0.31249669518729206</math> || <math>3.30481 \times 10^{-6}</math>
|-
| <math>10^{-11}</math> || <math>0.31246116805050406</math> || <math>3.88319 \times 10^{-5}</math>
|-
| <math>10^{-12}</math> || <math>0.312194714524594</math> || <math>3.05285 \times 10^{-4}</math>
|-
| <math>10^{-13}</math> || <math>0.31530333899354446</math> || <math>2.80333 \times 10^{-3}</math>
|-
| <math>10^{-14}</math> || <math>0.3552713678800501</math> || <math>4.27713 \times 10^{-2}</math>
|}
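
The behaviour in this table can be reproduced with a short [[Python (programming language)|Python]] script; the following is a minimal sketch using the function and evaluation point of the example above (the output formatting is incidental):
<syntaxhighlight lang="python">
import math

def func(x):
    """f(x) = 2x / (1 + sqrt(x)); the exact derivative at 9 is 5/16 = 0.3125."""
    return 2 * x / (1 + math.sqrt(x))

exact = 0.3125
for k in range(1, 15):
    h = 10.0 ** -k
    # Symmetric (two-point) difference quotient in 64-bit floating point.
    approx = (func(9 + h) - func(9 - h)) / (2 * h)
    print(f"h=1e-{k:02d}  approx={approx:.17f}  error={abs(approx - exact):.5e}")
</syntaxhighlight>
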
For basic central differences, the optimal step is the cube-root of [[machine epsilon]].<ref>Sauer, Timothy (2012). ''Numerical Analysis''. Pearson. p. 248.</ref>
For the numerical derivative formula evaluated at {{mvar|x}} and {{math|''x'' + ''h''}}, a choice for {{mvar|h}} that is small without producing a large rounding error is <math>\sqrt{\varepsilon} x</math> (though not when ''x'' = 0), where the [[machine epsilon]] {{mvar|ε}} is typically of the order of {{val|2.2e−16}} for [[double precision]].<ref>Following ''[[Numerical Recipes]] in C'', [http://www.nrbook.com/a/bookcpdf/c5-7.pdf Chapter 5.7].</ref> A formula for {{mvar|h}} that balances the rounding error against the secant error for optimum accuracy is<ref>[http://www.uio.no/studier/emner/matnat/math/MAT-INF1100/h10/kompendiet/kap11.pdf p. 263].</ref>
<math display="block">h = 2\sqrt{\varepsilon \left| \frac{f(x)}{f''(x)} \right|}</math>
(though not when <math>f''(x) = 0</math>), and to employ it will require knowledge of the function.

For computer calculations the problems are exacerbated because, although {{mvar|x}} necessarily holds a representable floating-point number in some precision (32 or 64-bit, etc.), {{math|''x'' + ''h''}} almost certainly will not be exactly representable in that precision. This means that {{math|''x'' + ''h''}} will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that {{math|(''x'' + ''h'') − ''x''}} will not equal {{mvar|h}}; the two function evaluations will not be exactly {{mvar|h}} apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal), a seemingly round step such as {{math|''h'' {{=}} 0.1}} will not be a round number in binary; it is 0.000110011001100...<sub>2</sub> A possible approach is as follows:

 h     := sqrt(eps) * x;
 xph   := x + h;
 dx    := xph - x;
 slope := (F(xph) - F(x)) / dx;

However, with computers, [[compiler optimization]] facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that {{math|''dx''}} and {{mvar|h}} are the same. With [[C (programming language)|C]] and similar languages, a directive that {{math|''xph''}} is a [[volatile variable]] will prevent this.


==Three-point methods==
{{further|Finite difference coefficient}}
To obtain more general derivative approximation formulas for some function <math>f(x)</math>, let <math>h>0</math> be a positive number close to zero. The Taylor expansion of <math>f(x)</math> about the base point <math>x</math> is
{{NumBlk|:|<math>f(x + h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \frac{h^3}{3!}f'''(x) + \cdots</math>|{{EquationRef|1}}}}

Replacing {{mvar|h}} by {{math|2''h''}} gives

{{NumBlk|:|<math>f(x + 2h) = f(x) + 2hf'(x) + \frac{4h^2}{2!}f''(x) + \frac{8h^3}{3!}f'''(x) + \cdots</math>|{{EquationRef|2}}}}

Multiplying identity (1) by 4 gives

{{NumBlk|:|<math>4f(x + h) = 4f(x) + 4hf'(x) + \frac{4h^2}{2!}f''(x) + \frac{4h^3}{3!}f'''(x) + \cdots</math>|{{EquationRef|1'}}}}

Subtracting identity (1') from identity (2) eliminates the <math>h^2</math> term:
<math display="block">f(x+2h) - 4f(x+h) = -3f(x) - 2hf'(x) + \frac{4h^3}{3!}f'''(x) + \cdots,</math>
which can be written as
<math display="block">f(x+2h) - 4f(x+h) = -3f(x) - 2hf'(x) + O(h^3).</math>
Rearranging terms gives
<math display="block">f'(x) = \frac{-3f(x) + 4f(x+h) - f(x+2h)}{2h} + O(h^2),</math>
which is called the three-point forward difference formula for the derivative. Using a similar approach, one can show
<math display="block">f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2),</math>
which is called the three-point central difference formula, and
<math display="block">f'(x) = \frac{f(x-2h) - 4f(x-h) + 3f(x)}{2h} + O(h^2),</math>
which is called the three-point backward difference formula.

By a similar approach, the five-point midpoint approximation formula can be derived as<ref>Abramowitz & Stegun, Table 25.2.</ref>
<math display="block">f'(x) = \frac{f(x-2h) - 8f(x-h) + 8f(x+h) - f(x+2h)}{12h} + O(h^4).</math>

=== Numerical example ===
Consider approximating the derivative of <math>f(x) = x \sin{x}</math> at the point <math>x_{0} = \frac{\pi}{4}</math>. Since <math>f'(x) = \sin{x} + x\cos{x}</math>, the exact value is
<math display="block">f'\left(\frac{\pi}{4}\right) = \sin{\frac{\pi}{4}} + \frac{\pi}{4}\cos{\frac{\pi}{4}} = \frac{1}{\sqrt{2}} + \frac{\pi}{4\sqrt{2}} \approx 1.2624671484563432.</math>

{| class=wikitable style="border: none;"
! scope=col | Formula
! scope=col | Step Size (h)
! scope=col | Approximation
! scope=col | Absolute Error
|-
| rowspan=4 | Three-point forward difference formula || <math>10^{-1}</math> || <math>1.2719084899816118</math> || <math>9.441 \times 10^{-3}</math>
|-
| <math>10^{-2}</math> || <math>1.2625569346253918</math> || <math>8.978 \times 10^{-5}</math>
|-
| <math>10^{-3}</math> || <math>1.2624680412510747</math> || <math>8.927 \times 10^{-7}</math>
|-
| <math>10^{-4}</math> || <math>1.2624671573796542</math> || <math>8.923 \times 10^{-9}</math>
|-
| rowspan=4 | Three-point backward difference formula || <math>10^{-1}</math> || <math>1.2719084899816118</math> || <math>8.307 \times 10^{-3}</math>
|-
| <math>10^{-2}</math> || <math>1.2625569346253918</math> || <math>8.864 \times 10^{-5}</math>
|-
| <math>10^{-3}</math> || <math>1.2624680412510747</math> || <math>8.916 \times 10^{-7}</math>
|-
| <math>10^{-4}</math> || <math>1.2624671573796542</math> || <math>8.922 \times 10^{-9}</math>
|-
| rowspan=4 | Three-point central difference formula || <math>10^{-1}</math> || <math>1.2719084899816118</math> || <math>4.457 \times 10^{-3}</math>
|-
| <math>10^{-2}</math> || <math>1.2625569346253918</math> || <math>4.461 \times 10^{-5}</math>
|-
| <math>10^{-3}</math> || <math>1.2624680412510747</math> || <math>4.461 \times 10^{-7}</math>
|-
| <math>10^{-4}</math> || <math>1.2624671573796542</math> || <math>4.461 \times 10^{-9}</math>
|}


=== Code ===
The following is an example of a [[Python (programming language)|Python]] implementation for finding derivatives numerically for <math>f(x) = \frac{2x}{1 + \sqrt{x}}</math> using the various three-point difference formulas at <math>x_{0} = 4</math>. The function <code>func</code> has derivative <code>func_prime</code>.
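
A minimal listing consistent with that description is sketched below; the names <code>func</code> and <code>func_prime</code> follow the text, while the loop and output format are illustrative:
<syntaxhighlight lang="python">
import math

def func(x):
    """f(x) = 2x / (1 + sqrt(x))."""
    return 2 * x / (1 + math.sqrt(x))

def func_prime(x):
    """Exact derivative: f'(x) = (2 + sqrt(x)) / (1 + sqrt(x))^2."""
    return (2 + math.sqrt(x)) / (1 + math.sqrt(x)) ** 2

def forward(f, x, h):
    """Three-point forward difference formula."""
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

def backward(f, x, h):
    """Three-point backward difference formula."""
    return (f(x - 2 * h) - 4 * f(x - h) + 3 * f(x)) / (2 * h)

def central(f, x, h):
    """Three-point central difference formula."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 4.0
exact = func_prime(x0)
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    for name, rule in (("forward", forward), ("backward", backward), ("central", central)):
        approx = rule(func, x0, h)
        print(f"h={h:g}  {name:8s}  approx={approx:.16f}  error={abs(approx - exact):.3e}")
</syntaxhighlight>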


== Higher derivatives ==
Using Taylor series, it is possible to derive formulas to approximate the second (and higher-order) derivatives of a general function. For a function <math>f(x)</math> and some number <math>h>0</math>, expanding <math>f</math> about the base point <math>x</math> and evaluating at <math>x+h</math> and <math>x-h</math> gives
 
<math display="block">
f(x + h) = f(x) + f'(x)h + \frac{1}{2}f''(x)h^{2} + \frac{1}{6}f'''(x)h^{3} + \frac{1}{24}f^{(4)}(\xi_{1})h^{4}
</math>
and
<math display="block">
f(x - h) = f(x) - f'(x)h + \frac{1}{2}f''(x)h^{2} - \frac{1}{6}f'''(x)h^{3} + \frac{1}{24}f^{(4)}(\xi_{2})h^{4}
</math>
where <math>x - h < \xi_{2} < x < \xi_{1} < x + h</math>. Adding these two equations gives
 
<math display="block">
f(x + h) + f(x - h) = 2f(x) + f''(x)h^{2} + \frac{1}{24} \left[ f^{(4)}(\xi_{1}) + f^{(4)}(\xi_{2})\right]h^{4}.
</math>
 
If <math>f^{(4)}</math> is continuous on <math>[x-h, x+h]</math>, then <math>\frac{1}{2}\left[ f^{(4)}(\xi_{1}) + f^{(4)}(\xi_{2})\right]</math> lies between <math>f^{(4)}(\xi_{1})</math> and <math>f^{(4)}(\xi_{2})</math>, so the [[Intermediate Value Theorem|intermediate value theorem]] guarantees a number <math>\xi</math> between <math>\xi_{1}</math> and <math>\xi_{2}</math> such that <math>f^{(4)}(\xi) = \frac{1}{2}\left[ f^{(4)}(\xi_{1}) + f^{(4)}(\xi_{2})\right]</math>. We can therefore write
 
<math display="block">
f''(x) = \frac{1}{h^{2}} \left[f(x - h) - 2f(x) + f(x+h) \right] - \frac{h^2}{12}f^{(4)}(\xi).
</math>
 
where <math>x - h < \xi < x + h</math>.
 
=== Numerical example ===
Consider approximating the second derivative of the function <math>f(x) = e^{x} \sin{x}</math> at the point <math>x_{0} = \frac{\pi}{4}</math>.
 
Since <math>f''(x) = 2e^{x} \cos{x}</math>, the exact value is <math>f''(\frac{\pi}{4}) = \sqrt{2}e^{\frac{\pi}{4}} \approx 3.1017663</math>.
 
{| class=wikitable style="border: none;"
! scope=col | Step Size (h)
! scope=col | Approximation
! scope=col | Absolute Error
|-
| <math>10^{-1}</math> || <math>3.096593338003672</math> || <math>5.17305 \times 10^{-3}</math>
|-
| <math>10^{-2}</math> || <math>3.1017146973844056</math> || <math>5.16964 \times 10^{-5}</math>
|-
| <math>10^{-3}</math> || <math>3.1017658768117684</math> || <math>5.17024 \times 10^{-7}</math>
|-
| <math>10^{-4}</math> || <math>3.1017663770782633</math> || <math>1.67577 \times 10^{-8}</math>
|}
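
A minimal Python sketch of this computation, using the central second-difference formula derived above with the function and point of this example:
<syntaxhighlight lang="python">
import math

def func(x):
    """f(x) = exp(x) * sin(x)."""
    return math.exp(x) * math.sin(x)

def second_derivative(f, x, h):
    """Central second difference: (f(x-h) - 2 f(x) + f(x+h)) / h^2."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h ** 2

x0 = math.pi / 4
exact = 2 * math.exp(x0) * math.cos(x0)  # f''(x) = 2 exp(x) cos(x)
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    approx = second_derivative(func, x0, h)
    print(f"h={h:g}  approx={approx:.16f}  error={abs(approx - exact):.5e}")
</syntaxhighlight>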
 
=== Arbitrary derivatives ===
Using Newton's difference quotient,
<math display="block">f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}</math>
<math display="block">f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}</math>
the following can be shown (for {{math|''n'' > 0}}):
<math display="block">f^{(n)}(x) = \lim_{h \to 0} \frac{1}{h^n} \sum_{k=0}^{n} (-1)^{k+n} \binom{n}{k} f(x + kh).</math>
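
For illustration, the following minimal Python sketch evaluates this sum with a fixed small {{mvar|h}} standing in for the limit (the step size and test function are illustrative; the approximation degrades rapidly as {{mvar|n}} grows):
<syntaxhighlight lang="python">
import math

def nth_derivative(f, x, n, h=1e-2):
    """Approximate the n-th derivative using the forward-difference sum
    f^(n)(x) ~ (1/h^n) * sum_{k=0}^{n} (-1)^(k+n) * C(n,k) * f(x + k h)."""
    total = 0.0
    for k in range(n + 1):
        total += (-1) ** (k + n) * math.comb(n, k) * f(x + k * h)
    return total / h ** n

# Example: the third derivative of sin at 0 is -cos(0) = -1.
print(nth_derivative(math.sin, 0.0, 3))  # close to -1
</syntaxhighlight>
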
==Complex-variable methods==


The classical finite-difference approximations for numerical differentiation are ill-conditioned. However, if <math>f</math> is a [[holomorphic function]], real-valued on the real line, which can be evaluated at points in the [[complex plane]] near <math>x</math>, then there are [[Numerical stability|stable]] methods. For example,<ref name=SquireTrapp1>Using Complex Variables to Estimate Derivatives of Real Functions, W. Squire, G. Trapp – SIAM REVIEW, 1998.</ref> the first derivative can be calculated by the complex-step derivative formula:<ref>{{cite journal | last1 = Martins | first1 = J. R. R. A. | first2 = P. | last2 = Sturdza | first3 = J. J. | last3 = Alonso | year = 2003 | citeseerx=10.1.1.141.8002 | title = The Complex-Step Derivative Approximation | journal = ACM Transactions on Mathematical Software | volume = 29 | issue = 3 | pages = 245–262 | doi=10.1145/838250.838251| s2cid = 7022422 }}</ref><ref>[https://sinews.siam.org/Details-Page/differentiation-without-a-difference Differentiation With(out) a Difference] by [[Nicholas Higham]]</ref><ref>[https://blogs.mathworks.com/cleve/2013/10/14/complex-step-differentiation/ article] from [[MathWorks]] blog, posted by [[Cleve Moler]]</ref>
<math display="block">f'(x) = \frac{\Im(f(x + \mathrm{i}h))}{h} + O(h^2), \quad \mathrm{i^2}:=-1.</math>
<math display="block">f'(x) = \frac{\Im(f(x + \mathrm{i}h))}{h} + O(h^2), \quad \mathrm{i^2}:=-1.</math>


The recommended step size to obtain accurate derivatives for a range of conditions is <math>h = 10^{-200}</math>.<ref name="edo2021">{{Cite book|url= http://flowlab.groups.et.byu.net/mdobook.pdf | title=Engineering Design Optimization | last1=Martins|first1=Joaquim R. R. A. | last2=Ning|first2=Andrew | date=2021-10-01 | publisher=Cambridge University Press | isbn=978-1108833417|language=en}}</ref>
This formula can be obtained by [[Taylor series]] expansion:
<math display="block">f(x+\mathrm{i}h) = f(x) + \mathrm{i}h f'(x) - \tfrac{1}{2!} h^2 f''(x) - \tfrac{\mathrm{i}}{3!} h^3 f^{(3)}(x) + \cdots.</math>
<math display="block">f(x+\mathrm{i}h) = f(x) + \mathrm{i}h f'(x) - \tfrac{1}{2!} h^2 f''(x) - \tfrac{\mathrm{i}}{3!} h^3 f^{(3)}(x) + \cdots.</math>

The complex-step derivative formula is only valid for calculating first-order derivatives. A generalization of the above for calculating derivatives of any order employs [[multicomplex number]]s, resulting in multicomplex derivatives.
<math display="block">f^{(n)}(x) \approx \frac{\mathcal{C}_{n^2-1}^{(n)}\left(f\left(x + \mathrm{i}^{(1)}h + \cdots + \mathrm{i}^{(n)}h\right)\right)}{h^n},</math>
where the <math>\mathrm{i}^{(k)}</math> denote the multicomplex imaginary units; <math>\mathrm{i}^{(1)} \equiv \mathrm{i}</math>. The <math>\mathcal{C}_{k}^{(n)}</math> operator extracts the <math>k</math>th component of a multicomplex number of level <math>n</math>; e.g., <math>\mathcal{C}_{0}^{(n)}</math> extracts the real component and <math>\mathcal{C}_{n^2-1}^{(n)}</math> extracts the last, "most imaginary" component. The method can be applied to mixed derivatives, e.g. for a second-order derivative
<math display="block">\frac{\partial^2 f(x,y)}{\partial x \, \partial y} \approx \frac{\mathcal{C}_{3}^{(2)}\left(f\left(x + \mathrm{i}^{(1)}h,\ y + \mathrm{i}^{(2)}h\right)\right)}{h^2}.</math>

A C++ implementation of multicomplex arithmetics is available.

In general, derivatives of any order can be calculated using [[Cauchy's integral formula]]:<ref>Ablowitz, M. J., Fokas, A. S. (2003). ''Complex Variables: Introduction and Applications''. Cambridge University Press. See Theorem 2.6.2.</ref>
<math display="block">f^{(n)}(a) = \frac{n!}{2\pi \mathrm{i}} \oint_{\gamma} \frac{f(z)}{(z - a)^{n+1}}\, \mathrm{d}z,</math>
where the integration is done numerically.
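
For illustration, a minimal Python sketch that discretizes this contour integral over a circle of radius <math>r</math> around <math>a</math> using the trapezoidal rule (the radius, node count, and test function are all illustrative choices):
<syntaxhighlight lang="python">
import cmath
import math

def cauchy_derivative(f, a, n, r=0.5, m=64):
    """Approximate f^(n)(a) = n!/(2 pi i) * contour integral of f(z)/(z-a)^(n+1)
    over the circle z = a + r e^(i t), sampled at m equally spaced nodes."""
    total = 0j
    for k in range(m):
        t = 2 * math.pi * k / m
        z = a + r * cmath.exp(1j * t)
        # dz = i r e^(i t) dt; the trapezoidal rule converges rapidly here.
        total += f(z) / (z - a) ** (n + 1) * 1j * r * cmath.exp(1j * t)
    total *= 2 * math.pi / m
    return math.factorial(n) * total / (2j * math.pi)

# Example: the fourth derivative of exp at 0 is 1.
print(cauchy_derivative(cmath.exp, 0.0, 4).real)  # approximately 1.0
</syntaxhighlight>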

Using complex variables for numerical differentiation was started by Lyness and Moler in 1967.<ref>Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". ''SIAM Journal on Numerical Analysis''. 4 (2): 202–210.</ref> Their algorithm is applicable to higher-order derivatives.

A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner. An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg.<ref name=Fornberg1/>

==Differential quadrature==
Differential quadrature is the approximation of derivatives by using weighted sums of function values.<ref>Chang Shu (2000), ''Differential Quadrature and Its Application in Engineering: Engineering Applications'', Springer.</ref><ref>Yingyan Zhang (2009), ''Advanced Differential Quadrature Methods'', CRC Press.</ref> Differential quadrature is of practical interest because it allows one to compute derivatives from noisy data. The name is in analogy with [[numerical integration|quadrature]], meaning numerical integration, where weighted sums are used in methods such as [[Simpson's rule]] or the [[trapezoidal rule]]. There are various methods for determining the weight coefficients, for example, the [[Savitzky–Golay filter]]. Differential quadrature is used to solve [[partial differential equation]]s. There are further methods for computing derivatives from noisy data.

==References==
{{Reflist}}