<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Finite_difference_method</id>
	<title>Finite difference method - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Finite_difference_method"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Finite_difference_method&amp;action=history"/>
	<updated>2026-05-04T19:01:10Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Finite_difference_method&amp;diff=3193970&amp;oldid=prev</id>
		<title>imported&gt;Robar Arnos: k = \Delta t, so the inclusion of both \Delta t and k is incorrect.</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Finite_difference_method&amp;diff=3193970&amp;oldid=prev"/>
		<updated>2025-05-20T00:59:24Z</updated>

		<summary type="html">&lt;p&gt;k = \Delta t, so the inclusion of both \Delta t and k is incorrect.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Class of numerical techniques}}&lt;br /&gt;
{{Differential equations}}&lt;br /&gt;
&lt;br /&gt;
In [[numerical analysis]], &amp;#039;&amp;#039;&amp;#039;finite-difference methods&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;FDM&amp;#039;&amp;#039;&amp;#039;) are a class of numerical techniques for solving [[differential equations]] by approximating [[Derivative|derivatives]] with [[Finite difference approximation|finite differences]]. Both the spatial domain and time domain (if applicable) are [[Discretization|discretized]], or broken into a finite number of intervals, and the values of the solution at the end points of the intervals are approximated by solving algebraic equations containing finite differences and values from nearby points.&lt;br /&gt;
&lt;br /&gt;
Finite difference methods convert [[ordinary differential equations]] (ODE) or [[partial differential equations]] (PDE), which may be [[Nonlinear partial differential equation|nonlinear]], into a [[system of linear equations]] that can be solved by [[matrix algebra]] techniques. Modern computers can perform these [[linear algebra]] computations efficiently, and this, along with their relative ease of implementation, has led to the widespread use of FDM in modern numerical analysis.&amp;lt;ref name=&amp;quot;GrossmannRoos2007&amp;quot; /&amp;gt;&lt;br /&gt;
Today, FDMs are one of the most common approaches to the numerical solution of PDE, along with [[Finite element method|finite element methods]].&amp;lt;ref name=&amp;quot;GrossmannRoos2007&amp;quot;&amp;gt;{{cite book|author1=Christian Grossmann|author2=Hans-G. Roos| author3=Martin Stynes|title=Numerical Treatment of Partial Differential Equations| url=https://archive.org/details/numericaltreatme00gros_820|url-access=limited| year=2007| publisher=Springer Science &amp;amp; Business Media| isbn=978-3-540-71584-9|page=[https://archive.org/details/numericaltreatme00gros_820/page/n34 23]}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Derive difference quotient from Taylor&amp;#039;s polynomial ==&lt;br /&gt;
&lt;br /&gt;
For a &amp;#039;&amp;#039;n&amp;#039;&amp;#039;-times differentiable function, by [[Taylor&amp;#039;s theorem]] the [[Taylor series]] expansion is given as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f(x_0 + h) = f(x_0) + \frac{f&amp;#039;(x_0)}{1!}h + \frac{f^{(2)}(x_0)}{2!}h^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}h^n + R_n(x),&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;#039;&amp;#039;n&amp;#039;&amp;#039;! denotes the [[factorial]] of &amp;#039;&amp;#039;n&amp;#039;&amp;#039;, and &amp;#039;&amp;#039;R&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;n&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt;(&amp;#039;&amp;#039;x&amp;#039;&amp;#039;) is a remainder term, denoting the difference between the Taylor polynomial of degree &amp;#039;&amp;#039;n&amp;#039;&amp;#039; and the original function.&lt;br /&gt;
&lt;br /&gt;
An approximation for the first derivative of the function &amp;#039;&amp;#039;f&amp;#039;&amp;#039; can be derived by first truncating the Taylor polynomial to first order, keeping the remainder:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f(x_0 + h) = f(x_0) + f&amp;#039;(x_0)h + R_1(x).&amp;lt;/math&amp;gt;&lt;br /&gt;
Dividing across by &amp;#039;&amp;#039;h&amp;#039;&amp;#039; gives:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;{f(x_0+h)\over h} = {f(x_0)\over h} + f&amp;#039;(x_0)+{R_1(x)\over h} &amp;lt;/math&amp;gt;&lt;br /&gt;
Solving for &amp;lt;math&amp;gt; f&amp;#039;(x_0) &amp;lt;/math&amp;gt;:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f&amp;#039;(x_0) = {f(x_0 +h)-f(x_0)\over h} - {R_1(x)\over h}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;R_1(x)&amp;lt;/math&amp;gt; is sufficiently small, the approximation of the first derivative of &amp;#039;&amp;#039;f&amp;#039;&amp;#039; is:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f&amp;#039;(x_0)\approx {f(x_0+h)-f(x_0)\over h}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is similar to the definition of the derivative, which is:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f&amp;#039;(x_0)=\lim_{h\to 0}\frac{f(x_0+h)-f(x_0)}{h}.&amp;lt;/math&amp;gt;&lt;br /&gt;
except that the limit towards zero is not taken; the finite step &amp;#039;&amp;#039;h&amp;#039;&amp;#039; gives the method its name.&lt;br /&gt;
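As an illustrative aside, this forward-difference quotient is straightforward to evaluate numerically. In the following minimal Python sketch, the test function &amp;lt;math&amp;gt;f(x)=\sin x&amp;lt;/math&amp;gt;, the point &amp;lt;math&amp;gt;x_0 = 1&amp;lt;/math&amp;gt; and the step sizes are arbitrary choices made only for this example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
def forward_difference(f, x0, h):&lt;br /&gt;
    # forward-difference quotient (f(x0 + h) - f(x0)) / h&lt;br /&gt;
    return (f(x0 + h) - f(x0)) / h&lt;br /&gt;
&lt;br /&gt;
x0 = 1.0&lt;br /&gt;
exact = math.cos(x0)  # exact derivative of sin(x) at x0&lt;br /&gt;
for h in (0.1, 0.01, 0.001):&lt;br /&gt;
    approx = forward_difference(math.sin, x0, h)&lt;br /&gt;
    print(h, approx, abs(approx - exact))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The printed error shrinks roughly in proportion to &amp;#039;&amp;#039;h&amp;#039;&amp;#039;, consistent with the first-order accuracy discussed in the next section.&lt;br /&gt;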
&lt;br /&gt;
== Accuracy and order ==&lt;br /&gt;
{{see also|Finite difference coefficient}}&lt;br /&gt;
&lt;br /&gt;
The error in a method&amp;#039;s solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are [[round-off error]], the loss of precision due to computer rounding of decimal quantities, and [[truncation error]] or [[discretization error]], the difference between the exact solution of the original differential equation and the exact quantity assuming perfect arithmetic (no round-off).&lt;br /&gt;
&lt;br /&gt;
[[File:Finite Differences.svg|right|thumb|The finite difference method relies on discretizing a function on a grid.]]&lt;br /&gt;
To use a finite difference method to approximate the solution to a problem, one must first discretize the problem&amp;#039;s domain. This is usually done by dividing the domain into a uniform grid (see image). This means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a &amp;quot;time-stepping&amp;quot; manner.&lt;br /&gt;
&lt;br /&gt;
An expression of general interest is the [[local truncation error]] of a method. Typically expressed using [[Big-O notation]], local truncation error refers to the error from a single application of a method. That is, it is the quantity &amp;lt;math&amp;gt;f&amp;#039;(x_i) - f&amp;#039;_i&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;f&amp;#039;(x_i)&amp;lt;/math&amp;gt; refers to the exact value and &amp;lt;math&amp;gt;f&amp;#039;_i&amp;lt;/math&amp;gt; to the numerical approximation. The remainder term of the Taylor polynomial can be used to analyze [[local truncation error]]. Using the [[Lagrange form]] of the remainder from the Taylor polynomial for &amp;lt;math&amp;gt;f(x_0 + h)&amp;lt;/math&amp;gt;, which is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 R_n(x_0 + h) = \frac{f^{(n+1)}(\xi)}{(n+1)!} h^{n+1} \, , \quad x_0 &amp;lt; \xi &amp;lt; x_0 + h,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative and expanding about the grid point &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; with step size &amp;lt;math&amp;gt;h&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; f(x_i + h) = f(x_i) + f&amp;#039;(x_i)h + \frac{f&amp;#039;&amp;#039;(\xi)}{2!} h^{2}, &amp;lt;/math&amp;gt;&lt;br /&gt;
and with some algebraic manipulation, this leads to&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{f(x_i + h) - f(x_i)}{h} = f&amp;#039;(x_i) + \frac{f&amp;#039;&amp;#039;(\xi)}{2!} h. &amp;lt;/math&amp;gt;&lt;br /&gt;
The quantity on the left is the approximation from the finite difference method, and the quantity on the right is the exact quantity of interest plus a remainder; that remainder is the local truncation error. A final expression of this example and its order is:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{f(x_i + h) - f(x_i)}{h} = f&amp;#039;(x_i) + O(h). &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the local truncation error is proportional to the step size. The accuracy and the run time of an FDM simulation depend on the choice of discretization equation and on the step sizes in time and space. Smaller step sizes improve the accuracy of the computed solution but also significantly increase the number of steps, and hence the simulation time.&amp;lt;ref name=IserlesA_1996&amp;gt;{{cite book|author1=Arieh Iserles|title= A first course in the numerical analysis of differential equations |url=https://archive.org/details/firstcoursenumer00iser|url-access=limited|year=2008|publisher= Cambridge University Press |isbn=9780521734905|page=[https://archive.org/details/firstcoursenumer00iser/page/n44 23]}}&amp;lt;/ref&amp;gt; Therefore, a reasonable balance between accuracy and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice. However, time steps which are too large may create instabilities and degrade the quality of the results.&amp;lt;ref name=HoffmanJD_2001&amp;gt;{{cite book| author1=Hoffman JD|author2=Frankel S|year=2001|title=Numerical methods for engineers and scientists|publisher=CRC Press, Boca Raton}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=Jaluria_CM_1994&amp;gt;{{cite journal|author1=Jaluria Y|author2=Atluri S|year=1994|title=Computational heat transfer| journal= Computational Mechanics| volume=14| issue=5| pages=385–386| doi=10.1007/BF00377593| bibcode=1994CompM..14..385J|s2cid=119502676 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The [[von Neumann stability analysis|von Neumann]] and [[Courant–Friedrichs–Lewy condition|Courant-Friedrichs-Lewy]] criteria are often evaluated to determine the numerical model stability.&amp;lt;ref name=HoffmanJD_2001 /&amp;gt;&amp;lt;ref name=Jaluria_CM_1994 /&amp;gt;&amp;lt;ref&amp;gt;{{cite book|author1=Majumdar P|year=2005|title=Computational methods for heat and mass transfer|edition=1st|publisher=Taylor and Francis, New York}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;SmithGD_1985&amp;quot;&amp;gt;{{cite book|author1=Smith GD|year=1985|title=Numerical solution of partial differential equations: finite difference methods|publisher=Oxford University Press|edition=3rd}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Example: ordinary differential equation ==&lt;br /&gt;
For example, consider the ordinary differential equation&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; u&amp;#039;(x) = 3u(x) + 2. &amp;lt;/math&amp;gt;&lt;br /&gt;
The [[Euler method]] for solving this equation uses the finite difference quotient&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\frac{u(x+h) - u(x)}{h} \approx u&amp;#039;(x)&amp;lt;/math&amp;gt;&lt;br /&gt;
to approximate the differential equation by first substituting it for u&amp;#039;(x) and then applying a little algebra (multiplying both sides by h, and then adding u(x) to both sides) to get&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; u(x+h) \approx u(x) + h(3u(x)+2). &amp;lt;/math&amp;gt;&lt;br /&gt;
The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation.&lt;br /&gt;
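As a minimal sketch of how this finite-difference equation is stepped forward in practice, the following Python lines assume an initial condition u(0) = 1, a step size h = 0.01 and an interval of length 1; these values are illustrative choices, not part of the example above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Euler time stepping for du/dx = 3*u + 2, using the&lt;br /&gt;
# finite-difference equation u(x + h) = u(x) + h*(3*u(x) + 2).&lt;br /&gt;
h = 0.01   # step size (illustrative choice)&lt;br /&gt;
u = 1.0    # assumed initial condition u(0) = 1&lt;br /&gt;
for n in range(100):   # 100 steps of size h advance from x = 0 to x = 1&lt;br /&gt;
    u = u + h * (3.0 * u + 2.0)&lt;br /&gt;
print(u)   # approximation of u(1)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For comparison, with the assumed initial condition u(0) = 1 the exact solution is &amp;lt;math&amp;gt;u(x) = \tfrac{5}{3}e^{3x} - \tfrac{2}{3}&amp;lt;/math&amp;gt;, so the printed approximation can be compared with the exact value &amp;lt;math&amp;gt;u(1) \approx 32.8&amp;lt;/math&amp;gt;.&lt;br /&gt;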
&lt;br /&gt;
==  Example: The heat equation ==&lt;br /&gt;
&lt;br /&gt;
Consider the normalized [[heat equation]] in one dimension, with homogeneous [[Dirichlet boundary condition]]s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \begin{cases}&lt;br /&gt;
U_t = U_{xx} \\&lt;br /&gt;
U(0,t) = U(1,t) = 0 &amp;amp; \text{(boundary condition)} \\&lt;br /&gt;
U(x,0) = U_0(x)     &amp;amp; \text{(initial condition)}&lt;br /&gt;
\end{cases} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One way to numerically solve this equation is to approximate all the derivatives by finite differences. First partition the domain in space using a mesh &amp;lt;math&amp;gt; x_0, \dots, x_J &amp;lt;/math&amp;gt; and in time using a mesh &amp;lt;math&amp;gt; t_0, \dots, t_N &amp;lt;/math&amp;gt;. Assume a uniform partition both in space and in time, so the difference between two consecutive space points will be &amp;#039;&amp;#039;h&amp;#039;&amp;#039; and between two consecutive time points will be &amp;#039;&amp;#039;k&amp;#039;&amp;#039;. The values&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; u_{j}^n \approx u(x_j,t_n) &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
will represent the numerical approximations of the exact solution at the grid points.&lt;br /&gt;
&lt;br /&gt;
===Explicit method===&lt;br /&gt;
[[File:Explicit method-stencil.svg|right|thumb|The [[Stencil (numerical analysis)|stencil]] for the most common explicit method for the heat equation.]]&lt;br /&gt;
Using a [[forward difference]] &amp;#039;&amp;#039;&amp;#039;at time &amp;lt;math&amp;gt; t_n &amp;lt;/math&amp;gt;&amp;#039;&amp;#039;&amp;#039; and a second-order [[central difference]] for the space derivative at position &amp;lt;math&amp;gt; x_j &amp;lt;/math&amp;gt; ([[FTCS scheme|FTCS]]) gives the recurrence equation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{u_{j}^{n+1} - u_{j}^{n}}{k} = \frac{u_{j+1}^n - 2u_{j}^n + u_{j-1}^n}{h^2}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is an [[explicit method]] for solving the one-dimensional [[heat equation]].&lt;br /&gt;
&lt;br /&gt;
One can obtain &amp;lt;math&amp;gt; u_j^{n+1} &amp;lt;/math&amp;gt; from the other values this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; u_{j}^{n+1} = (1-2r)u_{j}^{n} + ru_{j-1}^{n} + ru_{j+1}^{n}  &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; r=k/h^2. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, with this recurrence relation, and knowing the values at time &amp;#039;&amp;#039;n&amp;#039;&amp;#039;, one can obtain the corresponding values at time &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+1. &amp;lt;math&amp;gt; u_0^n &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; u_J^n &amp;lt;/math&amp;gt; must be replaced by the boundary conditions; in this example they are both 0.&lt;br /&gt;
&lt;br /&gt;
This explicit method is known to be [[numerically stable]] and [[limit of a sequence|convergent]] whenever &amp;lt;math&amp;gt; r\le 1/2 &amp;lt;/math&amp;gt;.&amp;lt;ref&amp;gt;Crank, J. &amp;#039;&amp;#039;The Mathematics of Diffusion&amp;#039;&amp;#039;. 2nd Edition, Oxford, 1975, p. 143.&amp;lt;/ref&amp;gt; The numerical errors are proportional to the time step and the square of the space step:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \Delta u = O(k)+O(h^2)  &amp;lt;/math&amp;gt;&lt;br /&gt;
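A concrete illustration of this explicit time-stepping is sketched below in Python; the initial condition &amp;lt;math&amp;gt;U_0(x) = \sin(\pi x)&amp;lt;/math&amp;gt;, the grid sizes and the number of time steps are assumptions made only for this sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
J, N = 20, 400           # number of space intervals and time steps (illustrative)&lt;br /&gt;
h, k = 1.0 / J, 0.001    # space and time step sizes&lt;br /&gt;
r = k / h**2             # r = 0.4 here, within the stability bound of 1/2&lt;br /&gt;
&lt;br /&gt;
u = [math.sin(math.pi * j * h) for j in range(J + 1)]  # assumed initial condition&lt;br /&gt;
for n in range(N):&lt;br /&gt;
    new = [0.0] * (J + 1)            # boundary values stay 0&lt;br /&gt;
    for j in range(1, J):&lt;br /&gt;
        new[j] = (1 - 2*r) * u[j] + r * (u[j-1] + u[j+1])&lt;br /&gt;
    u = new&lt;br /&gt;
print(max(u))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With these illustrative values &amp;lt;math&amp;gt;r = 0.4 \le 1/2&amp;lt;/math&amp;gt;, so the stability condition above is satisfied.&lt;br /&gt;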
&lt;br /&gt;
===Implicit method===&lt;br /&gt;
[[File:Implicit method-stencil.svg|right|thumb|The implicit method stencil.]]&lt;br /&gt;
Using the [[backward difference]] &amp;#039;&amp;#039;&amp;#039;at time &amp;lt;math&amp;gt; t_{n+1} &amp;lt;/math&amp;gt;&amp;#039;&amp;#039;&amp;#039; and a second-order central difference for the space derivative at position &amp;lt;math&amp;gt; x_j &amp;lt;/math&amp;gt; (the backward time, centered space method, &amp;quot;BTCS&amp;quot;) gives the recurrence equation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{u_{j}^{n+1} - u_{j}^{n}}{ k } =\frac{u_{j+1}^{n+1} - 2u_{j}^{n+1} + u_{j-1}^{n+1}}{h^2}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is an [[implicit method]] for solving the one-dimensional [[heat equation]].&lt;br /&gt;
&lt;br /&gt;
One can obtain &amp;lt;math&amp;gt; u_j^{n+1} &amp;lt;/math&amp;gt; from solving a system of linear equations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; (1+2r)u_j^{n+1} - r u_{j-1}^{n+1} - ru_{j+1}^{n+1}= u_j^n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scheme is always [[numerically stable]] and convergent but usually more numerically intensive than the explicit method, as it requires solving a system of linear equations at each time step. The errors are linear in the time step and quadratic in the space step:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \Delta u = O(k)+O(h^2). &amp;lt;/math&amp;gt;&lt;br /&gt;
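A sketch of the corresponding implicit update, under the same illustrative assumptions as before (initial condition &amp;lt;math&amp;gt;\sin(\pi x)&amp;lt;/math&amp;gt; and arbitrarily chosen grid sizes), is given below; a dense solver is used only for brevity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
J, N = 20, 400             # grid sizes (illustrative)&lt;br /&gt;
h, k = 1.0 / J, 0.001&lt;br /&gt;
r = k / h**2&lt;br /&gt;
&lt;br /&gt;
# tridiagonal matrix acting on the J - 1 interior unknowns&lt;br /&gt;
main = (1 + 2*r) * np.ones(J - 1)&lt;br /&gt;
off = -r * np.ones(J - 2)&lt;br /&gt;
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)&lt;br /&gt;
&lt;br /&gt;
x = np.linspace(0.0, 1.0, J + 1)&lt;br /&gt;
u = np.sin(np.pi * x)      # assumed initial condition&lt;br /&gt;
u[0] = u[J] = 0.0          # homogeneous Dirichlet boundary values&lt;br /&gt;
for n in range(N):&lt;br /&gt;
    # boundary terms vanish because u_0 = u_J = 0&lt;br /&gt;
    u[1:J] = np.linalg.solve(A, u[1:J])&lt;br /&gt;
print(u.max())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In practice one would typically use a dedicated tridiagonal (Thomas) solver or a sparse factorization rather than a dense solve at every step.&lt;br /&gt;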
&lt;br /&gt;
===Crank&amp;amp;ndash;Nicolson method===&lt;br /&gt;
Finally, using the central difference at time &amp;lt;math&amp;gt; t_{n+1/2} &amp;lt;/math&amp;gt; and a second-order central difference for the space derivative at position &amp;lt;math&amp;gt; x_j &amp;lt;/math&amp;gt; (&amp;quot;CTCS&amp;quot;) gives the recurrence equation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{u_j^{n+1} - u_j^{n}}{ k } = \frac{1}{2} \left(\frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2}+\frac{u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}}{h^2}\right). &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This formula is known as the [[Crank&amp;amp;ndash;Nicolson method]].&lt;br /&gt;
[[File:Crank-Nicolson-stencil.svg|right|thumb|The Crank&amp;amp;ndash;Nicolson stencil.]]&lt;br /&gt;
&lt;br /&gt;
One can obtain &amp;lt;math&amp;gt; u_j^{n+1} &amp;lt;/math&amp;gt; from solving a system of linear equations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; (2+2r)u_j^{n+1} - ru_{j-1}^{n+1} - ru_{j+1}^{n+1}= (2-2r)u_j^n + ru_{j-1}^n + ru_{j+1}^n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scheme is always [[numerically stable]] and convergent but usually more numerically intensive, as it requires solving a system of linear equations at each time step. The errors are quadratic in both the time step and the space step:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \Delta u = O(k^2)+O(h^2). &amp;lt;/math&amp;gt;&lt;br /&gt;
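Under the same illustrative assumptions, a Crank&amp;amp;ndash;Nicolson step can be written as the linear solve &amp;lt;math&amp;gt;A u^{n+1} = B u^{n}&amp;lt;/math&amp;gt; over the interior unknowns; the following sketch again uses a dense solver only for brevity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
J, N = 20, 400             # grid sizes (illustrative)&lt;br /&gt;
h, k = 1.0 / J, 0.001&lt;br /&gt;
r = k / h**2&lt;br /&gt;
&lt;br /&gt;
off = r * np.ones(J - 2)&lt;br /&gt;
A = np.diag((2 + 2*r) * np.ones(J - 1)) - np.diag(off, 1) - np.diag(off, -1)&lt;br /&gt;
B = np.diag((2 - 2*r) * np.ones(J - 1)) + np.diag(off, 1) + np.diag(off, -1)&lt;br /&gt;
&lt;br /&gt;
x = np.linspace(0.0, 1.0, J + 1)&lt;br /&gt;
u = np.sin(np.pi * x)      # assumed initial condition&lt;br /&gt;
u[0] = u[J] = 0.0          # homogeneous Dirichlet boundary values&lt;br /&gt;
for n in range(N):&lt;br /&gt;
    u[1:J] = np.linalg.solve(A, B @ u[1:J])&lt;br /&gt;
print(u.max())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;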
&lt;br /&gt;
===Comparison===&lt;br /&gt;
&lt;br /&gt;
To summarize, usually the [[Crank–Nicolson method|Crank&amp;amp;ndash;Nicolson scheme]] is the most accurate scheme for small time steps. For larger time steps, the implicit scheme works better since it is less computationally demanding. The explicit scheme is the least accurate and can be unstable, but is also the easiest to implement and the least numerically intensive. &lt;br /&gt;
&lt;br /&gt;
Here is an example. The figures below present the solutions given by the above methods to approximate the heat equation &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;U_t = \alpha U_{xx}, \quad \alpha = \frac{1}{\pi^2},&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with the boundary condition&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;U(0, t) = U(1, t) = 0.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the initial condition &amp;lt;math&amp;gt;U(x, 0) = \tfrac{1}{\pi^2}\sin(\pi x)&amp;lt;/math&amp;gt;. The exact solution is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;U(x, t) = \frac{1}{\pi^2}e^{-t}\sin(\pi x).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
&amp;lt;!-- Essential parameters --&amp;gt;&lt;br /&gt;
| perrow = 3&lt;br /&gt;
| align     = center&lt;br /&gt;
| direction = horizontal&lt;br /&gt;
| caption_align = center&lt;br /&gt;
| width     = 260&lt;br /&gt;
&amp;lt;!-- Extra parameters --&amp;gt;&lt;br /&gt;
| header = Comparison of Finite Difference Methods&lt;br /&gt;
| header_align = center&lt;br /&gt;
| header_background = &lt;br /&gt;
| footer = &lt;br /&gt;
| footer_align = &lt;br /&gt;
| footer_background = &lt;br /&gt;
| background color =&lt;br /&gt;
&lt;br /&gt;
|image1=HeatEquationExplicitApproximate.svg&lt;br /&gt;
|width1=260&lt;br /&gt;
|caption1=Explicit method (&amp;#039;&amp;#039;not&amp;#039;&amp;#039; stable)&lt;br /&gt;
|alt1= &amp;#039;&amp;#039;c&amp;#039;&amp;#039; = 4&lt;br /&gt;
&lt;br /&gt;
|image2=HeatEquationImplicitApproximate.svg&lt;br /&gt;
|width2=260&lt;br /&gt;
|caption2=Implicit method (stable)&lt;br /&gt;
|alt2= &amp;#039;&amp;#039;c&amp;#039;&amp;#039; = 6&lt;br /&gt;
&lt;br /&gt;
|image3=HeatEquationCNApproximate.svg&lt;br /&gt;
|width3=260&lt;br /&gt;
|caption3=Crank-Nicolson method (stable)&lt;br /&gt;
|alt3= &amp;#039;&amp;#039;c&amp;#039;&amp;#039; = 8.5&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Example: The Laplace operator ==&lt;br /&gt;
The (continuous) [[Laplace operator]] in &amp;lt;math&amp;gt; n &amp;lt;/math&amp;gt;-dimensions is given by &amp;lt;math&amp;gt; \Delta u(x) = \sum_{i=1}^n \partial_i^2 u(x) &amp;lt;/math&amp;gt;.&lt;br /&gt;
The discrete Laplace operator &amp;lt;math&amp;gt; \Delta_h u &amp;lt;/math&amp;gt; depends on the dimension &amp;lt;math&amp;gt; n &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In 1D the Laplace operator is approximated as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  \Delta u(x) = u&amp;#039;&amp;#039;(x)&lt;br /&gt;
    \approx \frac{u(x-h)-2u(x)+u(x+h)}{h^2 }&lt;br /&gt;
    =: \Delta_h u(x) \,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This approximation is usually expressed via the following [[Stencil_(numerical_analysis)|stencil]]&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  \Delta_h = \frac{1}{h^2} &lt;br /&gt;
  \begin{bmatrix}&lt;br /&gt;
    1 &amp;amp; -2 &amp;amp; 1&lt;br /&gt;
  \end{bmatrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
which represents a symmetric, tridiagonal matrix.&lt;br /&gt;
For an equidistant grid one gets a [[Toeplitz matrix]].&lt;br /&gt;
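For illustration, the matrix produced by this stencil on an equidistant grid with homogeneous Dirichlet boundary values can be assembled directly; the number of interior points is an arbitrary choice in this sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
J = 5                  # number of interior grid points (illustrative)&lt;br /&gt;
h = 1.0 / (J + 1)      # equidistant grid spacing on (0, 1)&lt;br /&gt;
&lt;br /&gt;
# 1D discrete Laplacian: stencil [1, -2, 1] / h^2 at each interior point&lt;br /&gt;
L = (np.diag(-2.0 * np.ones(J))&lt;br /&gt;
     + np.diag(np.ones(J - 1), 1)&lt;br /&gt;
     + np.diag(np.ones(J - 1), -1)) / h**2&lt;br /&gt;
print(L)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Only the interior points appear as unknowns here, since homogeneous Dirichlet boundary values are assumed.&lt;br /&gt;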
&lt;br /&gt;
The 2D case shows all the characteristics of the more general n-dimensional case. Each second partial derivative needs to be approximated similarly to the 1D case:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  \begin{align}&lt;br /&gt;
  \Delta u(x,y) &amp;amp;= u_{xx}(x,y)+u_{yy}(x,y) \\&lt;br /&gt;
    &amp;amp;\approx \frac{u(x-h,y)-2u(x,y)+u(x+h,y) }{h^2}&lt;br /&gt;
      + \frac{u(x,y-h) -2u(x,y) +u(x,y+h)}{h^2} \\&lt;br /&gt;
    &amp;amp;= \frac{u(x-h,y)+u(x+h,y) -4u(x,y)+u(x,y-h)+u(x,y+h)}{h^2} \\&lt;br /&gt;
    &amp;amp;=: \Delta_h u(x, y) \,,&lt;br /&gt;
  \end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
which is usually given by the following [[Stencil_(numerical_analysis)|stencil]]&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  \Delta_h = &lt;br /&gt;
  \frac{1}{h^2} &lt;br /&gt;
  \begin{bmatrix}&lt;br /&gt;
      &amp;amp; 1 \\&lt;br /&gt;
    1 &amp;amp; -4 &amp;amp; 1 \\&lt;br /&gt;
      &amp;amp; 1 &lt;br /&gt;
  \end{bmatrix}&lt;br /&gt;
  \,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
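On a square grid with unknowns ordered row by row, the matrix corresponding to this five-point stencil can be assembled from the 1D matrix via a Kronecker sum. The sketch below repeats the 1D construction so that it is self-contained; the grid size and the homogeneous Dirichlet boundary treatment are again illustrative assumptions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
J = 5                    # interior points per direction (illustrative)&lt;br /&gt;
h = 1.0 / (J + 1)&lt;br /&gt;
&lt;br /&gt;
# 1D second-difference matrix with stencil [1, -2, 1] / h^2&lt;br /&gt;
L1 = (np.diag(-2.0 * np.ones(J))&lt;br /&gt;
      + np.diag(np.ones(J - 1), 1)&lt;br /&gt;
      + np.diag(np.ones(J - 1), -1)) / h**2&lt;br /&gt;
I = np.eye(J)&lt;br /&gt;
&lt;br /&gt;
# 2D five-point Laplacian as a Kronecker sum, acting on row-by-row ordered unknowns&lt;br /&gt;
L2 = np.kron(I, L1) + np.kron(L1, I)&lt;br /&gt;
print(L2.shape)   # (J*J, J*J)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;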
&lt;br /&gt;
=== Consistency ===&lt;br /&gt;
Consistency of the above-mentioned approximation can be shown for highly regular functions, such as &amp;lt;math&amp;gt; u \in C^4(\Omega) &amp;lt;/math&amp;gt;.&lt;br /&gt;
The statement is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  \Delta u - \Delta_h u = \mathcal{O}(h^2) \,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To prove this, one needs to substitute [[Taylor series]] expansions up to order 3 into the discrete Laplace operator.&lt;br /&gt;
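For instance, in the 1D case this substitution reads as follows: expanding &amp;lt;math&amp;gt;u(x \pm h)&amp;lt;/math&amp;gt; about &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; gives&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  u(x \pm h) = u(x) \pm u&amp;#039;(x)h + \frac{u&amp;#039;&amp;#039;(x)}{2}h^2 \pm \frac{u&amp;#039;&amp;#039;&amp;#039;(x)}{6}h^3 + \mathcal{O}(h^4) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
so the odd-order terms cancel in the sum &amp;lt;math&amp;gt;u(x-h)+u(x+h)&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  \Delta_h u(x) = \frac{u(x-h)-2u(x)+u(x+h)}{h^2} = u&amp;#039;&amp;#039;(x) + \mathcal{O}(h^2) \,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;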
&lt;br /&gt;
=== Properties ===&lt;br /&gt;
==== Subharmonic ====&lt;br /&gt;
Similar to [[Harmonic_function|continuous subharmonic functions]], one can define &amp;#039;&amp;#039;subharmonic functions&amp;#039;&amp;#039; for finite-difference approximations &amp;lt;math&amp;gt;u_h&amp;lt;/math&amp;gt; via&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  -\Delta_h u_h \leq 0 \,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Mean value ====&lt;br /&gt;
One can define a general [[Stencil_(numerical_analysis)|stencil]] of &amp;#039;&amp;#039;positive type&amp;#039;&amp;#039; via&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;  &lt;br /&gt;
  \begin{bmatrix}&lt;br /&gt;
      &amp;amp; \alpha_N \\&lt;br /&gt;
    \alpha_W &amp;amp; -\alpha_C &amp;amp; \alpha_E \\&lt;br /&gt;
      &amp;amp; \alpha_S&lt;br /&gt;
  \end{bmatrix}&lt;br /&gt;
  \,, \quad \alpha_i &amp;gt;0\,, \quad \alpha_C = \sum_{i\in \{N,E,S,W\}} \alpha_i \,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt; u_h &amp;lt;/math&amp;gt; is (discrete) subharmonic then the following &amp;#039;&amp;#039;mean value property&amp;#039;&amp;#039; holds&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  u_h(x_C) \leq \frac{ \sum_{i\in \{N,E,S,W\}} \alpha_i u_h(x_i) }{ \sum_{i\in \{N,E,S,W\}} \alpha_i } \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where the approximation is evaluated on points of the grid, and the stencil is assumed to be of positive type.&lt;br /&gt;
&lt;br /&gt;
A similar [[Harmonic_function#The_mean_value_property|mean value property]] also holds for the continuous case.&lt;br /&gt;
&lt;br /&gt;
==== Maximum principle ====&lt;br /&gt;
For a (discrete) subharmonic function &amp;lt;math&amp;gt; u_h &amp;lt;/math&amp;gt; the following holds&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
  \max_{\Omega_h} u_h \leq \max_{\partial \Omega_h} u_h \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \Omega_h, \partial\Omega_h &amp;lt;/math&amp;gt; are discretizations of the continuous domain &amp;lt;math&amp;gt; \Omega &amp;lt;/math&amp;gt;, respectively the boundary &amp;lt;math&amp;gt; \partial \Omega &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A similar [[Harmonic_function#Maximum_principle|maximum principle]] also holds for the continuous case.&lt;br /&gt;
&lt;br /&gt;
== The SBP-SAT method ==&lt;br /&gt;
The SBP-SAT (&amp;#039;&amp;#039;summation by parts - simultaneous approximation term&amp;#039;&amp;#039;) method is a stable and accurate technique for discretizing and imposing boundary conditions of a well-posed [[partial differential equation]] using high order finite differences.&amp;lt;ref&amp;gt;{{cite journal|author1=Bo Strand|year=1994|title=Summation by Parts for Finite Difference Approximations for d/dx|journal=Journal of Computational Physics|volume=110|issue=1|pages=47–67|doi= 10.1006/jcph.1994.1005|bibcode=1994JCoPh.110...47S}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite journal|author1=Mark H. Carpenter|author2=David I. Gottlieb|author3=Saul S. Abarbanel|year=1994|title=Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes |journal=Journal of Computational Physics|volume=111|issue=2|pages=220–236|doi= 10.1006/jcph.1994.1057|bibcode=1994JCoPh.111..220C|hdl=2060/19930013937 |hdl-access=free}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The method is based on finite differences where the differentiation operators exhibit [[Summation by parts|summation-by-parts]] properties. Typically, these operators consist of differentiation matrices with central difference stencils in the interior and carefully chosen one-sided boundary stencils designed to mimic integration-by-parts in the discrete setting. Using the SAT technique, the boundary conditions of the PDE are imposed weakly, where the boundary values are &amp;quot;pulled&amp;quot; towards the desired conditions rather than exactly fulfilled. If the tuning parameters (inherent to the SAT technique) are chosen properly, the resulting system of ODEs will exhibit an energy behavior similar to that of the continuous PDE, i.e. the system has no non-physical energy growth. This guarantees stability if an integration scheme with a stability region that includes parts of the imaginary axis, such as the fourth-order [[Runge-Kutta method]], is used. This makes the SAT technique an attractive method of imposing boundary conditions for higher-order finite difference methods, in contrast to, for example, the injection method, which typically will not be stable if high-order differentiation operators are used.&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
{{Div col|colwidth=20em}}&lt;br /&gt;
* [[Finite element method]]&lt;br /&gt;
* [[Finite difference]]&lt;br /&gt;
* [[Finite difference time domain]]&lt;br /&gt;
* [[Infinite difference method]]&lt;br /&gt;
* [[Stencil (numerical analysis)]]&lt;br /&gt;
* [[Finite difference coefficients]]&lt;br /&gt;
* [[Five-point stencil]]&lt;br /&gt;
* [[Lax–Richtmyer theorem]]&lt;br /&gt;
* [[Finite difference methods for option pricing]]&lt;br /&gt;
* [[Upwind differencing scheme for convection]]&lt;br /&gt;
* [[Central differencing scheme]]&lt;br /&gt;
* [[Discrete Poisson equation]]&lt;br /&gt;
* [[Discrete Laplace operator]]&lt;br /&gt;
{{div col end}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
==Further reading==&lt;br /&gt;
* K.W. Morton and D.F. Mayers, &amp;#039;&amp;#039;Numerical Solution of Partial Differential Equations, An Introduction&amp;#039;&amp;#039;. Cambridge University Press, 2005.&lt;br /&gt;
* Autar Kaw and E. Eric Kalu, &amp;#039;&amp;#039;Numerical Methods with Applications&amp;#039;&amp;#039;, (2008) [http://www.autarkaw.com/books/numericalmethods/index.html]. Contains a brief, engineering-oriented introduction to FDM (for ODEs) in [http://numericalmethods.eng.usf.edu/topics/finite_difference_method.html Chapter 08.07].&lt;br /&gt;
* {{cite book|author=John Strikwerda|title=Finite Difference Schemes and Partial Differential Equations|year=2004|publisher=SIAM|isbn=978-0-89871-639-9|edition=2nd}}&lt;br /&gt;
* {{Citation&lt;br /&gt;
 | last = Smith&lt;br /&gt;
 | first = G. D.&lt;br /&gt;
 | title = Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed.&lt;br /&gt;
 | publisher = Oxford University Press&lt;br /&gt;
 | year = 1985&lt;br /&gt;
}}&lt;br /&gt;
* {{cite book|author=Peter Olver|author-link=Peter J. Olver|title=Introduction to Partial Differential Equations|year=2013|publisher=Springer|isbn=978-3-319-02099-0|at=Chapter 5: Finite differences|url=http://www.math.umn.edu/~olver/pde.html}}.&lt;br /&gt;
* [[Randall J. LeVeque]], &amp;#039;&amp;#039;[http://faculty.washington.edu/rjl/fdmbook/ Finite Difference Methods for Ordinary and Partial Differential Equations]&amp;#039;&amp;#039;, SIAM, 2007.&lt;br /&gt;
* Sergey Lemeshevsky, Piotr Matus, Dmitriy Poliakov (Eds.): &amp;quot;Exact Finite-Difference Schemes&amp;quot;, De Gruyter (2016). DOI: https://doi.org/10.1515/9783110491326.&lt;br /&gt;
* Mikhail Shashkov: &amp;#039;&amp;#039;Conservative Finite-Difference Methods on General Grids&amp;#039;&amp;#039;, CRC Press, ISBN 0-8493-7375-1 (1996). &lt;br /&gt;
&lt;br /&gt;
{{Numerical PDE}}&lt;br /&gt;
&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
{{DEFAULTSORT:Finite Difference Method}}&lt;br /&gt;
[[Category:Finite differences]]&lt;br /&gt;
[[Category:Numerical differential equations]]&lt;/div&gt;</summary>
		<author><name>imported&gt;Robar Arnos</name></author>
	</entry>
</feed>