Variance

{{short description|Statistical measure of how far values spread from their average}}
{{about|the mathematical concept|other uses|Variance (disambiguation)}}
[[File:Comparison standard deviations.svg|thumb|400px|right|Example of samples from two populations with the same mean but different variances. The red population has mean {{math|1=''μ'' = 100}} and variance {{math|1=''σ''<sup>2</sup> = 100}} ({{math|1=''σ'' = 10}}), while the blue population has mean {{math|1=''μ'' = 100}} and variance {{math|1=''σ''<sup>2</sup> = 2500}} ({{math|1=''σ'' = 50}}).]]


In [[probability theory]] and [[statistics]], '''variance''' is the [[expected value]] of the [[squared deviations from the mean|squared deviation from the mean]] of a [[random variable]]. The [[standard deviation]] is obtained as the square root of the variance. Variance is a measure of [[statistical dispersion|dispersion]], meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second [[central moment]] of a [[probability distribution|distribution]], and the [[covariance]] of the random variable with itself, and it is often represented by {{tmath| \sigma^2 }}, {{tmath| s^2 }}, {{tmath| \operatorname{Var}(X) }}, {{tmath| V(X) }}, or {{tmath| \mathbb{V}(X) }}.<ref>{{cite book |last1=Wasserman |first1=Larry |title=All of Statistics: a concise course in statistical inference |date=2005 |publisher=Springer texts in statistics |isbn=978-1-4419-2322-6 |page=51 }}</ref>


An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the [[Average absolute deviation|expected absolute deviation]]; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions.


There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical [[probability distribution]] and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to estimate the population variance on the basis of the sample variance, as discussed in the section below.


The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it include [[descriptive statistics]], [[statistical inference]], [[hypothesis testing]], [[goodness of fit]], and [[Monte Carlo method|Monte Carlo sampling]].


== Definition ==
The variance of a random variable <math>X</math> is the [[expected value]] of the [[Squared deviations from the mean|squared deviation from the mean]] of {{tmath| X }}, {{tmath|1= \mu = \operatorname{E}[X] }}:
<math display="block"> \operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right]. </math>
<math display="block"> \operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right]. </math>
This definition encompasses random variables that are generated by processes that are [[discrete random variable|discrete]], [[continuous random variable|continuous]], [[Cantor distribution|neither]], or mixed. The variance can also be thought of as the [[covariance]] of a random variable with itself:
This definition encompasses random variables that are generated by processes that are [[discrete random variable|discrete]], [[continuous random variable|continuous]], [[Cantor distribution|neither]], or mixed. The variance can also be thought of as the [[covariance]] of a random variable with itself:
<math display="block">\operatorname{Var}(X) = \operatorname{Cov}(X, X).</math>


<math display="block">\operatorname{Var}(X) = \operatorname{Cov}(X, X).</math>
The variance is also equivalent to the second [[cumulant]] of a probability distribution that generates {{tmath| X }}. The variance is typically designated as {{tmath| \operatorname{Var}(X) }}, or sometimes as <math>V(X)</math> or {{tmath| \mathbb{V}(X) }}, or symbolically as {{tmath| \sigma^2_X }} or simply <math>\sigma^2</math> (pronounced "[[sigma]] squared"). The expression for the variance can be expanded as follows:
The variance is also equivalent to the second [[cumulant]] of a probability distribution that generates <math>X</math>. The variance is typically designated as <math>\operatorname{Var}(X)</math>, or sometimes as <math>V(X)</math> or <math>\mathbb{V}(X)</math>, or symbolically as <math>\sigma^2_X</math> or simply <math>\sigma^2</math> (pronounced "[[sigma]] squared"). The expression for the variance can be expanded as follows:
<math display="block">\begin{align}
<math display="block">\begin{align}
\operatorname{Var}(X) &= \operatorname{E}\left[{\left(X - \operatorname{E}[X]\right)}^2\right] \\[4pt]
\operatorname{Var}(X) &= \operatorname{E}\left[{\left(X - \operatorname{E}[X]\right)}^2\right] \\[4pt]
&= \operatorname{E}\left[X^2 - 2X\operatorname{E}[X] + \operatorname{E}[X]^2\right] \\[4pt]
&= \operatorname{E}\left[X^2\right] - 2\operatorname{E}[X]\operatorname{E}[X] + \operatorname{E}[X]^2 \\[4pt]
&= \operatorname{E}\left[X^2\right] - \operatorname{E}[X]^2
\end{align}</math>


In other words, the variance of {{mvar|X}} is equal to the mean of the square of {{mvar|X}} minus the square of the mean of {{mvar|X}}. This equation should not be used for computations using [[floating-point arithmetic]], because it suffers from [[catastrophic cancellation]] if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see ''[[Algorithms for calculating variance]]''.
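
As a concrete illustration of the cancellation, the following Python sketch (illustrative only; the shifted data values are hypothetical) compares the expanded formula above with the numerically stable two-pass computation:
<syntaxhighlight lang="python">
def naive_variance(xs):
    """E[X^2] - E[X]^2 -- suffers catastrophic cancellation for large means."""
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def two_pass_variance(xs):
    """Mean of squared deviations about the mean (numerically stable)."""
    n = len(xs)
    mu = sum(xs) / n
    return sum((x - mu) ** 2 for x in xs) / n

data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]  # true variance is 22.5
print(naive_variance(data))     # wildly wrong: the signal is below one ULP of x**2
print(two_pass_variance(data))  # 22.5
</syntaxhighlight>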


=== Discrete random variable ===


If the generator of random variable <math>X</math> is [[Discrete probability distribution|discrete]] with [[probability mass function]] <math>x_1 \mapsto p_1, x_2 \mapsto p_2, \ldots, x_n \mapsto p_n</math>, then
<math display="block">\operatorname{Var}(X) = \sum_{i=1}^n p_i \cdot {\left(x_i - \mu\right)}^2,</math>
<math display="block">\operatorname{Var}(X) = \sum_{i=1}^n p_i \cdot {\left(x_i - \mu\right)}^2,</math>
where <math>\mu</math> is the expected value. That is,
<math display="block">\mu = \sum_{i=1}^n p_i x_i .</math>
<math display="block">\mu = \sum_{i=1}^n p_i x_i .</math>
(When such a discrete [[weighted variance]] is specified by weights whose sum is not&nbsp;1, then one divides by the sum of the weights.)


The variance of a collection of <math>n</math> equally likely values can be written as
<math display="block"> \operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2 </math>
<math display="block"> \operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2 </math>
where <math>\mu</math> is the average value. That is,
<math display="block">\mu = \frac{1}{n}\sum_{i=1}^n x_i .</math>
<math display="block">\mu = \frac{1}{n}\sum_{i=1}^n x_i .</math>


The variance of a set of <math>n</math> equally likely values can be equivalently expressed, without directly referring to the mean, in terms of the pairwise squared distances of the points from each other:<ref>{{cite conference |author=Yuli Zhang |author2=Huaiyu Wu |author3=Lei Cheng |date=June 2012 |title=Some new deformation formulas about variance and covariance |conference=Proceedings of 4th International Conference on Modelling, Identification and Control (ICMIC 2012) |pages=987–992}}</ref>
<math display="block"> \operatorname{Var}(X) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2} {\left(x_i - x_j\right)}^2 = \frac{1}{n^2} \sum_i \sum_{j>i} {\left(x_i - x_j\right)}^2. </math>
<math display="block"> \operatorname{Var}(X) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2} {\left(x_i - x_j\right)}^2 = \frac{1}{n^2} \sum_i \sum_{j>i} {\left(x_i - x_j\right)}^2. </math>


=== Absolutely continuous random variable ===
If the random variable <math>X</math> has a [[probability density function]] {{tmath| f(x) }}, and <math>F(x)</math> is the corresponding [[cumulative distribution function]], then
<math display="block">\begin{align}
<math display="block">\begin{align}
   \operatorname{Var}(X) = \sigma^2 &= \int_{\R} {\left(x - \mu\right)}^2 f(x) \, dx \\[4pt]
     &= \int_{\R} x^2 f(x) \,dx - 2\mu\int_{\R} x f(x) \,dx + \mu^2\int_{\R} f(x) \,dx \\[4pt]
     &= \int_{\R} x^2 \,dF(x) - \mu^2,
  \end{align}</math>
or equivalently,
<math display="block">\operatorname{Var}(X) = \int_{\R} x^2 f(x) \,dx - \mu^2 ,</math>
<math display="block">\operatorname{Var}(X) = \int_{\R} x^2 f(x) \,dx - \mu^2 ,</math>
where <math>\mu</math> is the expected value of <math>X</math> given by
<math display="block">\mu = \int_{\R} x f(x) \, dx = \int_{\R} x \, dF(x). </math>
<math display="block">\mu = \int_{\R} x f(x) \, dx = \int_{\R} x \, dF(x). </math>


In these formulas, the integrals with respect to <math>dx</math> and <math>dF(x)</math> are [[Lebesgue integral|Lebesgue]] and [[Lebesgue–Stieltjes integration|Lebesgue–Stieltjes]] integrals, respectively.


If the function <math>x^2f(x)</math> is [[Riemann-integrable]] on every finite interval <math>[a,b]\subset\R,</math> then
<math display="block">\operatorname{Var}(X) = \int^{+\infty}_{-\infty} x^2 f(x) \, dx - \mu^2, </math>
<math display="block">\operatorname{Var}(X) = \int^{+\infty}_{-\infty} x^2 f(x) \, dx - \mu^2, </math>
where the integral is an [[improper Riemann integral]].


== Examples ==


=== Exponential distribution ===
The [[exponential distribution]] with parameter {{mvar|&lambda;}} > 0 is a continuous distribution whose [[probability density function]] is given by
<math display="block">f(x) = \lambda e^{-\lambda x}</math>
<math display="block">f(x) = \lambda e^{-\lambda x}</math>
on the interval {{math|[0, ∞)}}. Its first two moments are {{math|1=E[''X''] = 1/''λ''}} and {{math|1=E[''X''<sup>2</sup>] = 2/''λ''<sup>2</sup>}}, both obtained by integration by parts, so
<math display="block">\operatorname{Var}(X) = \operatorname{E}\left[X^2\right] - \operatorname{E}[X]^2 = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}.</math>


=== Fair die <!--Singular: die; plural: dice. Don't change--> ===
A fair [[dice|six-sided die]] can be modeled as a discrete random variable, {{mvar|X}}, with outcomes 1 through 6, each with equal probability 1/6. The expected value of {{mvar|X}} is <math>(1 + 2 + 3 + 4 + 5 + 6)/6 = 7/2.</math> Therefore, the variance of {{mvar|X}} is
<math display="block">\begin{align}
<math display="block">\begin{align}
Line 168: Line 152:
|}
|}
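
The same numbers can be reproduced exactly with rational arithmetic; a minimal Python sketch of the probability-mass-function formulas above:
<syntaxhighlight lang="python">
# Exact mean and variance of a fair six-sided die (illustrative sketch).
from fractions import Fraction

outcomes = range(1, 7)
p = Fraction(1, 6)  # equal probability for each face

mu = sum(p * x for x in outcomes)               # 7/2
var = sum(p * (x - mu) ** 2 for x in outcomes)  # 35/12
print(mu, var)  # prints: 7/2 35/12
</syntaxhighlight>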


== Properties ==


=== Basic properties ===
Variance is non-negative because the squares are positive or zero:
<math display="block">\operatorname{Var}(X)\ge 0.</math>
<math display="block">\operatorname{Var}(X)\ge 0.</math>
Moreover, the variance of a random variable is zero exactly when the variable [[almost surely]] takes a single constant value:
<math display="block">\operatorname{Var}(X)= 0 \iff \exists a : P(X=a) = 1.</math>


=== Issues of finiteness ===
If a distribution does not have a finite expected value, as is the case for the [[Cauchy distribution]], then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a [[Pareto distribution]] whose [[Pareto index|index]] <math>k</math> satisfies <math>1 < k \leq 2.</math>


=== Decomposition ===
The general formula for variance decomposition or the [[law of total variance]] is: If <math>X</math> and <math>Y</math> are two random variables, and the variance of <math>X</math> exists, then
<math display="block">\operatorname{Var}[X] = \operatorname{E}(\operatorname{Var}[X\mid Y]) + \operatorname{Var}(\operatorname{E}[X\mid Y]).</math>
<math display="block">\operatorname{Var}[X] = \operatorname{E}(\operatorname{Var}[X\mid Y]) + \operatorname{Var}(\operatorname{E}[X\mid Y]).</math>


The [[conditional expectation]] <math>\operatorname E(X\mid Y)</math> of <math>X</math> given <math>Y</math>, and the [[conditional variance]] <math>\operatorname{Var}(X\mid Y)</math> may be understood as follows. Given any particular value ''y'' of&nbsp;the random variable&nbsp;''Y'', there is a conditional expectation <math>\operatorname E(X\mid Y=y)</math> given the event&nbsp;''Y''&nbsp;=&nbsp;''y''. This quantity depends on the particular value&nbsp;''y''; it is a function <math> g(y) = \operatorname E(X\mid Y=y)</math>. That same function evaluated at the random variable ''Y'' is the conditional expectation {{tmath|1= \operatorname E(X\mid Y) = g(Y) }}.


In particular, if <math>Y</math> is a discrete random variable assuming possible values <math>y_1, y_2, y_3, \ldots</math> with corresponding probabilities <math>p_1, p_2, p_3, \ldots</math>, then in the formula for total variance, the first term on the right-hand side becomes
<math display="block">\operatorname{E}(\operatorname{Var}[X \mid Y]) = \sum_i p_i \sigma^2_i,</math>
<math display="block">\operatorname{E}(\operatorname{Var}[X \mid Y]) = \sum_i p_i \sigma^2_i,</math>
where <math>\sigma^2_i = \operatorname{Var}[X \mid Y = y_i]</math>. Similarly, the second term on the right-hand side becomes
<math display="block">\operatorname{Var}(\operatorname{E}[X \mid Y]) = \sum_i p_i \mu_i^2 - \left(\sum_i p_i \mu_i\right)^2 = \sum_i p_i \mu_i^2 - \mu^2,</math>
<math display="block">\operatorname{Var}(\operatorname{E}[X \mid Y]) = \sum_i p_i \mu_i^2 - \left(\sum_i p_i \mu_i\right)^2 = \sum_i p_i \mu_i^2 - \mu^2,</math>
 
where {{tmath|1= \mu_i = \operatorname{E}[X \mid Y = y_i] }} and {{tmath|1= \textstyle \mu = \sum_i p_i \mu_i }}. Thus the total variance is given by
<math display="block">\operatorname{Var}[X] = \sum_i p_i \sigma^2_i + \left( \sum_i p_i \mu_i^2 - \mu^2 \right).</math>


A similar formula is applied in [[analysis of variance]], where the corresponding formula is
<math display="block">\mathit{MS}_\text{total} = \mathit{MS}_\text{between} + \mathit{MS}_\text{within};</math>
here <math>\mathit{MS}</math> refers to the mean of the squares. In [[linear regression]] analysis the corresponding formula is
<math display="block">\mathit{MS}_\text{total} = \mathit{MS}_\text{regression} + \mathit{MS}_\text{residual}.</math>
<math display="block">\mathit{MS}_\text{total} = \mathit{MS}_\text{regression} + \mathit{MS}_\text{residual}.</math>


Similar decompositions hold for the sums of squared deviations (sums of squares, {{math|''SS''}}); in linear regression, for example,
<math display="block">\mathit{SS}_\text{total} = \mathit{SS}_\text{regression} + \mathit{SS}_\text{residual}.</math>


=== Calculation from the CDF ===
The population variance for a non-negative random variable can be expressed in terms of the [[cumulative distribution function]] ''F'' using
<math display="block">2\int_0^\infty u(1 - F(u))\,du - {\left[\int_0^\infty (1 - F(u))\,du\right]}^2.</math>
<math display="block">2\int_0^\infty u(1 - F(u))\,du - {\left[\int_0^\infty (1 - F(u))\,du\right]}^2.</math>


This expression can be used to calculate the variance in situations where the CDF, but not the [[probability density function|density]], can be conveniently expressed.
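
For instance, the formula can be evaluated numerically; the following Python sketch (assuming SciPy is available, and using the exponential survival function {{math|1=1 − ''F''(''u'') = ''e''<sup>−''λu''</sup>}} as an arbitrary example) recovers the variance {{math|1/''λ''<sup>2</sup>}}:
<syntaxhighlight lang="python">
# Variance from the CDF-based formula, checked on Exponential(lam).
import numpy as np
from scipy.integrate import quad

lam = 2.0
survival = lambda u: np.exp(-lam * u)  # 1 - F(u) for Exponential(lam)

first, _ = quad(lambda u: u * survival(u), 0, np.inf)  # = 1/lam^2
second, _ = quad(survival, 0, np.inf)                  # = E[X] = 1/lam

variance = 2 * first - second**2
print(variance, 1 / lam**2)  # both 0.25
</syntaxhighlight>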


=== Characteristic property ===
The second [[moment (mathematics)|moment]] of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e. {{tmath|1= \mathrm{argmin}_m \, \mathrm{E}\left(\left(X - m\right)^2\right) = \mathrm{E}(X) }}. Conversely, if a continuous function <math>\varphi</math> satisfies <math>\mathrm{argmin}_m\,\mathrm{E}(\varphi(X - m)) = \mathrm{E}(X)</math> for all random variables {{math|''X''}}, then it is necessarily of the form {{tmath|1= \varphi(x) = a x^2 + b }}, where {{math|''a'' > 0}}. This also holds in the multidimensional case.<ref>{{Cite journal | last1 = Kagan | first1 = A. | last2 = Shepp | first2 = L. A. | doi = 10.1016/S0167-7152(98)00041-8 | title = Why the variance? | journal = Statistics & Probability Letters | volume = 38 | issue = 4 | pages = 329–333 | year = 1998 }}</ref>


=== Units of measurement ===
Unlike the [[Average absolute deviation|expected absolute deviation]], the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their [[standard deviation]] or [[root mean square deviation]] is often preferred over using the variance. In the dice example the standard deviation is {{math|{{sqrt|2.9}} ≈ 1.7}}, slightly larger than the expected absolute deviation of&nbsp;1.5.
The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization [[covariance]], is used frequently in theoretical statistics; however the expected absolute deviation tends to be more [[Robust statistics|robust]] as it is less sensitive to [[outlier]]s arising from [[measurement error|measurement anomalies]] or an unduly [[heavy-tailed distribution]].


== Propagation ==


=== Addition and multiplication by a constant ===
Variance is [[Invariant (mathematics)|invariant]] with respect to changes in a [[location parameter]]. That is, if a constant is added to all values of the variable, the variance is unchanged:
<math display="block">\operatorname{Var}(X+a)=\operatorname{Var}(X).</math>
<math display="block">\operatorname{Var}(X+a)=\operatorname{Var}(X).</math>
If all values are scaled by a constant, the variance is scaled by the square of that constant:
<math display="block">\operatorname{Var}(aX)=a^2\operatorname{Var}(X).</math>
The variance of a sum of two random variables is given by
<math display="block">\begin{align}
\operatorname{Var}(aX + bY) &= a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab\, \operatorname{Cov}(X,Y), \\[4pt]
\operatorname{Var}(aX - bY) &= a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) - 2ab\, \operatorname{Cov}(X,Y)
\end{align}</math>
where <math>\operatorname{Cov}(X,Y)</math> is the [[covariance]].


=== Linear combinations ===
In general, for the sum of <math>N</math> random variables {{tmath| \{X_1,\dots,X_N\} }}, the variance becomes:
<math display="block">\operatorname{Var}\left(\sum_{i=1}^N X_i\right) = \sum_{i,j=1}^N\operatorname{Cov}(X_i,X_j) = \sum_{i=1}^N\operatorname{Var}(X_i) + \sum_{i,j=1,i\ne j}^N\operatorname{Cov}(X_i,X_j),</math>  
<math display="block">\operatorname{Var}\left(\sum_{i=1}^N X_i\right) = \sum_{i,j=1}^N\operatorname{Cov}(X_i,X_j) = \sum_{i=1}^N\operatorname{Var}(X_i) + \sum_{i,j=1,i\ne j}^N\operatorname{Cov}(X_i,X_j),</math>  
see also the general [[Bienaymé's identity]].


These results lead to the variance of a [[linear combination]] as:
<math display="block">\begin{align}
<math display="block">\begin{align}
\operatorname{Var}\left( \sum_{i=1}^N a_iX_i\right) &=\sum_{i,j=1}^{N} a_ia_j\operatorname{Cov}(X_i,X_j) \\
&= \sum_{i=1}^N a_i^2\operatorname{Var}(X_i) + 2\sum_{1\le i<j\le N} a_ia_j\operatorname{Cov}(X_i,X_j).
\end{align}</math>
If the random variables <math>X_1,\dots,X_N</math> are such that
<math display="block">\operatorname{Cov}(X_i,X_j)=0\ ,\ \forall\ (i\ne j) ,</math>
then they are said to be [[Covariance#Definition|uncorrelated]]. It follows immediately from the expression given earlier that if the random variables <math>X_1,\dots,X_N</math> are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
<math display="block">\operatorname{Var}\left(\sum_{i=1}^N X_i\right) = \sum_{i=1}^N\operatorname{Var}(X_i).</math>
<math display="block">\operatorname{Var}\left(\sum_{i=1}^N X_i\right) = \sum_{i=1}^N\operatorname{Var}(X_i).</math>


==== Matrix notation for the variance of a linear combination ====


Define <math>X</math> as a column vector of <math>n</math> random variables {{tmath| X_1, \ldots,X_n }}, and <math>c</math> as a column vector of <math>n</math> scalars {{tmath| c_1, \ldots,c_n }}. Therefore, <math>c^\mathsf{T} X</math> is a [[linear combination]] of these random variables, where <math>c^\mathsf{T}</math> denotes the [[transpose]] of {{tmath| c }}. Also let <math>\Sigma</math> be the [[covariance matrix]] of {{tmath| X }}. The variance of <math>c^\mathsf{T}X</math> is then given by:<ref>{{cite book | last1=Johnson | first1=Richard | last2=Wichern | first2=Dean | year=2001 | title=Applied Multivariate Statistical Analysis | url=https://archive.org/details/appliedmultivari00john_130 | url-access=limited | publisher=Prentice Hall | page=[https://archive.org/details/appliedmultivari00john_130/page/n96 76] | isbn=0-13-187715-1 }}</ref>
<math display="block">\operatorname{Var}\left(c^\mathsf{T} X\right) = c^\mathsf{T} \Sigma c .</math>


This implies that the variance of the mean can be written as (with a column vector of ones)
<math display="block">\operatorname{Var}\left(\bar{x}\right) = \operatorname{Var}\left(\frac{1}{n} 1'X\right) = \frac{1}{n^2} 1'\Sigma 1.</math>


=== Sum of variables ===


==== Sum of uncorrelated variables ====
{{main article|Bienaymé's identity}}
{{see also|Sum of normally distributed random variables}}
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of [[uncorrelated]] random variables is the sum of their variances:
<math display="block">\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i).</math>
<math display="block">\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i).</math>


This statement is called the [[Irénée-Jules Bienaymé|Bienaymé]] formula<ref>[[Michel Loève|Loève, M.]] (1977) "Probability Theory", ''Graduate Texts in Mathematics'', Volume 45, 4th edition, Springer-Verlag, p.&nbsp;12.</ref> and was discovered in 1853.<ref>[[Irénée-Jules Bienaymé|Bienaymé, I.-J.]] (1853) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", ''Comptes rendus de l'Académie des sciences Paris'', 37, p.&nbsp;309–317; digital copy available [http://visualiseur.bnf.fr/CadresFenetre?O=NUMM-2994&I=313] {{webarchive |url=https://web.archive.org/web/20180623145935/http://visualiseur.bnf.fr/CadresFenetre?O=NUMM-2994&I=313 |date=2018-06-23 }}</ref><ref>[[Irénée-Jules Bienaymé|Bienaymé, I.-J.]] (1867) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", ''Journal de Mathématiques Pures et Appliquées, Série 2'', Tome 12, p.&nbsp;158–167; digital copy available [http://gallica.bnf.fr/ark:/12148/bpt6k16411c/f166.image.n19][http://sites.mathdoc.fr/JMPA/PDF/JMPA_1867_2_12_A10_0.pdf]</ref> It is often made with the stronger condition that the variables are [[statistical independence|independent]], but being uncorrelated suffices. So if all the variables have the same variance {{math|''σ''<sup>2</sup>}}, then, since division by {{math|''n''}} is a linear transformation, this formula immediately implies that the variance of their mean is
 
<math display="block">
<math display="block">
   \operatorname{Var}\left(\overline{X}\right) = \operatorname{Var}\left(\frac{1}{n} \sum_{i=1}^n X_i\right) =
   \operatorname{Var}\left(\overline{X}\right) = \operatorname{Var}\left(\frac{1}{n} \sum_{i=1}^n X_i\right) =
Line 303: Line 270:


To prove the initial statement, it suffices to show that
<math display="block">\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y).</math>
<math display="block">\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y).</math>


The general result then follows by induction. Starting with the definition,
<math display="block">\begin{align}
<math display="block">\begin{align}
   \operatorname{Var}(X + Y) &= \operatorname{E}\left[(X + Y)^2\right] - (\operatorname{E}[X + Y])^2 \\[5pt]
   \operatorname{Var}(X + Y) &= \operatorname{E}\left[(X + Y)^2\right] - (\operatorname{E}[X + Y])^2 \\[5pt]
   &= \operatorname{E}\left[X^2 + 2XY + Y^2\right] - \left(\operatorname{E}[X] + \operatorname{E}[Y]\right)^2.
\end{align}</math>


Using the linearity of the [[Expectation Operator|expectation operator]] and the assumption of independence (or uncorrelatedness) of ''X'' and ''Y'', this further simplifies as follows:
<math display="block">\begin{align}
<math display="block">\begin{align}
   \operatorname{Var}(X + Y) &= \operatorname{E}{\left[X^2\right]} + 2\operatorname{E}[XY] + \operatorname{E}{\left[Y^2\right]} - \left(\operatorname{E}[X]^2 + 2\operatorname{E}[X] \operatorname{E}[Y] + \operatorname{E}[Y]^2\right) \\[5pt]
   &= \operatorname{E}{\left[X^2\right]} - \operatorname{E}[X]^2 + \operatorname{E}{\left[Y^2\right]} - \operatorname{E}[Y]^2 \\[5pt]
   &= \operatorname{Var}(X) + \operatorname{Var}(Y).
\end{align}</math>


==== Sum of correlated variables ====


===== Sum of correlated variables with fixed sample size =====
{{main article|Bienaymé's identity}}
In general, the variance of the sum of {{math|''n''}} variables is the sum of their [[covariance]]s:
<math display="block">\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \operatorname{Cov}\left(X_i, X_j\right) = \sum_{i=1}^n \operatorname{Var}\left(X_i\right) + 2 \sum_{1 \leq i < j\leq n} \operatorname{Cov}\left(X_i, X_j\right).</math>
<math display="block">\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \operatorname{Cov}\left(X_i, X_j\right) = \sum_{i=1}^n \operatorname{Var}\left(X_i\right) + 2 \sum_{1 \leq i < j\leq n} \operatorname{Cov}\left(X_i, X_j\right).</math>
 
(Note: The second equality comes from the fact that {{math|1=Cov(''X''<sub>''i''</sub>, ''X''<sub>''i''</sub>) = Var(''X''<sub>''i''</sub>)}}.)


Here, <math>\operatorname{Cov}(\cdot,\cdot)</math> is the [[covariance]], which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of [[Cronbach's alpha]] in [[classical test theory]].


So, if the variables have equal variance {{math|''σ''<sup>2</sup>}} and the average [[correlation]] of distinct variables is {{math|''ρ''}}, then the variance of their mean is
<math display="block">\operatorname{Var}\left(\overline{X}\right) = \frac{\sigma^2}{n} + \frac{n - 1}{n}\rho\sigma^2.</math>


This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the [[standard error|uncertainty of the mean]]. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
<math display="block">\operatorname{Var}\left(\overline{X}\right) = \frac{1}{n} + \frac{n - 1}{n}\rho.</math>
<math display="block">\operatorname{Var}\left(\overline{X}\right) = \frac{1}{n} + \frac{n - 1}{n}\rho.</math>


This formula is used in the [[Spearman–Brown prediction formula]] of classical test theory. This converges to {{math|''ρ''}} if {{math|''n''}} goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
<math display="block">\lim_{n \to \infty} \operatorname{Var}\left(\overline{X}\right) = \rho.</math>


Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the [[law of large numbers]] states that the sample mean will converge for independent variables.


===== Sum of uncorrelated variables with random sample size =====
There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size {{math|''N''}} is a random variable whose variation adds to the variation of {{math|''X''}}, such that,<ref>Cornell, J R, and Benjamin, C A, ''Probability, Statistics, and Decisions for Civil Engineers'', McGraw-Hill, NY, 1970, pp. 178–9.</ref>
<math display="block">\operatorname{Var}\left(\sum_{i=1}^{N}X_i\right)=\operatorname{E}\left[N\right]\operatorname{Var}(X)+\operatorname{Var}(N)(\operatorname{E}\left[X\right])^2</math>
<math display="block">\operatorname{Var}\left(\sum_{i=1}^{N}X_i\right)=\operatorname{E}\left[N\right]\operatorname{Var}(X)+\operatorname{Var}(N)(\operatorname{E}\left[X\right])^2</math>
which follows from the [[law of total variance]].


If {{math|''N''}} has a [[Poisson distribution]], then <math>\operatorname{E}[N]=\operatorname{Var}(N)</math> with estimator {{math|1=''n'' = ''N''}}. So, the estimator of <math>\textstyle \operatorname{Var}\left(\sum_{i=1}^{n}X_i\right)</math> becomes {{tmath| \textstyle n{S_x}^2+n\bar{X}^2 }}, giving <math>\textstyle \operatorname{SE}(\bar{X})=\sqrt{{({S_x}^2+\bar{X}^2)}/{n}}</math> (see ''{{slink|Standard error#Standard error of the sample mean}}'').
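
A Monte Carlo check of the identity above (with arbitrary parameters; the Poisson sampler is a standard textbook method, not taken from the cited source):
<syntaxhighlight lang="python">
# Check Var(sum_{i=1}^N X_i) = E[N] Var(X) + Var(N) (E[X])^2 with
# N ~ Poisson(lam), for which E[N] = Var(N) = lam.
import math, random

lam, mu, sigma = 4.0, 2.0, 1.5   # X ~ Normal(mu, sigma), independent of N
trials = 100_000

def poisson(lam):
    """Knuth's multiplication method; fine for small lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

sums = [sum(random.gauss(mu, sigma) for _ in range(poisson(lam)))
        for _ in range(trials)]
m = sum(sums) / trials
var = sum((s - m) ** 2 for s in sums) / trials
print(var, lam * sigma**2 + lam * mu**2)  # both about 25.0
</syntaxhighlight>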


==== Weighted sum of variables ====
{{see also|Weighted arithmetic mean#Variance{{!}}Variance of a weighted arithmetic mean}}
{{distinguish|Weighted variance}}


The scaling property and the Bienaymé formula, along with the property of the [[covariance]] {{math|Cov(''aX'',&nbsp;''bY'') {{=}} ''ab'' Cov(''X'',&nbsp;''Y'')}} jointly imply that
<math display="block">\operatorname{Var}(aX \pm bY) =a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) \pm 2ab\, \operatorname{Cov}(X, Y).</math>
<math display="block">\operatorname{Var}(aX \pm bY) =a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) \pm 2ab\, \operatorname{Cov}(X, Y).</math>




The expression above can be extended to a weighted sum of multiple variables:
<math display="block">\operatorname{Var}\left(\sum_{i=1}^n a_iX_i\right) = \sum_{i=1}^n a_i^2 \operatorname{Var}(X_i) + 2\sum_{1\le i<j\le n} a_ia_j\operatorname{Cov}(X_i,X_j)</math>


=== Product of variables ===


==== Product of independent variables ====


If two variables ''X'' and ''Y'' are [[Independence (probability theory)|independent]], the variance of their product is given by<ref>{{Cite journal |last=Goodman |first=Leo A. |author-link=Leo Goodman |date=December 1960 |title=On the Exact Variance of Products |journal=Journal of the American Statistical Association |volume=55 |issue=292 |pages=708–713 |doi=10.2307/2281592 |jstor=2281592}}</ref>
<math display="block">\operatorname{Var}(XY) = [\operatorname{E}(X)]^2 \operatorname{Var}(Y) + [\operatorname{E}(Y)]^2 \operatorname{Var}(X) + \operatorname{Var}(X)\operatorname{Var}(Y).</math>


Equivalently, using the basic properties of expectation, it is given by
<math display="block">\operatorname{Var}(XY) = \operatorname{E}\left(X^2\right) \operatorname{E}\left(Y^2\right) - [\operatorname{E}(X)]^2 [\operatorname{E}(Y)]^2.</math>


==== Product of statistically dependent variables ====


In general, if two variables are statistically dependent, then the variance of their product is given by:
<math display="block">\begin{align}
   \operatorname{Var}(XY) &= \operatorname{E}\left[X^2 Y^2\right] - [\operatorname{E}(XY)]^2 \\[5pt]
   &= \operatorname{Cov}\left(X^2, Y^2\right) + \operatorname{E}\left(X^2\right)\operatorname{E}\left(Y^2\right) - [\operatorname{E}(XY)]^2 \\[5pt]
   &= \operatorname{Cov}\left(X^2, Y^2\right) + \left(\operatorname{Var}(X) + [\operatorname{E}(X)]^2\right)\left(\operatorname{Var}(Y) + [\operatorname{E}(Y)]^2\right) - [\operatorname{Cov}(X, Y) + \operatorname{E}(X)\operatorname{E}(Y)]^2
\end{align}</math>


=== Arbitrary functions ===
{{main|Uncertainty propagation}}


The [[delta method]] uses second-order [[Taylor expansion]]s to approximate the variance of a function of one or more random variables (see ''[[Taylor expansions for the moments of functions of random variables]]''). For example, the approximate variance of a function of one variable is given by
<math display="block">\operatorname{Var}\left[f(X)\right] \approx \left(f'(\operatorname{E}\left[X\right])\right)^2\operatorname{Var}\left[X\right]</math>
provided that {{math|''f''}} is twice differentiable and that the mean and variance of {{math|''X''}} are finite.


== Population variance and sample variance ==
{{anchor|Estimation}}
{{see also|Unbiased estimation of standard deviation}}
Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one [[Estimation theory|estimates]] the mean and variance from a limited set of observations by using an [[estimator]] equation. The estimator is a function of the [[Sample (statistics)|sample]] of {{math|''n''}} [[observations]] drawn without observational bias from the whole [[Statistical population|population]] of potential observations. In this example, the sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.


The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the '''sample mean''' and '''(uncorrected) sample variance''' – these are [[consistent estimator]]s (they converge to the value of the whole population as the number of samples increases) but can be improved. Most simply, the sample variance is computed as the sum of [[squared deviations]] about the (sample) mean, divided by {{math|''n''}} as the number of samples. However, using values other than {{math|''n''}} improves the estimator in various ways. Four common values for the denominator are {{math|''n''}}, {{math|''n'' − 1}}, {{math|''n'' + 1}}, and {{math|''n'' − 1.5}}: {{math|''n''}} is the simplest (the variance of the sample), {{math|''n'' − 1}} eliminates bias,<ref name=bessel /> {{math|''n'' + 1}} minimizes [[mean squared error]] for the normal distribution,<ref name=Kourouklis/> and {{math|''n'' − 1.5}} mostly eliminates bias in [[unbiased estimation of standard deviation]] for the normal distribution.<ref>{{cite journal | last = Brugger | first = R. M. | year = 1969 | title = A Note on Unbiased Estimation of the Standard Deviation | journal = The American Statistician | volume = 23 | issue = 4 | pages = 32 | doi = 10.1080/00031305.1969.10481865 }}</ref>


Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a [[biased estimator]]: it underestimates the variance by a factor of {{math|(''n'' − 1) / ''n''}}; correcting this factor, resulting in the sum of squared deviations about the sample mean divided by {{math|''n'' − 1}} instead of {{math|''n''}}, is called ''[[Bessel's correction]]''.<ref name=bessel>{{cite book | last = Reichmann | first = W. J. | title = Use and Abuse of Statistics | publisher = Methuen | year = 1961 | edition = Reprinted 1964–1970 by Pelican | location = London | chapter = Appendix 8 }}</ref> The resulting estimator is unbiased and is called the '''(corrected) sample variance''' or '''unbiased sample variance'''. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise, and the variance can safely be estimated as that of the samples about the (independently known) mean.

Secondly, the sample variance does not generally minimize [[mean squared error]] between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the [[excess kurtosis]] of the population (see ''{{slink|Mean squared error#Variance}}'') and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than {{math|''n'' − 1}}) and is a simple example of a [[shrinkage estimator]]: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by {{math|''n'' + 1}} (instead of {{math|''n'' − 1}} or {{math|''n''}}) minimizes mean squared error.<ref name=Kourouklis>{{cite journal |last=Kourouklis |first=Stavros |date=2012 |title=A New Estimator of the Variance Based on Minimizing Mean Squared Error |url=https://www.jstor.org/stable/23339501 |journal=The American Statistician |volume=66 |issue=4 |pages=234–236 |doi=10.1080/00031305.2012.735209 |jstor=23339501 |issn=0003-1305 |url-access=subscription }}</ref> The resulting estimator is biased, however, and is known as the '''biased sample variance'''.
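
The following Python sketch (assuming [[NumPy]] is available; the seed, sample size, and distribution are illustrative choices, not from the article) estimates the bias and mean squared error of the estimators with denominators {{math|''n''}}, {{math|''n'' − 1}}, and {{math|''n'' + 1}} on normally distributed data:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, trials, true_var = 10, 200_000, 4.0  # sigma^2 = 4

samples = rng.normal(0.0, 2.0, size=(trials, n))
# Sum of squared deviations about the sample mean, one value per trial.
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for denom in (n, n - 1, n + 1):
    est = ss / denom
    bias = est.mean() - true_var
    mse = ((est - true_var) ** 2).mean()
    print(f"denominator {denom}: bias ≈ {bias:+.3f}, MSE ≈ {mse:.3f}")
# Dividing by n - 1 gives (almost) zero bias; dividing by n + 1 gives the
# smallest MSE for normal data, at the cost of a downward bias.
</syntaxhighlight>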


=== Population variance ===
In general, the '''''population variance''''' of a ''finite'' [[statistical population|population]] of size {{mvar|N}} with values {{math|''x''<sub>''i''</sub>}} is given by
<math display="block">\begin{align}
\sigma^2 &= \frac{1}{N} \sum_{i=1}^N \left(x_i - \mu\right)^2 \\
         &= \operatorname{E}[x_i^2] - \mu^2
\end{align}</math>
where the population mean is <math display="inline">\mu = \operatorname{E}[x_i] = \frac 1N \sum_{i=1}^N x_i </math>, {{tmath|1= \textstyle \operatorname{E}[x_i^2] = \left(\frac{1}{N} \sum_{i=1}^N x_i^2\right) }}, and <math display="inline">\operatorname{E} </math> is the [[Expected value|expectation value]] operator.

The population variance can also be computed using<ref>{{cite conference |author=Yuli Zhang |author2=Huaiyu Wu |author3=Lei Cheng |date=June 2012 |title=Some new deformation formulas about variance and covariance |conference=Proceedings of 4th International Conference on Modelling, Identification and Control (ICMIC2012) |pages=987–992}}</ref>
<math display="block">\sigma^2 = \frac {1} {N^2}\sum_{i<j}\left( x_i-x_j \right)^2 = \frac{1}{2N^2} \sum_{i, j=1}^N\left( x_i-x_j \right)^2.</math>

The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
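
The pairwise formula above, which avoids any direct reference to the mean, can be checked numerically; a short Python sketch with NumPy (the population values are an arbitrary small set):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
N = len(x)

# Direct definition: mean squared deviation from the population mean.
var_direct = ((x - x.mean()) ** 2).mean()

# Pairwise form: sum of squared differences over all ordered pairs, over 2N^2.
diffs = x[:, None] - x[None, :]
var_pairwise = (diffs ** 2).sum() / (2 * N**2)

assert np.isclose(var_direct, var_pairwise)
print(var_direct, var_pairwise)  # both 4.0 for this population
</syntaxhighlight>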


=== Sample variance ===
{{see also|Sample standard deviation}}

===={{visible anchor|Biased sample variance}}====
In many practical situations, the true variance of a population is not known ''a priori'' and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a [[sample (statistics)|sample]] of the population.<ref>{{cite book | last = Navidi | first = William | year = 2006 | title = Statistics for Engineers and Scientists | publisher = McGraw-Hill | page = 14 }}</ref> This is generally referred to as '''sample variance''' or '''empirical variance'''. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.

We take a [[statistical sample|sample with replacement]] of {{mvar|n}} values {{math|''Y''<sub>1</sub>, ..., ''Y''<sub>''n''</sub>}} from the population of size {{mvar|N}}, where {{math|''n'' < ''N''}}, and estimate the variance on the basis of this sample.<ref>Montgomery, D. C. and Runger, G. C. (1994) ''Applied statistics and probability for engineers'', page 201. John Wiley & Sons New York</ref> Directly taking the variance of the sample data gives the average of the [[squared deviations]]:<ref>{{cite conference | author1 = Yuli Zhang | author2 = Huaiyu Wu | author3 = Lei Cheng | date = June 2012 | title = Some new deformation formulas about variance and covariance | conference = Proceedings of 4th International Conference on Modelling, Identification and Control (ICMIC2012) | pages = 987–992 }}</ref>
<math display="block">\tilde{S}_Y^2 =
<math display="block">\tilde{S}_Y^2 =
   \frac{1}{n} \sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 = \left(\frac 1n \sum_{i=1}^n Y_i^2\right) - \overline{Y}^2 =
   \frac{1}{n} \sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 = \left(\frac 1n \sum_{i=1}^n Y_i^2\right) - \overline{Y}^2 =
   \frac{1}{n^2} \sum_{i,j\,:\,i<j}\left(Y_i - Y_j\right)^2.
   \frac{1}{n^2} \sum_{i,j\,:\,i<j}\left(Y_i - Y_j\right)^2.
</math>
</math>
 
(See the section ''{{slink|#Population variance}}'' for the derivation of this formula.) Here, <math>\overline{Y}</math> denotes the [[sample mean]]:
<math display="block">\overline{Y} = \frac{1}{n} \sum_{i=1}^n Y_i .</math>


Since the {{math|''Y''<sub>''i''</sub>}} are selected randomly, both <math>\overline{Y}</math> and <math>\tilde{S}_Y^2</math> are [[Random variable|random variables]]. Their expected values can be evaluated by averaging over the ensemble of all possible samples {{math|{{mset|''Y''<sub>''i''</sub>}}}} of size {{mvar|n}} from the population. For <math>\tilde{S}_Y^2</math> this gives:
<math display="block">\begin{align}
<math display="block">\begin{align}
   \operatorname{E}[\tilde{S}_Y^2]
   \operatorname{E}[\tilde{S}_Y^2]
Line 464: Line 414:
\end{align}</math>
\end{align}</math>


Here, <math display="inline">\sigma^2 = \operatorname{E}[Y_i^2] - \mu^2 </math> is the [[Variance#Population variance|population variance]] derived above, and <math display="inline">\operatorname{E}[Y_i Y_j] = \operatorname{E}[Y_i] \operatorname{E}[Y_j] = \mu^2</math> for <math>i \ne j</math> because <math>Y_i</math> and {{tmath| Y_j }} are independent.


Hence <math display="inline">\tilde{S}_Y^2</math> gives an estimate of the population variance <math display="inline">\sigma^2</math> that is biased by a factor of <math display="inline">\frac{n - 1}{n}</math> because the expectation value of <math display="inline">\tilde{S}_Y^2</math> is smaller than the population variance (true variance) by that factor. For this reason, <math display="inline">\tilde{S}_Y^2</math> is referred to as the ''biased sample variance''.


===={{visible anchor|Unbiased sample variance}}====
Correcting for this bias yields the ''unbiased sample variance'', denoted {{tmath| S^2 }}:
<math display="block">S^2 = \frac{n}{n - 1} \tilde{S}_Y^2 = \frac{n}{n - 1} \left[ \frac{1}{n} \sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 \right] = \frac{1}{n - 1} \sum_{i=1}^n \left(Y_i - \overline{Y} \right)^2</math>

===== Example =====
For the set of numbers {{mset|10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105}}: if this set is the whole data population for some measurement, then its variance is the population variance, 932.743, obtained as the sum of the squared deviations about the mean of the set divided by 12, the number of members. If the set is instead a sample from the whole population, then the unbiased sample variance is 1017.538, the sum of the squared deviations about the sample mean divided by 11 instead of 12. In [[Microsoft Excel]], the function VAR.S gives the unbiased sample variance, while VAR.P gives the population variance.
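
The same numbers can be checked in a few lines of Python with NumPy, where the <code>ddof</code> ("delta degrees of freedom") argument selects the denominator {{math|''n'' − ddof}}:

<syntaxhighlight lang="python">
import numpy as np

data = np.array([10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105])

print(np.var(data, ddof=0))  # ≈ 932.743 (population variance, like Excel's VAR.P)
print(np.var(data, ddof=1))  # ≈ 1017.538 (unbiased sample variance, like VAR.S)
</syntaxhighlight>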


==== Distribution of the sample variance ====
Being a function of [[random variable]]s, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that {{math|''Y''<sub>''i''</sub>}} are independent observations from a [[normal distribution]], [[Cochran's theorem]] shows that the unbiased sample variance {{math|''S''<sup>2</sup>}} follows a scaled [[chi-squared distribution]]:
<math display="block">
(n - 1) \frac{S^2}{\sigma^2} \sim \chi^2_{n-1}
</math>
where {{math|''σ''<sup>2</sup>}} is the [[Variance#Population variance|population variance]]. As a direct consequence, it follows that
<math display="block">
\operatorname{E}\left(S^2\right) = \operatorname{E}\left(\frac{\sigma^2}{n - 1} \chi^2_{n-1}\right) = \sigma^2 ,
</math>
and<ref>{{cite book | author1-last = Casella | author1-first = George | author2-last = Berger | author2-first = Roger L. | year = 2002 | title = Statistical Inference | at = Example 7.3.3, p.&nbsp;331 | edition = 2nd | isbn = 0-534-24312-6 }}</ref>
<math display="block">
  \operatorname{Var}\left[S^2\right] = \operatorname{Var}\left(\frac{\sigma^2}{n - 1} \chi^2_{n-1}\right) = \frac{\sigma^4}{{\left(n - 1\right)}^2} \operatorname{Var}\left(\chi^2_{n-1}\right) = \frac{2\sigma^4}{n - 1}.
</math>


If {{math|''Y''<sub>''i''</sub>}} are independent and identically distributed, but not necessarily normally distributed, then<ref>Mood, A. M., Graybill, F. A., and Boes, D.C. (1974) ''Introduction to the Theory of Statistics'', 3rd Edition, McGraw-Hill, New York, p. 229</ref>
<math display="block">
     \operatorname{E}\left[S^2\right] = \sigma^2, \quad
     \operatorname{Var}\left[S^2\right] = \frac{\sigma^4}{n} \left(\kappa - 1 + \frac{2}{n - 1} \right) = \frac{1}{n} \left(\mu_4 - \frac{n - 3}{n - 1}\sigma^4\right),
</math>
where {{math|''κ''}} is the [[kurtosis]] of the distribution and {{math|''μ''<sub>4</sub>}} is the fourth [[central moment]].

If the conditions of the [[law of large numbers]] hold for the squared observations, {{math|''S''<sup>2</sup>}} is a [[consistent estimator]] of&nbsp;{{math|''σ''<sup>2</sup>}}; indeed, the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).<ref>{{cite book |last1=Kenney |first1=John F. |last2=Keeping |first2=E.S. |date=1951 |title=Mathematics of Statistics. Part Two. |edition=2nd |publisher=D. Van Nostrand Company, Inc. |location=Princeton, New Jersey |url=http://krishikosh.egranth.ac.in/bitstream/1/2025521/1/G2257.pdf |via=KrishiKosh |url-status=dead |archive-url=https://web.archive.org/web/20181117022434/http://krishikosh.egranth.ac.in/bitstream/1/2025521/1/G2257.pdf |archive-date=Nov 17, 2018 }}</ref><ref>Rose, Colin; Smith, Murray D. (2002). "[http://www.mathstatica.com/book/Mathematical_Statistics_with_Mathematica.pdf Mathematical Statistics with Mathematica]". Springer-Verlag, New York.</ref><ref>Weisstein, Eric W. "[http://mathworld.wolfram.com/SampleVarianceDistribution.html Sample Variance Distribution]". MathWorld Wolfram.</ref>
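
The two moments derived above for normal data can be verified by simulation; a Monte Carlo sketch in Python (NumPy; the parameters are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, trials = 8, 9.0, 200_000

y = rng.normal(5.0, 3.0, size=(trials, n))
s2 = y.var(axis=1, ddof=1)  # unbiased sample variance S^2, one per trial

print(s2.mean())               # ≈ sigma2 = 9, since E[S^2] = sigma^2
print(s2.var())                # ≈ 2 * sigma2**2 / (n - 1)
print(2 * sigma2**2 / (n - 1)) # = 162/7 ≈ 23.14
</syntaxhighlight>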


==== Samuelson's inequality ====
[[Samuelson's inequality]] is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.<ref>{{cite journal |last=Samuelson |first=Paul |title=How Deviant Can You Be? |journal=[[Journal of the American Statistical Association]] |volume=63 |issue=324 |year=1968 |pages=1522–1525 |jstor=2285901 |doi=10.1080/01621459.1968.10480944}}</ref> Values must lie within the limits {{tmath| \bar y \pm \sigma_Y (n-1)^{1/2} }}.
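
As a quick numerical illustration (a Python sketch with NumPy, reusing the twelve-value example set from above), every observation indeed lies within these limits:

<syntaxhighlight lang="python">
import numpy as np

y = np.array([10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105])
n = len(y)
mean, sd = y.mean(), y.std(ddof=0)  # biased (population-style) standard deviation

lo, hi = mean - sd * np.sqrt(n - 1), mean + sd * np.sqrt(n - 1)
print(lo, hi)                       # ≈ -40.9 and 161.7
assert ((y >= lo) & (y <= hi)).all()
</syntaxhighlight>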


=== Relations with the harmonic and arithmetic means ===
It has been shown<ref>{{cite journal |first=A. McD. |last=Mercer |title=Bounds for A–G, A–H, G–H, and a family of inequalities of Ky Fan's type, using a general method |journal=J. Math. Anal. Appl. |volume=243 |issue=1 |pages=163–173 |year=2000 |doi=10.1006/jmaa.1999.6688 |doi-access=free }}</ref> that for a sample {{math|{{mset|''y''<sub>''i''</sub>}}}} of positive real numbers,
<math display="block">  \sigma_y^2 \le 2y_{\max} (A - H), </math>
<math display="block">  \sigma_y^2 \le 2y_{\max} (A - H), </math>
where {{math|''y''<sub>max</sub>}} is the maximum of the sample, {{mvar|A}} is the arithmetic mean, {{mvar|H}} is the [[harmonic mean]] of the sample and <math>\sigma_y^2</math> is the (biased) variance of the sample.
where {{math|''y''<sub>max</sub>}} is the maximum of the sample, {{mvar|A}} is the arithmetic mean, {{mvar|H}} is the [[harmonic mean]] of the sample and <math>\sigma_y^2</math> is the (biased) variance of the sample.


This bound has been improved, and it is known that variance is bounded by
<math display="block">\begin{align}
  \sigma_y^2 &\le \frac{y_{\max} (A - H)(y_\max - A)}{y_\max - H}, \\[1ex]
  \sigma_y^2 &\ge \frac{y_{\min} (A - H)(A - y_\min)}{H - y_\min},
\end{align} </math>
where {{math|''y''<sub>min</sub>}} is the minimum of the sample.<ref name=Sharma2008>{{cite journal |first=R. |last=Sharma |title=Some more inequalities for arithmetic mean, harmonic mean and variance |journal=Journal of Mathematical Inequalities |volume=2 |issue=1 |pages=109–114 |year=2008 |doi=10.7153/jmi-02-11 |citeseerx=10.1.1.551.9397 }}</ref>
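
These bounds can be checked numerically; a Python sketch with NumPy (an arbitrary positive sample):

<syntaxhighlight lang="python">
import numpy as np

y = np.array([1.0, 2.0, 4.0, 8.0])
A = y.mean()                  # arithmetic mean
H = len(y) / (1.0 / y).sum()  # harmonic mean
var = y.var(ddof=0)           # biased sample variance

upper = y.max() * (A - H) * (y.max() - A) / (y.max() - H)
lower = y.min() * (A - H) * (A - y.min()) / (H - y.min())
assert lower <= var <= upper
print(lower, var, upper)      # ≈ 3.92 <= 7.19 <= 9.37
</syntaxhighlight>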


== Tests of equality of variances ==
The [[F-test of equality of variances]] and the [[chi square test]]s are adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult.

Resampling methods, which include the [[Bootstrapping (statistics)|bootstrap]] and the [[Resampling (statistics)|jackknife]], may be used to test the equality of variances.
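
For example, [[SciPy]] implements Bartlett's test (which assumes normality) and Levene's test (more robust to non-normality); a minimal sketch with simulated data:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.0, 2.0, size=50)  # deliberately larger spread

print(stats.bartlett(a, b))                 # assumes normal data
print(stats.levene(a, b, center="median"))  # Brown–Forsythe variant
</syntaxhighlight>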


== Moment of inertia ==
{{see also|Moment (physics)#Examples}}
The variance of a probability distribution is analogous to the [[moment of inertia]] in [[classical mechanics]] of a corresponding mass distribution along a line, with respect to rotation about its center of mass.<ref name=pearson>{{Cite web |last=Magnello |first=M. Eileen |title=Karl Pearson and the Origins of Modern Statistics: An Elastician becomes a Statistician |url=https://rutherfordjournal.org/article010107.html |website=The Rutherford Journal}}</ref> It is because of this analogy that such things as the variance are called ''[[moment (mathematics)|moment]]s'' of [[probability distribution]]s.<ref name=pearson/> The covariance matrix is related to the [[moment of inertia tensor]] for multivariate distributions. The moment of inertia of a cloud of ''n'' points with a covariance matrix of <math>\Sigma</math> is given by{{Citation needed|date=February 2012}}
<math display="block">I = n\left(\mathbf{1}_{3\times 3} \operatorname{tr}(\Sigma) - \Sigma\right).</math>
For example, a cloud of points spread mostly along one axis, with covariance matrix <math>\Sigma = \operatorname{diag}(10,\, 0.1,\, 0.1)</math>, has the most variance in that direction but the smallest moment of inertia about it:
<math display="block">I = n\begin{bmatrix} 0.2 & 0 & 0 \\ 0 & 10.1 & 0 \\ 0 & 0 & 10.1 \end{bmatrix}.</math>


== Semivariance ==
The ''semivariance'' is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:
<math display="block">\text{Semivariance} = \frac{1}{n} \sum_{i:x_i < \mu} {\left(x_i - \mu\right)}^2</math>
It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not.<ref>{{cite web |url=https://famafrench.dimensional.com/questions-answers/qa-semi-variance-a-better-risk-measure.aspx |title=Q&A: Semi-Variance: A Better Risk Measure? |last1=Fama |first1=Eugene F. |last2=French |first2=Kenneth R. |date=2010-04-21 |website=Fama/French Forum }}</ref>

For inequalities associated with the semivariance, see ''{{slink|Chebyshev's inequality|Semivariances}}''.
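
A direct Python transcription of the definition (NumPy; the values are an arbitrary small set). Note that the sum is still divided by the full {{math|''n''}}, not by the number of below-mean observations:

<syntaxhighlight lang="python">
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mu = x.mean()

below = x[x < mu]  # only observations strictly below the mean
semivariance = ((below - mu) ** 2).sum() / len(x)
print(semivariance)  # 1.5 here, versus a full variance of 4.0
</syntaxhighlight>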


== Etymology ==
The term ''variance'' was first introduced by [[Ronald Fisher]] in his 1918 paper ''[[The Correlation Between Relatives on the Supposition of Mendelian Inheritance]]'':<ref>[[Ronald Fisher]] (1918) [http://digital.library.adelaide.edu.au/dspace/bitstream/2440/15097/1/9.pdf The correlation between relatives on the supposition of Mendelian Inheritance]</ref>

<blockquote>The great body of available statistics show us that the deviations of a [[biometry|human measurement]] from its mean follow very closely the [[Normal distribution|Normal Law of Errors]], and, therefore, that the variability may be uniformly measured by the [[standard deviation]] corresponding to the [[square root]] of the [[mean square error]]. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations <math>\sigma_1</math> and <math>\sigma_2</math>, it is found that the distribution, when both causes act together, has a standard deviation <math>\sqrt{\sigma_1^2 + \sigma_2^2}</math>.  It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability.  We shall term this quantity the Variance...</blockquote>


== Generalizations ==

=== For complex variables ===
If <math>x</math> is a scalar [[complex number|complex]]-valued random variable, with values in {{tmath| \mathbb{C} }}, then its variance is {{tmath| \operatorname{E}\left[(x - \mu)(x - \mu)^*\right] }}, where <math>x^*</math> is the [[complex conjugate]] of {{tmath| x }}. This variance is a real scalar.


=== For vector-valued random variables ===

==== As a matrix ====
If <math>X</math> is a [[vector space|vector]]-valued random variable, with values in <math>\mathbb{R}^n,</math> and thought of as a column vector, then a natural generalization of variance is <math>\operatorname{E}\left[(X - \mu) {(X - \mu)}^{\mathsf{T}}\right],</math> where <math>\mu = \operatorname{E}(X)</math> and <math>X^{\mathsf{T}}</math> is the transpose of {{mvar|X}}, and so is a row vector. The result is a [[positive definite matrix|positive semi-definite square matrix]], commonly referred to as the [[variance-covariance matrix]] (or simply as the ''covariance matrix'').

If <math>X</math> is a vector- and complex-valued random variable, with values in {{tmath| \mathbb{C}^n }}, then the [[Covariance matrix#Complex random vectors|covariance matrix is]] {{tmath| \operatorname{E}\left[(X - \mu){(X - \mu)}^\dagger\right] }}, where <math>X^\dagger</math> is the [[conjugate transpose]] of {{tmath| X }}.{{citation needed|date=September 2016}} This matrix is also positive semi-definite and square.


==== As a scalar ====
Another generalization of variance for vector-valued random variables <math>X</math>, which results in a scalar value rather than in a matrix, is the [[generalized variance]] {{tmath| \det(C) }}, the [[determinant]] of the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.<ref>{{cite book |last1=Kocherlakota |first1=S. |title=Encyclopedia of Statistical Sciences |last2=Kocherlakota |first2=K. |chapter=Generalized Variance |publisher=Wiley Online Library |doi=10.1002/0471667196.ess0869 |year=2004 |isbn=0-471-66719-6 }}</ref>

A different generalization is obtained by considering the equation for the scalar variance, <math> \operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right] </math>, and reinterpreting <math>(X - \mu)^2</math> as the squared [[Euclidean distance]] between the random variable and its mean, or, simply as the scalar product of the vector <math>X - \mu</math> with itself. This results in <math>\operatorname{E}\left[(X - \mu)^{\mathsf{T}}(X - \mu)\right] = \operatorname{tr}(C),</math> which is the [[Trace (linear algebra)|trace]] of the covariance matrix.
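
A small Python sketch (NumPy; the distribution parameters are arbitrary) contrasting these generalizations: the full covariance matrix, its determinant (the generalized variance), and its trace:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
X = rng.multivariate_normal(mean=[0, 0], cov=[[2.0, 0.6], [0.6, 1.0]], size=10_000)

C = np.cov(X, rowvar=False)  # variance-covariance matrix (unbiased, n - 1)
print(C)                     # ≈ [[2.0, 0.6], [0.6, 1.0]]
print(np.linalg.det(C))      # generalized variance, ≈ 2*1 - 0.6**2 = 1.64
print(np.trace(C))           # trace generalization, ≈ 3.0
</syntaxhighlight>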


== See also ==
{{portal|Mathematics}}
{{wiktionary|variance}}
* [[Bhatia–Davis inequality]]
* [[Coefficient of variation]]


== References ==
{{reflist}}


{{-}}
{{theory of probability distributions}}
{{statistics|descriptive|state=collapsed}}

{{authority control}}


[[Category:Moments (mathematics)]]
[[Category:Statistical deviation and dispersion]]
[[Category:Articles containing proofs]]

Latest revision as of 15:07, 7 December 2025

Template:Short description Script error: No such module "about".

File:Comparison standard deviations.svg
Example of samples from two populations with the same mean but different variances. The red population has mean μ = 100Script error: No such module "Check for unknown parameters". and variance σ2 = 100Script error: No such module "Check for unknown parameters". (σ = 10Script error: No such module "Check for unknown parameters".), while the blue population has mean μ = 100Script error: No such module "Check for unknown parameters". and variance σ2 = 2500Script error: No such module "Check for unknown parameters". (σ = 50Script error: No such module "Check for unknown parameters".).

In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. The standard deviation is obtained as the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers are spread out from their average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by Template:Tmath, Template:Tmath, Template:Tmath, Template:Tmath, or Template:Tmath.[1]

An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions.

There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical probability distribution and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to estimate the population variance on the basis of the sample variance, as discussed in the section below.

The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling.

<templatestyles src="Template:TOC limit/styles.css" />

File:Variance visualisation.svg
Geometric visualisation of the variance of an arbitrary distribution (2, 4, 4, 4, 5, 5, 7, 9): Template:Ordered list

Definition

The variance of a random variable X is the expected value of the squared deviation from the mean of Template:Tmath, Template:Tmath: Var(X)=E[(Xμ)2]. This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself: Var(X)=Cov(X,X).

The variance is also equivalent to the second cumulant of a probability distribution that generates Template:Tmath. The variance is typically designated as Template:Tmath, or sometimes as V(X) or Template:Tmath, or symbolically as Template:Tmath or simply σ2 (pronounced "sigma squared"). The expression for the variance can be expanded as follows: Var(X)=E[(XE[X])2]=E[X22XE[X]+E[X]2]=E[X2]2E[X]E[X]+E[X]2=E[X2]2E[X]2+E[X]2=E[X2]E[X]2

In other words, the variance of Template:Mvar is equal to the mean of the square of Template:Mvar minus the square of the mean of Template:Mvar. This equation should not be used for computations using floating-point arithmetic, because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see Algorithms for calculating variance.

Discrete random variable

If the generator of random variable X is discrete with probability mass function x1p1,x2p2,,xnpn, then Var(X)=i=1npi(xiμ)2, where μ is the expected value. That is, μ=i=1npixi. (When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.)

The variance of a collection of n equally likely values can be written as Var(X)=1ni=1n(xiμ)2 where μ is the average value. That is, μ=1ni=1nxi.

The variance of a set of n equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all pairwise squared distances of points from each other:[2] Var(X)=1n2i=1nj=1n12(xixj)2=1n2ij>i(xixj)2.

Absolutely continuous random variable

If the random variable X has a probability density function Template:Tmath, and F(x) is the corresponding cumulative distribution function, then Var(X)=σ2=(xμ)2f(x)dx=x2f(x)dx2μxf(x)dx+μ2f(x)dx=x2dF(x)2μxdF(x)+μ2dF(x)=x2dF(x)2μμ+μ21=x2dF(x)μ2, or equivalently, Var(X)=x2f(x)dxμ2, where μ is the expected value of X given by μ=xf(x)dx=xdF(x).

In these formulas, the integrals with respect to dx and dF(x) are Lebesgue and Lebesgue–Stieltjes integrals, respectively.

If the function x2f(x) is Riemann-integrable on every finite interval [a,b], then Var(X)=+x2f(x)dxμ2, where the integral is an improper Riemann integral.

Examples

Exponential distribution

The exponential distribution with parameter Template:Mvar > 0 is a continuous distribution whose probability density function is given by f(x)=λeλx on the interval Template:Closed-open. Its mean can be shown to be E[X]=0xλeλxdx=1λ.

Using integration by parts and making use of the expected value already calculated, we have: E[X2]=0x2λeλxdx=[x2eλx]0+02xeλxdx=0+2λE[X]=2λ2.

Thus, the variance of Template:Mvar is given by Var(X)=E[X2]E[X]2=2λ2(1λ)2=1λ2.

Fair die

A fair six-sided die can be modeled as a discrete random variable, Template:Mvar, with outcomes 1 through 6, each with equal probability 1/6. The expected value of Template:Mvar is (1+2+3+4+5+6)/6=7/2. Therefore, the variance of Template:Mvar is Var(X)=i=1616(i72)2=16((5/2)2+(3/2)2+(1/2)2+(1/2)2+(3/2)2+(5/2)2)=35122.92.

The general formula for the variance of the outcome, Template:Mvar, of an Template:Mvar-sided die is Var(X)=E(X2)(E(X))2=1ni=1ni2(1ni=1ni)2=(n+1)(2n+1)6(n+12)2=n2112.

Commonly used probability distributions

The following table lists the variance for some commonly used probability distributions.

Name of the probability distribution Probability distribution function Mean Variance
Binomial distribution Pr(X=k)=(nk)pk(1p)nk np np(1p)
Geometric distribution Pr(X=k)=(1p)k1p 1p (1p)p2
Normal distribution f(xμ,σ2)=12πσ2e12(xμσ)2 μ σ2
Uniform distribution (continuous) f(xa,b)={1bafor axb,0for x<a or x>b a+b2 (ba)212
Exponential distribution f(xλ)=λeλx 1λ 1λ2
Poisson distribution f(kλ)=eλλkk! λ λ

Properties

Basic properties

Variance is non-negative because the squares are positive or zero: Var(X)0.

The variance of a constant is zero. Var(a)=0.

Conversely, if the variance of a random variable is 0, then it is almost surely a constant. That is, it always has the same value: Var(X)=0a:P(X=a)=1.

Issues of finiteness

If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index k satisfies 1<k2.

Decomposition

The general formula for variance decomposition or the law of total variance is: If X and Y are two random variables, and the variance of X exists, then Var[X]=E(Var[XY])+Var(E[XY]).

The conditional expectation E(XY) of X given Y, and the conditional variance Var(XY) may be understood as follows. Given any particular value y of the random variable Y, there is a conditional expectation E(XY=y) given the event Y = y. This quantity depends on the particular value y; it is a function g(y)=E(XY=y). That same function evaluated at the random variable Y is the conditional expectation Template:Tmath.

In particular, if Y is a discrete random variable assuming possible values y1,y2,y3 with corresponding probabilities p1,p2,p3,, then in the formula for total variance, the first term on the right-hand side becomes E(Var[XY])=ipiσi2, where σi2=Var[XY=yi]. Similarly, the second term on the right-hand side becomes Var(E[XY])=ipiμi2(ipiμi)2=ipiμi2μ2, where Template:Tmath and Template:Tmath. Thus the total variance is given by Var[X]=ipiσi2+(ipiμi2μ2).

A similar formula is applied in analysis of variance, where the corresponding formula is\ MStotal=MSbetween+MSwithin; here MS refers to the Mean of the Squares. In linear regression analysis the corresponding formula is MStotal=MSregression+MSresidual.

This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.

Similar decompositions are possible for the sum of squared deviations (sum of squares, SS): SStotal=SSbetween+SSwithin, SStotal=SSregression+SSresidual.

Calculation from the CDF

The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using 20u(1F(u))du[0(1F(u))du]2.

This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed.

Characteristic property

The second moment of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e. Template:Tmath. Conversely, if a continuous function φ satisfies argminmE(φ(Xm))=E(X) for all random variables XScript error: No such module "Check for unknown parameters"., then it is necessarily of the form Template:Tmath, where a > 0Script error: No such module "Check for unknown parameters".. This also holds in the multidimensional case.[3]

Units of measurement

Unlike the expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is

  1. REDIRECT Template:Radic

Template:Rcat shell ≈ 1.7Script error: No such module "Check for unknown parameters"., slightly larger than the expected absolute deviation of 1.5.

The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution.

Propagation

Addition and multiplication by a constant

Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged: Var(X+a)=Var(X).

If all values are scaled by a constant, the variance is scaled by the square of that constant: Var(aX)=a2Var(X).

The variance of a sum of two random variables is given by Var(aX+bY)=a2Var(X)+b2Var(Y)+2abCov(X,Y)Var(aXbY)=a2Var(X)+b2Var(Y)2abCov(X,Y) where Cov(X,Y) is the covariance.

Linear combinations

In general, for the sum of N random variables Template:Tmath, the variance becomes: Var(i=1NXi)=i,j=1NCov(Xi,Xj)=i=1NVar(Xi)+i,j=1,ijNCov(Xi,Xj), see also general Bienaymé's identity.

These results lead to the variance of a linear combination as: Var(i=1NaiXi)=i,j=1NaiajCov(Xi,Xj)=i=1Nai2Var(Xi)+ijaiajCov(Xi,Xj)=i=1Nai2Var(Xi)+21i<jNaiajCov(Xi,Xj).

If the random variables X1,,XN are such that Cov(Xi,Xj)=0 ,  (ij), then they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables X1,,XN are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically: Var(i=1NXi)=i=1NVar(Xi).

Since independent random variables are always uncorrelated (see Template:Section link), the equation above holds in particular when the random variables X1,,Xn are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.

Matrix notation for the variance of a linear combination

Define X as a column vector of n random variables Template:Tmath, and c as a column vector of n scalars Template:Tmath. Therefore, cTX is a linear combination of these random variables, where cT denotes the transpose of Template:Tmath. Also let Σ be the covariance matrix of Template:Tmath. The variance of cTX is then given by:[4] Var(cTX)=cTΣc.

This implies that the variance of the mean can be written as (with a column vector of ones) Var(x¯)=Var(1n1X)=1n21Σ1.

Sum of variables

Sum of uncorrelated variables

Template:Main article Script error: No such module "Labelled list hatnote". One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances: Var(i=1nXi)=i=1nVar(Xi).

This statement is called the Bienaymé formula[5] and was discovered in 1853.[6][7] It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance σ2Script error: No such module "Check for unknown parameters"., then, since division by nScript error: No such module "Check for unknown parameters". is a linear transformation, this formula immediately implies that the variance of their mean is Var(X)=Var(1ni=1nXi)=1n2i=1nVar(Xi)=1n2nσ2=σ2n.

That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.

To prove the initial statement, it suffices to show that Var(X+Y)=Var(X)+Var(Y).

The general result then follows by induction. Starting with the definition, Var(X+Y)=E[(X+Y)2](E[X+Y])2=E[X2+2XY+Y2](E[X]+E[Y])2.

Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows: Var(X+Y)=E[X2]+2E[XY]+E[Y2](E[X]2+2E[X]E[Y]+E[Y]2)=E[X2]+E[Y2]E[X]2E[Y]2=Var(X)+Var(Y).

Sum of correlated variables

Sum of correlated variables with fixed sample size

Template:Main article In general, the variance of the sum of nScript error: No such module "Check for unknown parameters". variables is the sum of their covariances: Var(i=1nXi)=i=1nj=1nCov(Xi,Xj)=i=1nVar(Xi)+21i<jnCov(Xi,Xj). (Note: The second equality comes from the fact that Cov(Xi, Xi) = Var(Xi)Script error: No such module "Check for unknown parameters"..)

Here, Cov(,) is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory.

So, if the variables have equal variance σ2Script error: No such module "Check for unknown parameters". and the average correlation of distinct variables is ρScript error: No such module "Check for unknown parameters"., then the variance of their mean is Var(X)=σ2n+n1nρσ2.

This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
$$\operatorname{Var}\left(\bar{X}\right) = \frac{1}{n} + \frac{n - 1}{n}\rho.$$

This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to $\rho$ if $n$ goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
$$\lim_{n \to \infty} \operatorname{Var}\left(\bar{X}\right) = \rho.$$

Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.
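The effect is easy to see in simulation. Below is a sketch (assuming NumPy, with arbitrary choices $n = 50$ and $\rho = 0.3$) that draws equicorrelated standardized variables and compares the empirical variance of their mean with $1/n + \frac{n-1}{n}\rho$, which stays near $\rho$ no matter how large $n$ becomes:

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 50, 0.3

# Equicorrelated standardized variables: unit variance, common correlation rho
Sigma = rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n)

X = rng.multivariate_normal(np.zeros(n), Sigma, size=200_000)
print(X.mean(axis=1).var())         # empirical variance of the sample mean
print(1.0 / n + (n - 1) / n * rho)  # formula: close to rho, not to 0
```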

Sum of uncorrelated variables with random sample size

There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size $N$ is a random variable whose variation adds to the variation of $X$, such that[8]
$$\operatorname{Var}\left(\sum_{i=1}^N X_i\right) = \operatorname{E}[N]\operatorname{Var}(X) + \operatorname{Var}(N)\left(\operatorname{E}[X]\right)^2,$$
which follows from the law of total variance.

If $N$ has a Poisson distribution, then $\operatorname{E}[N] = \operatorname{Var}(N)$, with estimator $n = N$. So the estimator of $\operatorname{Var}\left(\sum_{i=1}^n X_i\right)$ becomes $n S_x^2 + n\bar{X}^2$, giving the standard error of the sample mean
$$\operatorname{SE}(\bar{X}) = \sqrt{\frac{S_x^2 + \bar{X}^2}{n}}.$$
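The total-variance identity for a Poisson-distributed sample size can be illustrated by simulation; a rough sketch with made-up parameter values, not a prescribed procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, mu, sigma2 = 20.0, 3.0, 4.0  # E[N] = Var(N) = lam for a Poisson N

totals = []
for _ in range(100_000):
    N = rng.poisson(lam)  # random sample size
    totals.append(rng.normal(mu, np.sqrt(sigma2), size=N).sum())

print(np.var(totals))              # empirical variance of the random-length sum
print(lam * sigma2 + lam * mu**2)  # E[N] Var(X) + Var(N) (E[X])^2, with Var(N) = lam
```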

Weighted sum of variables

Script error: No such module "Labelled list hatnote". Script error: No such module "Distinguish".

The scaling property and the Bienaymé formula, along with the property of the covariance $\operatorname{Cov}(aX, bY) = ab\operatorname{Cov}(X, Y)$, jointly imply that
$$\operatorname{Var}(aX \pm bY) = a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) \pm 2ab\operatorname{Cov}(X, Y).$$

This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionately large weight in the variance of the total. For example, if $X$ and $Y$ are uncorrelated and the weight of $X$ is two times the weight of $Y$, then the weight of the variance of $X$ will be four times the weight of the variance of $Y$.

The expression above can be extended to a weighted sum of multiple variables:
$$\operatorname{Var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2 \operatorname{Var}\left(X_i\right) + 2\sum_{1 \le i < j \le n} a_i a_j \operatorname{Cov}\left(X_i, X_j\right).$$
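The two-variable identity can again be checked numerically; a minimal sketch with an arbitrary covariance matrix and weights $a = 2$, $b = 1$, echoing the example above:

```python
import numpy as np

rng = np.random.default_rng(4)

# Arbitrary example: Var(X) = 2, Var(Y) = 1.5, Cov(X, Y) = 0.8
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.5]])
a, b = 2.0, 1.0

XY = rng.multivariate_normal([0.0, 0.0], Sigma, size=1_000_000)
Z = a * XY[:, 0] + b * XY[:, 1]

var_formula = a**2 * Sigma[0, 0] + b**2 * Sigma[1, 1] + 2 * a * b * Sigma[0, 1]
print(Z.var(), var_formula)  # empirical vs. a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y)
```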

Product of variables

Product of independent variables

If two variables $X$ and $Y$ are independent, the variance of their product is given by[9]
$$\operatorname{Var}(XY) = [\operatorname{E}(X)]^2 \operatorname{Var}(Y) + [\operatorname{E}(Y)]^2 \operatorname{Var}(X) + \operatorname{Var}(X)\operatorname{Var}(Y).$$

Equivalently, using the basic properties of expectation, it is given by
$$\operatorname{Var}(XY) = \operatorname{E}\left(X^2\right)\operatorname{E}\left(Y^2\right) - [\operatorname{E}(X)]^2 [\operatorname{E}(Y)]^2.$$
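Both forms of the product formula can be checked by simulation; a sketch with arbitrarily chosen means and variances (the nonzero means matter, since they enter the formula):

```python
import numpy as np

rng = np.random.default_rng(5)

mx, vx = 2.0, 1.5   # mean and variance of X (arbitrary)
my, vy = -1.0, 0.5  # mean and variance of Y (arbitrary)
X = rng.normal(mx, np.sqrt(vx), size=2_000_000)
Y = rng.normal(my, np.sqrt(vy), size=2_000_000)  # independent of X

var_formula = mx**2 * vy + my**2 * vx + vx * vy
print((X * Y).var(), var_formula)  # empirical vs. theoretical variance of the product
```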

Product of statistically dependent variables

In general, if two variables are statistically dependent, then the variance of their product is given by:
$$\begin{aligned} \operatorname{Var}(XY) &= \operatorname{E}\left[X^2 Y^2\right] - [\operatorname{E}(XY)]^2 \\ &= \operatorname{Cov}\left(X^2, Y^2\right) + \operatorname{E}\left(X^2\right)\operatorname{E}\left(Y^2\right) - [\operatorname{E}(XY)]^2 \\ &= \operatorname{Cov}\left(X^2, Y^2\right) + \left(\operatorname{Var}(X) + [\operatorname{E}(X)]^2\right)\left(\operatorname{Var}(Y) + [\operatorname{E}(Y)]^2\right) - \left[\operatorname{Cov}(X, Y) + \operatorname{E}(X)\operatorname{E}(Y)\right]^2 \end{aligned}$$

Arbitrary functions

Script error: No such module "Labelled list hatnote".

The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables (see Taylor expansions for the moments of functions of random variables). For example, the approximate variance of a function of one variable is given by
$$\operatorname{Var}[f(X)] \approx \left(f'(\operatorname{E}[X])\right)^2 \operatorname{Var}[X],$$
provided that $f$ is twice differentiable and that the mean and variance of $X$ are finite.
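As a concrete illustration of the delta method, take $f(X) = \ln X$ with $X$ concentrated well away from zero, so that $f'(\operatorname{E}[X]) = 1/\operatorname{E}[X]$; a sketch comparing the approximation with a Monte Carlo estimate (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

mu, sigma = 10.0, 0.5  # X sits ~20 standard deviations above zero, so log(X) is safe
X = rng.normal(mu, sigma, size=1_000_000)

var_delta = (1.0 / mu) ** 2 * sigma**2  # (f'(E[X]))^2 Var[X], with f(x) = log x
var_mc = np.log(X).var()                # Monte Carlo reference value
print(var_delta, var_mc)                # close, because sigma/mu is small
```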

Population variance and sample variance

Script error: No such module "anchor". Script error: No such module "Labelled list hatnote". Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of nScript error: No such module "Check for unknown parameters". observations drawn without observational bias from the whole population of potential observations. In this example, the sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.

The simplest estimators for the population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance. These are consistent estimators (they converge to the value of the whole population as the number of samples increases), but can be improved. Most simply, the sample variance is computed as the sum of squared deviations about the (sample) mean, divided by $n$ as the number of samples. However, using values other than $n$ improves the estimator in various ways. Four common values for the denominator are $n$, $n - 1$, $n + 1$, and $n - 1.5$: $n$ is the simplest (the variance of the sample), $n - 1$ eliminates bias,[10] $n + 1$ minimizes mean squared error for the normal distribution,[11] and $n - 1.5$ mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution.[12]

Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of $(n - 1)/n$; correcting this factor, resulting in the sum of squared deviations about the sample mean divided by $n - 1$ instead of $n$, is called Bessel's correction.[10] The resulting estimator is unbiased and is called the (corrected) sample variance or unbiased sample variance. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise, and the variance can safely be estimated as that of the samples about the (independently known) mean.

Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than $n - 1$), and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by $n + 1$ (instead of $n - 1$ or $n$) minimizes mean squared error.[11] The resulting estimator is biased, however, and is known as the biased sample variance.
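A simulation makes the trade-off concrete. The sketch below (normal data, arbitrary settings) estimates bias and mean squared error for the denominators $n - 1$, $n$, and $n + 1$:

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma2 = 10, 1.0

# 200,000 replicated samples of size n from N(0, sigma2)
samples = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, n))
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for d in (n - 1, n, n + 1):  # candidate denominators
    est = ss / d
    bias = est.mean() - sigma2
    mse = ((est - sigma2) ** 2).mean()
    print(d, round(bias, 4), round(mse, 4))
# n - 1 is (nearly) unbiased; n + 1 attains the smallest MSE for normal data
```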

Population variance

In general, the population variance of a finite population of size $N$ with values $x_i$ is given by
$$\begin{aligned} \sigma^2 &= \frac{1}{N}\sum_{i=1}^N \left(x_i - \mu\right)^2 = \frac{1}{N}\sum_{i=1}^N \left(x_i^2 - 2\mu x_i + \mu^2\right) \\ &= \left(\frac{1}{N}\sum_{i=1}^N x_i^2\right) - 2\mu\left(\frac{1}{N}\sum_{i=1}^N x_i\right) + \mu^2 = \operatorname{E}\left[x_i^2\right] - \mu^2 \end{aligned}$$
where the population mean is $\mu = \operatorname{E}[x_i] = \frac{1}{N}\sum_{i=1}^N x_i$ and $\operatorname{E}\left[x_i^2\right] = \frac{1}{N}\sum_{i=1}^N x_i^2$, where $\operatorname{E}$ is the expectation value operator.

The population variance can also be computed using[13]
$$\sigma^2 = \frac{1}{N^2}\sum_{i<j} \left(x_i - x_j\right)^2 = \frac{1}{2N^2}\sum_{i,j=1}^N \left(x_i - x_j\right)^2.$$

(The right side has duplicate terms in the sum while the middle side has only unique terms to sum.) This is true because
$$\begin{aligned} \frac{1}{2N^2}\sum_{i,j=1}^N \left(x_i - x_j\right)^2 &= \frac{1}{2N^2}\sum_{i,j=1}^N \left(x_i^2 - 2x_i x_j + x_j^2\right) \\ &= \frac{1}{2N}\sum_{j=1}^N \left(\frac{1}{N}\sum_{i=1}^N x_i^2\right) - \left(\frac{1}{N}\sum_{i=1}^N x_i\right)\left(\frac{1}{N}\sum_{j=1}^N x_j\right) + \frac{1}{2N}\sum_{i=1}^N \left(\frac{1}{N}\sum_{j=1}^N x_j^2\right) \\ &= \frac{1}{2}\left(\sigma^2 + \mu^2\right) - \mu^2 + \frac{1}{2}\left(\sigma^2 + \mu^2\right) = \sigma^2. \end{aligned}$$
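The pairwise-difference identity is easy to verify directly; a minimal sketch on a small made-up population:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up population
N = len(x)

direct = ((x - x.mean()) ** 2).mean()                    # (1/N) sum (x_i - mu)^2
pairwise = ((x[:, None] - x[None, :]) ** 2).sum() / (2 * N**2)
print(direct, pairwise)                                  # identical up to rounding
```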

The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.

Sample variance

Script error: No such module "Labelled list hatnote".

Biased sample variance

In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population.[14] This is generally referred to as sample variance or empirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.

We take a sample with replacement of $n$ values $Y_1, \ldots, Y_n$ from the population of size $N$, where $n < N$, and estimate the variance on the basis of this sample.[15] Directly taking the variance of the sample data gives the average of the squared deviations:[16]
$$\tilde{S}_Y^2 = \frac{1}{n}\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2 = \left(\frac{1}{n}\sum_{i=1}^n Y_i^2\right) - \bar{Y}^2 = \frac{1}{n^2}\sum_{i,j\,:\,i<j} \left(Y_i - Y_j\right)^2.$$
(See the section on the population variance above for the derivation of this formula.) Here, $\bar{Y}$ denotes the sample mean:
$$\bar{Y} = \frac{1}{n}\sum_{i=1}^n Y_i.$$

Since the $Y_i$ are selected randomly, both $\bar{Y}$ and $\tilde{S}_Y^2$ are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples $\{Y_i\}$ of size $n$ from the population. For $\tilde{S}_Y^2$ this gives:
$$\begin{aligned} \operatorname{E}\left[\tilde{S}_Y^2\right] &= \operatorname{E}\left[\frac{1}{n}\sum_{i=1}^n \left(Y_i - \frac{1}{n}\sum_{j=1}^n Y_j\right)^2\right] \\ &= \frac{1}{n}\sum_{i=1}^n \operatorname{E}\left[Y_i^2 - \frac{2}{n} Y_i \sum_{j=1}^n Y_j + \frac{1}{n^2}\sum_{j=1}^n Y_j \sum_{k=1}^n Y_k\right] \\ &= \frac{1}{n}\sum_{i=1}^n \left(\operatorname{E}\left[Y_i^2\right] - \frac{2}{n}\left(\sum_{j \ne i} \operatorname{E}\left[Y_i Y_j\right] + \operatorname{E}\left[Y_i^2\right]\right) + \frac{1}{n^2}\sum_{j=1}^n \sum_{k \ne j}^n \operatorname{E}\left[Y_j Y_k\right] + \frac{1}{n^2}\sum_{j=1}^n \operatorname{E}\left[Y_j^2\right]\right) \\ &= \frac{1}{n}\sum_{i=1}^n \left(\frac{n - 2}{n}\operatorname{E}\left[Y_i^2\right] - \frac{2}{n}\sum_{j \ne i} \operatorname{E}\left[Y_i Y_j\right] + \frac{1}{n^2}\sum_{j=1}^n \sum_{k \ne j}^n \operatorname{E}\left[Y_j Y_k\right] + \frac{1}{n^2}\sum_{j=1}^n \operatorname{E}\left[Y_j^2\right]\right) \\ &= \frac{1}{n}\sum_{i=1}^n \left[\frac{n - 2}{n}\left(\sigma^2 + \mu^2\right) - \frac{2}{n}(n - 1)\mu^2 + \frac{1}{n^2} n(n - 1)\mu^2 + \frac{1}{n}\left(\sigma^2 + \mu^2\right)\right] \\ &= \frac{n - 1}{n}\sigma^2. \end{aligned}$$

Here, $\sigma^2 = \operatorname{E}\left[Y_i^2\right] - \mu^2$, derived in the section above, is the population variance, and $\operatorname{E}\left[Y_i Y_j\right] = \operatorname{E}\left[Y_i\right]\operatorname{E}\left[Y_j\right] = \mu^2$ for $i \ne j$, due to the independence of $Y_i$ and $Y_j$.

Hence $\tilde{S}_Y^2$ gives an estimate of the population variance $\sigma^2$ that is biased by a factor of $\frac{n-1}{n}$, because the expectation value of $\tilde{S}_Y^2$ is smaller than the population variance (true variance) by that factor. For this reason, $\tilde{S}_Y^2$ is referred to as the biased sample variance.

Unbiased sample variance

Correcting for this bias yields the unbiased sample variance, denoted $S^2$:
$$S^2 = \frac{n}{n - 1}\tilde{S}_Y^2 = \frac{n}{n - 1}\left[\frac{1}{n}\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2\right] = \frac{1}{n - 1}\sum_{i=1}^n \left(Y_i - \bar{Y}\right)^2.$$

Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.

The use of the term $n - 1$ is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term $n - 1.5$ yields an almost unbiased estimator.

The unbiased sample variance is a U-statistic for the function $f(y_1, y_2) = (y_1 - y_2)^2/2$, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.

Example

For a given set of twelve numbers, if this set is the whole data population for some measurement, then the variance is the population variance 932.743, computed as the sum of the squared deviations about the mean of this set, divided by 12 as the number of set members. If the set is a sample from the whole population, then the unbiased sample variance can be calculated as 1017.538, which is the sum of the squared deviations about the mean of the sample, divided by 11 instead of 12. The function VAR.S in Microsoft Excel gives the unbiased sample variance, while VAR.P gives the population variance.
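The same two computations are available in most numerical libraries. For instance, with NumPy the divisor is controlled by the ddof ("delta degrees of freedom") argument; the data below are illustrative only, not the twelve numbers referred to above:

```python
import numpy as np

data = np.array([3.0, 7.0, 7.0, 19.0])  # illustrative data only

print(np.var(data))           # population variance: divide by n (like Excel's VAR.P)
print(np.var(data, ddof=1))   # unbiased sample variance: divide by n - 1 (like VAR.S)
```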

Distribution of the sample variance

Script error: No such module "Multiple image".

Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that the $Y_i$ are independent observations from a normal distribution, Cochran's theorem shows that the unbiased sample variance $S^2$ follows a scaled chi-squared distribution (see also: asymptotic properties and an elementary proof):[17]
$$(n - 1)\frac{S^2}{\sigma^2} \sim \chi^2_{n-1},$$
where $\sigma^2$ is the population variance. As a direct consequence, it follows that
$$\operatorname{E}\left(S^2\right) = \operatorname{E}\left(\frac{\sigma^2}{n - 1}\chi^2_{n-1}\right) = \sigma^2,$$
and[18]
$$\operatorname{Var}\left[S^2\right] = \operatorname{Var}\left(\frac{\sigma^2}{n - 1}\chi^2_{n-1}\right) = \frac{\sigma^4}{(n - 1)^2}\operatorname{Var}\left(\chi^2_{n-1}\right) = \frac{2\sigma^4}{n - 1}.$$

If the $Y_i$ are independent and identically distributed, but not necessarily normally distributed, then[19]
$$\operatorname{E}\left[S^2\right] = \sigma^2, \qquad \operatorname{Var}\left[S^2\right] = \frac{\sigma^4}{n}\left(\kappa - 1 + \frac{2}{n - 1}\right) = \frac{1}{n}\left(\mu_4 - \frac{n - 3}{n - 1}\sigma^4\right),$$
where $\kappa$ is the kurtosis of the distribution and $\mu_4$ is the fourth central moment.

If the conditions of the law of large numbers hold for the squared observations, $S^2$ is a consistent estimator of $\sigma^2$. Indeed, one can see that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).[20][21][22]
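These moments of $S^2$ can be checked by simulation; a sketch for normal data with arbitrary choices of $n$ and $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(8)
n, sigma2 = 8, 2.0

# 200,000 replications of the unbiased sample variance from normal samples of size n
samples = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, n))
S2 = samples.var(axis=1, ddof=1)

print(S2.mean(), sigma2)                  # E[S^2] = sigma^2
print(S2.var(), 2 * sigma2**2 / (n - 1))  # Var[S^2] = 2 sigma^4 / (n - 1)
```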

Samuelson's inequality

Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[23] Values must lie within the limits $\bar{y} \pm \sigma_Y\sqrt{n - 1}$.

Relations with the harmonic and arithmetic means

It has been shown[24] that for a sample $\{y_i\}$ of positive real numbers,
$$\sigma_y^2 \le 2 y_{\max}\left(A - H\right),$$
where $y_{\max}$ is the maximum of the sample, $A$ is the arithmetic mean, $H$ is the harmonic mean of the sample, and $\sigma_y^2$ is the (biased) variance of the sample.

This bound has been improved, and it is known that the variance is bounded by
$$\sigma_y^2 \le \frac{y_{\max}\left(A - H\right)\left(y_{\max} - A\right)}{y_{\max} - H}, \qquad \sigma_y^2 \ge \frac{y_{\min}\left(A - H\right)\left(A - y_{\min}\right)}{H - y_{\min}},$$
where $y_{\min}$ is the minimum of the sample.[25]

Tests of equality of variances

The F-test of equality of variances and the chi-squared tests are adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult.

Several nonparametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, the Mood test, the Klotz test, and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown, but do require that the two medians are equal.

The Lehmann test is a parametric test of two variances. Several variants of this test are known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test.

Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances.

Moment of inertia

Script error: No such module "Labelled list hatnote". The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass.[26] It is because of this analogy that such things as the variance are called moments of probability distributions.[26] The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of Σ is given byScript error: No such module "Unsubst". I=n(𝟏3×3tr(Σ)Σ).

This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like
$$\Sigma = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{bmatrix}.$$

That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis, so the moment-of-inertia tensor is
$$I = n\begin{bmatrix} 0.2 & 0 & 0 \\ 0 & 10.1 & 0 \\ 0 & 0 & 10.1 \end{bmatrix}.$$

Semivariance

The semivariance is calculated in the same manner as the variance, but only those observations that fall below the mean are included in the calculation:
$$\text{Semivariance} = \frac{1}{n}\sum_{i : x_i < \mu} \left(x_i - \mu\right)^2.$$
It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not.[27]

For inequalities associated with the semivariance, see Chebyshev's inequality § Semivariances.
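A direct implementation of the definition might look as follows; a minimal sketch (note that the divisor is $n$, the full sample size, not the number of below-mean observations):

```python
import numpy as np

def semivariance(x):
    """Mean squared deviation, counting only observations below the mean."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    below = x[x < mu]
    return ((below - mu) ** 2).sum() / len(x)  # divide by n, not len(below)

x = [1.0, 2.0, 2.0, 3.0, 10.0]                 # right-skewed made-up data
print(semivariance(x), np.var(x))              # semivariance falls below the full variance
```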

Etymology

The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance:[28]

The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations $\sigma_1$ and $\sigma_2$, it is found that the distribution, when both causes act together, has a standard deviation $\sqrt{\sigma_1^2 + \sigma_2^2}$. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...

Generalizations

For complex variables

If $x$ is a scalar complex-valued random variable, with values in $\mathbb{C}$, then its variance is $\operatorname{E}\left[(x - \mu)(x - \mu)^*\right]$, where $x^*$ is the complex conjugate of $x$. This variance is a real scalar.

For vector-valued random variables

As a matrix

If $X$ is a vector-valued random variable, with values in $\mathbb{R}^n$, and thought of as a column vector, then a natural generalization of variance is $\operatorname{E}\left[(X - \mu)(X - \mu)^\mathsf{T}\right]$, where $\mu = \operatorname{E}(X)$ and $X^\mathsf{T}$ is the transpose of $X$, and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix).

If $X$ is a vector- and complex-valued random variable, with values in $\mathbb{C}^n$, then the covariance matrix is $\operatorname{E}\left[(X - \mu)(X - \mu)^\dagger\right]$, where $X^\dagger$ is the conjugate transpose of $X$. This matrix is also positive semi-definite and square.

As a scalar

Another generalization of variance for vector-valued random variables $X$, which results in a scalar value rather than in a matrix, is the generalized variance $\det(C)$, the determinant of the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.[29]

A different generalization is obtained by considering the equation for the scalar variance, $\operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2\right]$, and reinterpreting $(X - \mu)^2$ as the squared Euclidean distance between the random variable and its mean, or, simply as the scalar product of the vector $X - \mu$ with itself. This results in
$$\operatorname{E}\left[(X - \mu)^\mathsf{T}(X - \mu)\right] = \operatorname{tr}(C),$$
which is the trace of the covariance matrix.

See also

Script error: No such module "Portal". Template:Sister project

Types of variance

References

  1. [citation not preserved]
  2. [citation not preserved]
  3. [citation not preserved]
  4. [citation not preserved]
  5. Loève, M. (1977). Probability Theory. Graduate Texts in Mathematics, Volume 45, 4th edition. Springer-Verlag. p. 12.
  6. Bienaymé, I.-J. (1853). "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés". Comptes rendus de l'Académie des sciences Paris. 37: 309–317.
  7. Bienaymé, I.-J. (1867). "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés". Journal de Mathématiques Pures et Appliquées. Série 2, 12: 158–167.
  8. Cornell, J. R.; Benjamin, C. A. (1970). Probability, Statistics, and Decisions for Civil Engineers. New York: McGraw-Hill. pp. 178–179.
  9. [citation not preserved]
  10. [citation not preserved]
  11. [citation not preserved]
  12. [citation not preserved]
  13. [citation not preserved]
  14. [citation not preserved]
  15. Montgomery, D. C.; Runger, G. C. (1994). Applied Statistics and Probability for Engineers. New York: John Wiley & Sons. p. 201.
  16. [citation not preserved]
  17. [citation not preserved]
  18. [citation not preserved]
  19. Mood, A. M.; Graybill, F. A.; Boes, D. C. (1974). Introduction to the Theory of Statistics. 3rd edition. New York: McGraw-Hill. p. 229.
  20. Kenney, John F.; Keeping, E. S. (1951). Mathematics of Statistics. Part Two. 2nd edition. D. Van Nostrand Company. p. 164.
  21. Rose, Colin; Smith, Murray D. (2002). Mathematical Statistics with Mathematica. New York: Springer-Verlag. p. 264.
  22. Weisstein, Eric W. "Sample Variance Distribution". MathWorld, Wolfram.
  23. [citation not preserved]
  24. [citation not preserved]
  25. [citation not preserved]
  26. [citation not preserved]
  27. [citation not preserved]
  28. Fisher, Ronald (1918). "The Correlation Between Relatives on the Supposition of Mendelian Inheritance".
  29. [citation not preserved]

Script error: No such module "Check for unknown parameters".

Template:Theory of probability distributions Script error: No such module "Navbox".

Template:Authority control