Binomial distribution

   | pdf_image  = [[File:Binomial distribution pmf.svg|300px|Probability mass function for the binomial distribution]]
   | cdf_image  = [[File:Binomial distribution cdf.svg|300px|Cumulative distribution function for the binomial distribution]]
   | notation  = <math>\mathrm{B}(n,p)</math>
   | parameters = <math>n \in \{0, 1, 2, \ldots\}</math> &ndash; number of trials<br /><math>p \in [0,1]</math> &ndash; success probability for each trial<br /><math>q = 1 - p</math>
   | support    = <math>k \in \{0, 1, \ldots, n\}</math> &ndash; number of successes
[[File:Pascal's triangle; binomial distribution.svg|thumb|280px|Binomial distribution for {{math|p {{=}} 0.5}}<br />with {{mvar|n}} and {{mvar|k}} as in [[Pascal's triangle]]<br /><br />The probability that a ball in a [[Bean machine|Galton box]] with 8 layers ({{math|''n'' {{=}} 8}}) ends up in the central bin ({{math|''k'' {{=}} 4}}) is {{math|70/256}}.]]


In [[probability theory]] and [[statistics]], the '''binomial distribution''' with parameters {{mvar|n}} and {{mvar|p}} is the [[discrete probability distribution]] of the number of successes in a sequence of {{mvar|n}} [[statistical independence|independent]] [[experiment (probability theory)|experiment]]s, each asking a [[yes–no question]], and each with its own [[Boolean-valued function|Boolean]]-valued [[outcome (probability)|outcome]]: ''success'' (with probability {{mvar|p}}) or ''failure'' (with probability {{math|''q'' {{=}} 1 − ''p''}}). A single success/failure experiment is also called a [[Bernoulli trial]] or Bernoulli experiment, and a sequence of outcomes is called a [[Bernoulli process]]. For a single trial, that is, when {{math|''n'' {{=}} 1}}, the binomial distribution is a [[Bernoulli distribution]]. The binomial distribution is the basis for the [[binomial test]] of [[statistical significance]].<ref>{{Cite book |last=Westland |first=J. Christopher |title=Audit Analytics: Data Science for the Accounting Profession |publisher=[[Springer Publishing]] |year=2020 |isbn=978-3-030-49091-1 |location=Chicago, IL, USA |pages=53}}</ref>


The binomial distribution is frequently used to model the number of successes in a sample of size {{mvar|n}} drawn [[with replacement]] from a population of size {{mvar|N}}. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a [[hypergeometric distribution]], not a binomial one.  However, for {{mvar|N}} much larger than {{mvar|n}}, the binomial distribution remains a good approximation, and is widely used.
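The quality of this approximation can be checked numerically. The following is a minimal Python sketch (standard library only; the helper names <code>binom_pmf</code> and <code>hypergeom_pmf</code> are illustrative, not part of any particular library) comparing the two probability mass functions when {{mvar|N}} is much larger than {{mvar|n}}:
<syntaxhighlight lang="python">
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k successes in n independent trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

def hypergeom_pmf(k, N, K, n):
    # Probability of k successes when drawing n items without replacement
    # from a population of N items that contains K successes
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

N, n, p = 100_000, 10, 0.3   # population much larger than the sample
K = int(p * N)               # 30,000 "successes" in the population
for k in range(n + 1):
    # Sampling with and without replacement give nearly identical probabilities
    assert abs(binom_pmf(k, n, p) - hypergeom_pmf(k, N, K, n)) < 1e-3
</syntaxhighlight>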
=== Probability mass function ===


If the [[random variable]] {{mvar|X}} follows the binomial distribution with parameters <math>n \isin \mathbb{N}</math> (a [[natural number]]) and {{math|''p'' ∈ {{closed-closed|0, 1}}}}, we write {{math|''X'' ~ ''B''(''n'', ''p'')}}. The probability of getting exactly {{mvar|k}} successes in {{mvar|n}} independent Bernoulli trials (with the same rate {{mvar|p}}) is given by the [[probability mass function]]:
<math display="block">f(k,n,p) = \Pr(X = k) = \binom{n}{k}p^k(1-p)^{n-k}</math>
for {{math|''k'' {{=}} 0, 1, 2, ..., ''n''}}, where
<math display="block">\binom{n}{k} =\frac{n!}{k!(n-k)!}</math>
is the [[binomial coefficient]]. The formula can be understood as follows: {{math|''p''{{sup|''k''}} ''q''{{sup|''n''−''k''}}}} is the probability of obtaining the sequence of {{mvar|n}} independent Bernoulli trials in which {{mvar|k}} trials are "successes" and the remaining {{math|''n'' − ''k''}} trials are "failures". Since the trials are independent with probabilities remaining constant between them, any sequence of {{mvar|n}} trials with {{mvar|k}} successes (and {{math|''n'' − ''k''}} failures) has the same probability of being achieved (regardless of positions of successes within the sequence). There are <math display="inline">\binom{n}{k}</math> such sequences, since the binomial coefficient <math display="inline">\binom{n}{k}</math> counts the number of ways to choose the positions of the {{mvar|k}} successes among the {{mvar|n}} trials. The binomial distribution is concerned with the probability of obtaining ''any'' of these sequences, meaning the probability of obtaining one of them ({{math|''p''{{sup|''k''}} ''q''{{sup|''n''−''k''}}}}) must be added <math display="inline">\binom{n}{k}</math> times, hence <math display="inline">\Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}</math>.


Reference tables for binomial distribution probabilities are usually filled in only up to {{math|''n'' / 2}} values, because for {{math|''k'' > ''n''/2}} the probability can be calculated from its complement as
<math display="block">f(k,n,p)=f(n-k,n,1-p). </math>


Viewed as a function of {{mvar|k}} (with {{mvar|n}} and {{mvar|p}} fixed), the expression {{math|''f''(''k'', ''n'', ''p'')}} attains its maximum at some value of {{mvar|k}}. This maximizing value can be found by calculating
<math display="block"> \frac{f(k+1,n,p)}{f(k,n,p)}=\frac{(n-k)p}{(k+1)(1-p)} </math>
and comparing it to 1. There is always an integer {{mvar|M}} that satisfies<ref>{{cite book |last=Feller |first=William |author-link=William Feller |url=https://archive.org/details/introductiontopr01wfel |title=An Introduction to Probability Theory and Its Applications |publisher=Wiley |year=1968 |edition=Third |location=New York |page=[https://archive.org/details/introductiontopr01wfel/page/n167 151] (theorem in section VI.3) |url-access=limited}}</ref>
<math display="block">(n+1)p-1 \leq M < (n+1)p.</math>


{{math|''f''(''k'', ''n'', ''p'')}} is monotone increasing for {{math|''k'' < ''M''}} and monotone decreasing for {{math|''k'' > ''M''}}, with the exception of the case where {{math|(''n'' + 1)''p''}} is an integer. In this case, there are two values for which {{mvar|f}} is maximal: {{math|(''n'' + 1)''p''}} and {{math|(''n'' + 1)''p'' − 1}}. {{mvar|M}} is the ''most probable'' outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the [[Mode (statistics)|mode]].


Equivalently, {{math|''M'' − ''p'' < ''np'' ≤ ''M'' + 1 − ''p''}}. Taking the [[Floor and ceiling functions|floor function]], we obtain {{math|''M'' {{=}} floor(''np'')}}.{{NoteTag|Except the trivial case {{math|''p'' {{=}} 0}}, which must be checked separately.}}

Suppose a [[fair coin|biased coin]] comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is
<math display="block">f(4,6,0.3) = \binom{6}{4} 0.3^4 (1-0.3)^{6-4}= 0.059535.</math>
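This value can be reproduced with a few lines of Python (a minimal sketch using only the standard library; the helper name <code>binom_pmf</code> is illustrative):
<syntaxhighlight lang="python">
from math import comb

def binom_pmf(k, n, p):
    # Pr(X = k) for X ~ B(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Exactly 4 heads in 6 tosses of a coin with Pr(heads) = 0.3
print(binom_pmf(4, 6, 0.3))   # 0.059535 (up to floating-point rounding)
</syntaxhighlight>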


=== Cumulative distribution function ===


The [[cumulative distribution function]] can be expressed as:
<math display="block">F(k;n,p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} {n\choose i}p^i(1-p)^{n-i},</math>
where <math>\lfloor k\rfloor</math> is the "floor" under {{mvar|k}}; that is, the [[floor and ceiling functions|greatest integer]] less than or equal to {{mvar|k}}.


It can also be represented in terms of the [[regularized incomplete beta function]], as follows:<ref>{{cite book |last=Wadsworth |first=G. P. |title=Introduction to Probability and Random Variables |year=1960 |publisher=McGraw-Hill |location=New York  |page=[https://archive.org/details/introductiontopr0000wads/page/52 52] |url=https://archive.org/details/introductiontopr0000wads |url-access=registration }}</ref>
<math display="block">\begin{align}
F(k;n,p) & = \Pr(X \le k) \\
&= I_{1-p}(n-k, k+1) \\
&= (n-k){n \choose k}\int_0^{1-p} t^{n-k-1} (1-t)^k \, dt,
\end{align}</math>
which is equivalent to the  [[cumulative distribution function]]s of the [[beta distribution]] and of the [[F-distribution|{{mvar|F}}-distribution]]:<ref>{{cite journal |last=Jowett |first=G. H. |year=1963 |title=The Relationship Between the Binomial and F Distributions |journal=Journal of the Royal Statistical Society, Series D |volume=13 |issue=1 |pages=55–57 |doi=10.2307/2986663 |jstor=2986663 }}</ref>
<math display="block">F(k;n,p) = F_{\text{beta-distribution}}\left(x=1-p;\alpha=n-k,\beta=k+1\right)</math>
<math display="block">F(k;n,p) = F_{F\text{-distribution}}\left(x=\frac{1-p}{p}\frac{k+1}{n-k};d_1=2(n-k),d_2=2(k+1)\right).</math>
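As a numerical illustration, the summation form of the cumulative distribution function can be checked against the regularized incomplete beta function. The sketch below uses SciPy's <code>betainc</code> (assumed to be available) together with an illustrative helper:
<syntaxhighlight lang="python">
from math import comb, floor
from scipy.special import betainc   # regularized incomplete beta function I_x(a, b)

def binom_cdf(k, n, p):
    # Pr(X <= k) by direct summation of the probability mass function
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(floor(k) + 1))

n, p, k = 20, 0.35, 7
assert abs(binom_cdf(k, n, p) - betainc(n - k, k + 1, 1 - p)) < 1e-12
</syntaxhighlight>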


Some closed-form bounds for the cumulative distribution function are given [[#Tail bounds|below]].
=== Expected value and variance ===


If {{math|''X'' ~ ''B''(''n'', ''p'')}}, that is, {{mvar|X}} is a binomially distributed random variable, {{mvar|n}} being the total number of experiments and {{mvar|p}} the probability of each experiment yielding a successful result, then the [[expected value]] of {{mvar|X}} is:<ref>See [https://proofwiki.org/wiki/Expectation_of_Binomial_Distribution Proof Wiki]</ref>
<math display="block"> \operatorname{E}[X] = np.</math>


This follows from the linearity of the expected value along with the fact that {{mvar|X}} is the sum of {{mvar|n}} identical Bernoulli random variables, each with expected value {{mvar|p}}.  In other words, if <math>X_1, \ldots, X_n</math> are identical (and independent) Bernoulli random variables with parameter {{mvar|p}}, then {{math|1=''X'' = ''X''{{sub|1}} + ... + ''X''{{sub|''n''}}}} and
<math display="block">\operatorname{E}[X] = \operatorname{E}[X_1 + \cdots + X_n] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = p + \cdots + p = np.</math>


The [[variance]] is:
<math display="block"> \operatorname{Var}(X) = npq = np(1 - p).</math>


This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.
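Both formulas can be verified by summing directly over the probability mass function, as in this short Python sketch (standard library only):
<syntaxhighlight lang="python">
from math import comb

n, p = 12, 0.25
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

mean = sum(k * w for k, w in enumerate(pmf))
variance = sum((k - mean) ** 2 * w for k, w in enumerate(pmf))

assert abs(mean - n * p) < 1e-12                 # E[X] = np = 3
assert abs(variance - n * p * (1 - p)) < 1e-12   # Var(X) = np(1 - p) = 2.25
</syntaxhighlight>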
<!-- Please stop changing the equation \mu_1 = 0, it is correct. The first central moment is not the mean. -->
The first 6 [[central moment]]s, defined as <math> \mu _{c}=\operatorname {E} \left[(X-\operatorname {E} [X])^{c}\right] </math>, are given by  
<math display="block">\begin{align}
\mu_1 &= 0, \\
\mu_2 &= np \left(1-p\right), \\
\mu_3 &= np \left(1-p\right) \left(1-2p\right), \\
\mu_4 &= np \left(1-p\right) \left[1 + \left(3n-6\right) p \left(1-p\right)\right],\\
\mu_5 &= np \left(1-p\right) \left(1-2p\right) \left[1 + \left(10n-12\right) p \left(1-p\right)\right],\\
\mu_6 &= np \left(1-p\right) \left[1 - 30p\left(1-p\right)\left[1-4p(1-p)\right] + 5np \left(1-p\right)\left[5 - 26p\left(1-p\right)\right] + 15n^2 p^2 \left(1-p\right)^2\right].
\end{align}</math>


The non-central moments satisfy
<math display="block">\begin{align}
\operatorname {E}[X] &= np, \\
\operatorname {E}[X^2] &= np(1-p)+n^2p^2,
\end{align}</math>
and in general<ref name="Andreas2008">
{{citation 
  |last1=Knoblauch |first1=Andreas 
  |url-access=subscription 
  }}</ref>
<math display="block">
\operatorname {E}[X^c] = \sum_{k=0}^c \left\{ {c \atop k} \right\} n^{\underline{k}} p^k,
</math>
where <math display="inline"> \left\{{c\atop k}\right\}</math> are the [[Stirling numbers of the second kind]], and <math>n^{\underline{k}} = n(n-1)\cdots(n-k+1)</math> is the <math>k</math>-th [[Falling and rising factorials|falling power]] of <math>n</math>.
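For illustration, the Stirling-number formula for the raw moments can be checked against direct summation over the probability mass function. The following Python sketch uses illustrative helper functions and the standard recurrence for the Stirling numbers of the second kind:
<syntaxhighlight lang="python">
from math import comb

def stirling2(c, k):
    # Stirling number of the second kind S(c, k), via the recurrence
    # S(c, k) = k*S(c-1, k) + S(c-1, k-1)
    if k == 0:
        return 1 if c == 0 else 0
    if k > c:
        return 0
    return k * stirling2(c - 1, k) + stirling2(c - 1, k - 1)

def falling_power(n, k):
    # k-th falling power: n (n-1) ... (n-k+1)
    result = 1
    for i in range(k):
        result *= n - i
    return result

n, p, c = 15, 0.4, 4
direct = sum(k**c * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
formula = sum(stirling2(c, k) * falling_power(n, k) * p**k for k in range(c + 1))
assert abs(direct - formula) < 1e-6
</syntaxhighlight>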
A simple bound
<ref>{{Citation |last1=D. Ahle |first1=Thomas |title=Sharp and Simple Bounds for the raw Moments of the Binomial and Poisson Distributions
|doi=10.1016/j.spl.2021.109306
|journal=Statistics & Probability Letters
|article-number=109306 |arxiv=2103.17027 }}</ref> follows by bounding the Binomial moments via the [[Poisson distribution#Higher moments|higher Poisson moments]]:  
<math display="block">
\operatorname {E}[X^c] \le
\left[\frac{c}{\ln\left(1+\frac{c}{np}\right)}\right]^c \le (np)^c \exp\left(\frac{c^2}{2np}\right).
</math>
This shows that if <math>c=O(\sqrt{np})</math>, then <math>\operatorname {E}[X^c]</math> is at most a constant factor away from <math>\operatorname {E}[X]^c</math>.
 
The [[moment-generating function]] is <math>M_X(t)=\mathbb E[e^{tX}] = (1-p+p e^t)^n</math>.
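This closed form can be checked against the defining expectation by direct summation, as in this minimal Python sketch:
<syntaxhighlight lang="python">
from math import comb, exp

n, p, t = 9, 0.6, 0.3
expectation = sum(exp(t * k) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
closed_form = (1 - p + p * exp(t)) ** n   # (1 - p + p e^t)^n
assert abs(expectation - closed_form) < 1e-9
</syntaxhighlight>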


=== Mode ===


Usually the [[mode (statistics)|mode]] of a binomial {{math|''B''(''n'', ''p'')}} distribution is equal to <math>\lfloor (n+1)p\rfloor</math>, where <math>\lfloor\cdot\rfloor</math> is the [[floor function]]. However, when {{math|(''n'' + 1)''p''}} is an integer and {{mvar|p}} is neither 0 nor 1, then the distribution has two modes: {{math|(''n'' + 1)''p''}} and {{math|(''n'' + 1)''p'' − 1}}. When {{mvar|p}} is equal to 0 or 1, the mode will be 0 and {{mvar|n}} correspondingly. These cases can be summarized as follows:
<math display="block">\text{mode} =
      \begin{cases}
        \lfloor (n+1)\,p\rfloor & \text{if }(n+1)p\text{ is 0 or a noninteger}, \\
        (n+1)\,p\ \text{ and }\ (n+1)\,p - 1 & \text{if }(n+1)p\in\{1,\dots,n\}, \\
        n & \text{if }(n+1)p = n + 1.
      \end{cases}</math>
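A quick numerical check of the first case, with {{math|(''n'' + 1)''p''}} chosen to be a noninteger (a Python sketch with an illustrative helper name):
<syntaxhighlight lang="python">
from math import comb, floor

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 11, 0.4                  # (n + 1) p = 4.8, a noninteger
mode = max(range(n + 1), key=lambda k: binom_pmf(k, n, p))
assert mode == floor((n + 1) * p) == 4
</syntaxhighlight>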


'''Proof:''' Let
<math display="block">f(k)=\binom nk p^k q^{n-k}.</math>


For <math>p=0</math> only <math>f(0)</math> has a nonzero value with <math>f(0)=1</math>. For <math>p=1</math> we find <math>f(n)=1</math> and <math>f(k)=0</math> for <math>k\neq n</math>. This proves that the mode is 0 for <math>p=0</math> and <math>n</math> for <math>p=1</math>.


Let <math>0 < p < 1</math>. We find
<math display="block">\frac{f(k+1)}{f(k)} = \frac{(n-k)p}{(k+1)(1-p)}.</math>


From this follows
<math display="block">\begin{align}
k > (n+1)p-1 \Rightarrow f(k+1) < f(k) \\
k = (n+1)p-1 \Rightarrow f(k+1) = f(k) \\
k < (n+1)p-1 \Rightarrow f(k+1) > f(k)
\end{align}</math>
So {{mvar|f}} is increasing for {{math|''k'' < (''n'' + 1)''p'' − 1}} and decreasing for {{math|''k'' > (''n'' + 1)''p'' − 1}}; the maximum is therefore attained at {{math|''M'' {{=}} floor((''n'' + 1)''p'')}} when {{math|(''n'' + 1)''p''}} is not an integer, and at both {{math|(''n'' + 1)''p''}} and {{math|(''n'' + 1)''p'' − 1}} when it is.

=== Median ===
In general, there is no single formula to find the [[median]] for a binomial distribution, and it may even be non-unique. However, several special results have been established:
* If {{math|''np''}} is an integer, then the mean, median, and mode coincide and equal {{math|''np''}}.<ref>{{cite journal|last=Neumann|first=P.|year=1966|title=Über den Median der Binomial- und Poissonverteilung|journal=Wissenschaftliche Zeitschrift der Technischen Universität Dresden|volume=19|pages=29–33|language=de}}</ref><ref>Lord, Nick. (July 2010). "Binomial averages when the mean is an integer", [[The Mathematical Gazette]] 94, 331–332.</ref>
* Any median {{mvar|m}} must lie within the interval <math>\lfloor np \rfloor\leq m \leq \lceil np \rceil</math>.<ref name="KaasBuhrman">{{cite journal|first1=R.|last1=Kaas|first2=J.M.|last2=Buhrman|title=Mean, Median and Mode in Binomial Distributions|journal=Statistica Neerlandica|year=1980|volume=34|issue=1|pages=13–18|doi=10.1111/j.1467-9574.1980.tb00681.x}}</ref>
* A median {{mvar|m}} cannot lie too far away from the mean: <math>|m-np|\leq \min\{{\ln2}, \max\{p,1-p\}\}</math>.<ref name="Hamza">
{{cite journal
  | last1 = Hamza | first1 = K.
  | year = 1995
}}</ref>
* The median is unique and equal to {{math|1=''m'' = [[Rounding|round]](''np'')}} when {{math|1={{abs|''m'' − ''np''}} ≤ min{{brace|''p'', 1 − ''p''}}}} (except for the case when {{math|1=''p'' = 1/2}} and {{mvar|n}} is odd).<ref name="KaasBuhrman"/>
* When {{mvar|p}} is a rational number (with the exception of {{math|1=''p'' = 1/2}} and {{mvar|n}} odd), the median is unique.<ref name="Nowakowski">
{{cite journal
  | last1 = Nowakowski | first1 = Sz.
  | s2cid = 215238991
}}</ref>
* When <math display="inline">p = \tfrac{1}{2} </math> and {{mvar|n}} is odd, any number {{mvar|m}} in the interval <math display="inline"> \frac{1}{2} \left(n-1\right) \leq m \leq  \frac{1}{2} \left(n+1\right)</math>  is a median of the binomial distribution. If <math display="inline">p = \tfrac{1}{2}  </math> and {{mvar|n}} is even, then  <math display="inline">m = \tfrac{n}{2} </math> is the unique median.
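Two of the results above (the interval and the distance bound) can be checked numerically for a particular median, for example with the following Python sketch (illustrative helper names; the median chosen is the smallest {{mvar|m}} with {{math|Pr(''X'' ≤ ''m'') ≥ 1/2}}, which is always a valid median):
<syntaxhighlight lang="python">
from math import comb, log, floor, ceil

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def binom_median(n, p):
    # Smallest m with Pr(X <= m) >= 1/2
    return next(m for m in range(n + 1) if binom_cdf(m, n, p) >= 0.5)

n, p = 13, 0.3
m = binom_median(n, p)
assert floor(n * p) <= m <= ceil(n * p)              # m lies in [floor(np), ceil(np)]
assert abs(m - n * p) <= min(log(2), max(p, 1 - p))  # |m - np| <= min{ln 2, max(p, 1 - p)}
</syntaxhighlight>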


=== Tail bounds ===
For {{math|''k'' ≤ ''np''}}, upper bounds can be derived for the lower tail of the cumulative distribution function <math>F(k;n,p) = \Pr(X \le k)</math>, the probability that there are at most {{mvar|k}} successes. Since <math>\Pr(X \ge k) = F(n-k;n,1-p) </math>, these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for {{math|''k'' ≥ ''np''}}.


[[Hoeffding's inequality]] yields the simple bound
<math display="block"> F(k;n,p) \leq \exp\left(-2 n\left(p-\frac{k}{n}\right)^2\right), \!</math>
which is however not very tight. In particular, for {{math|1=''p'' = 1}}, we have that {{math|1=''F''(''k''; ''n'', ''p'') = 0}} (for fixed {{mvar|k}}, {{mvar|n}} with {{math|''k'' < ''n''}}), but Hoeffding's bound evaluates to a positive constant.


A sharper bound can be obtained from the [[Chernoff bound]]:<ref name="ag">{{cite journal |first1=R. |last1=Arratia |first2=L. |last2=Gordon |title=Tutorial on large deviations for the binomial distribution |journal=Bulletin of Mathematical Biology |volume=51 |issue=1 |year=1989 |pages=125–131 |doi=10.1007/BF02458840 |pmid=2706397 |s2cid=189884382 }}</ref>
<math display="block"> F(k;n,p) \leq \exp\left(-n D{\left(\frac{k}{n}\parallel p\right)}\right)  </math>
where {{math|''D''(''a'' ∥ ''p'')}} is the [[Kullback–Leibler divergence|relative entropy (or Kullback-Leibler divergence)]] between an {{mvar|a}}-coin and a {{mvar|p}}-coin (that is, between the {{math|Bernoulli(''a'')}} and {{math|Bernoulli(''p'')}} distribution):
<math display="block"> D(a\parallel p)=(a)\ln\frac{a}{p}+(1-a)\ln\frac{1-a}{1-p}. \!</math>


Asymptotically, this bound is reasonably tight; see Arratia and Gordon (1989) for details.<ref name="ag"/>


One can also obtain ''lower'' bounds on the tail {{math|''F''(''k''; ''n'', ''p'')}}, known as anti-concentration bounds. By approximating the binomial coefficient with [[Stirling's approximation|Stirling's formula]] it can be shown that<ref>{{cite book |author-first1=Robert B. |author-last1=Ash |title=Information Theory |url=https://archive.org/details/informationtheor00ashr |url-access=limited |date=1990 |publisher=Dover Publications |page=[https://archive.org/details/informationtheor00ashr/page/n81 115]|isbn=9780486665214 }}</ref>
<math display="block"> F(k;n,p) \geq \frac{1}{\sqrt{8n\tfrac{k}{n}(1-\tfrac{k}{n})}} \exp\left(-n D{\left(\frac{k}{n}\parallel p\right)}\right),</math>
which implies the simpler but looser bound
<math display="block"> F(k;n,p) \geq \frac1{\sqrt{2n}} \exp\left(-nD\left(\frac{k}{n}\parallel p\right)\right).</math>


For {{math|1=''p'' = 1/2}} and {{math|''k'' ≥ 3''n''/8}} for even {{mvar|n}}, it is possible to make the denominator constant:<ref>{{cite web |last1=Matoušek |first1=J. |last2=Vondrak |first2=J. |title=The Probabilistic Method |work=lecture notes |url=https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f09/www/handouts/matousek-vondrak-prob-ln.pdf |archive-url=https://ghostarchive.org/archive/20221009/https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f09/www/handouts/matousek-vondrak-prob-ln.pdf |archive-date=2022-10-09 |url-status=live }}</ref>
<math display="block"> F(k;n,\tfrac{1}{2}) \geq \frac{1}{15} \exp\left(- 16n \left(\frac{1}{2} -\frac{k}{n}\right)^2\right). \!</math>
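The ordering of these bounds can be illustrated numerically; the sketch below (Python, standard library only, illustrative helper names) compares the exact lower-tail probability with the Hoeffding and Chernoff upper bounds and with the Stirling-based lower bound for one choice of {{mvar|n}}, {{mvar|p}} and {{mvar|k}}:
<syntaxhighlight lang="python">
from math import comb, exp, log, sqrt

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def kl(a, p):
    # Relative entropy D(a || p) between Bernoulli(a) and Bernoulli(p)
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

n, p, k = 100, 0.5, 40               # lower-tail regime, k <= np
exact = binom_cdf(k, n, p)
hoeffding = exp(-2 * n * (p - k / n) ** 2)
chernoff = exp(-n * kl(k / n, p))
lower_bound = exp(-n * kl(k / n, p)) / sqrt(2 * n)

# lower bound <= exact probability <= Chernoff bound <= Hoeffding bound
assert lower_bound <= exact <= chernoff <= hoeffding
</syntaxhighlight>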


== Statistical inference ==
=== Estimation of parameters ===
{{see also|Beta distribution#Bayesian inference}}


When {{mvar|n}} is known, the parameter {{mvar|p}} can be estimated using the proportion of successes:
<math display="block"> \widehat{p} = \frac{x}{n}.</math>
This estimator is found using [[maximum likelihood estimator|maximum likelihood estimation]] and also the [[method of moments (statistics)|method of moments]]. It is [[Bias of an estimator|unbiased]] and has uniformly [[Minimum-variance unbiased estimator|minimum variance]], as can be proven using the [[Lehmann–Scheffé theorem]], since it is based on a [[minimal sufficient]] and [[Completeness (statistics)|complete]] statistic (that is, {{mvar|x}}). It is also [[Consistent estimator|consistent]] both in probability and in [[Mean squared error|MSE]]. This statistic is [[Asymptotic distribution|asymptotically]] [[normal distribution|normal]] thanks to the [[central limit theorem]], because it is the same as taking the [[arithmetic mean|mean]] over Bernoulli samples. It has a variance of <math> \operatorname{Var}(\hat{p}) = \frac{p(1-p)}{n}</math>, a property which is used in various ways, such as in [[Binomial_proportion_confidence_interval#Wald_interval|Wald's confidence intervals]].


A closed-form [[Bayes estimator]] for {{mvar|p}} also exists when using the [[Beta distribution]] as a [[Conjugate prior|conjugate]] [[prior distribution]]. When using a general <math>\operatorname{Beta}(\alpha, \beta)</math> as a prior, the [[Bayes estimator#Posterior mean|posterior mean]] estimator is:
<math display="block"> \widehat{p}_b = \frac{x+\alpha}{n+\alpha+\beta}.</math>
The Bayes estimator is [[Asymptotic efficiency (Bayes)|asymptotically efficient]] and as the sample size approaches infinity ({{math|''n'' → ∞}}), it approaches the [[Maximum likelihood estimation|MLE]] solution.<ref>{{Cite journal |last=Wilcox |first=Rand R. |date=1979 |title=Estimating the Parameters of the Beta-Binomial Distribution |url=http://journals.sagepub.com/doi/10.1177/001316447903900302 |journal=Educational and Psychological Measurement |language=en |volume=39 |issue=3 |pages=527–535 |doi=10.1177/001316447903900302 |s2cid=121331083 |issn=0013-1644|url-access=subscription }}</ref> The Bayes estimator is [[Bias of an estimator|biased]] (how much depends on the priors), [[Bayes estimator#Admissibility|admissible]] and [[Consistent estimator|consistent]] in probability. The Bayes estimator with a Beta prior can also be used with [[Thompson sampling]].


For the special case of using the [[standard uniform distribution]] as a [[non-informative prior]], <math>\operatorname{Beta}(\alpha{=}1,\, \beta{=}1) = U(0,1)</math>, the posterior mean estimator becomes:
<math display="block"> \widehat{p}_b = \frac{x+1}{n+2}.</math>
(A [[Bayes estimator#Posterior mode|posterior mode]] should just lead to the standard estimator.) This method is called the [[rule of succession]], which was introduced in the 18th century by [[Pierre-Simon Laplace]].


When relying on [[Jeffreys prior]], the prior is <math display="inline">\operatorname{Beta}(\alpha{=}\tfrac{1}{2}, \, \beta{=}\tfrac{1}{2})</math>,<ref>Marko Lalovic (https://stats.stackexchange.com/users/105848/marko-lalovic), Jeffreys prior for binomial likelihood, URL (version: 2019-03-04): https://stats.stackexchange.com/q/275608</ref> which leads to the estimator:
<math display="block"> \widehat{p}_{\mathrm{Jeffreys}} = \frac{x+\frac{1}{2}}{n+1}.</math>


When estimating {{mvar|p}} with very rare events and a small {{mvar|n}} (for example, if {{math|1=''x'' = 0}}), then using the standard estimator leads to <math> \widehat{p} = 0,</math> which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators.<ref>{{cite journal |last=Razzaghi |first=Mehdi |title=On the estimation of binomial success probability with zero occurrence in sample |journal=Journal of Modern Applied Statistical Methods |volume=1 |issue=2 |year=2002 |pages=326–332 |article-number=jmasm.eP1673 |doi=10.22237/jmasm/1036110000 |doi-access=free }}</ref> One way is to use the Bayes estimator <math> \widehat{p}_b</math>, leading to:
<math display="block"> \widehat{p}_b = \frac{1}{n+2}.</math>
Another method is to use the upper bound of the [[confidence interval]] obtained using the [[Rule of three (statistics)|rule of three]]:
<math display="block"> \widehat{p}_{\text{rule of 3}} = \frac{3}{n}.</math>
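The estimators above can be compared on a small example; the following Python sketch (values chosen only for illustration) computes each of them for {{math|1=''x'' = 3}} successes in {{math|1=''n'' = 20}} trials:
<syntaxhighlight lang="python">
x, n = 3, 20   # observed successes and number of trials

p_mle = x / n                      # maximum-likelihood / method-of-moments estimate: 0.15
p_laplace = (x + 1) / (n + 2)      # posterior mean under the uniform Beta(1, 1) prior: ~0.182
p_jeffreys = (x + 0.5) / (n + 1)   # posterior mean under the Jeffreys Beta(1/2, 1/2) prior: ~0.167
p_rule_of_three = 3 / n            # "rule of three" upper bound, intended for the case x = 0

print(p_mle, p_laplace, p_jeffreys, p_rule_of_three)
</syntaxhighlight>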


=== Confidence intervals for the parameter p ===
{{see also|Z-test#Comparing the Proportions of Two Binomials}}


Even for quite large values of {{mvar|n}}, the actual distribution of the mean is significantly nonnormal.<ref name=Brown2001>{{Citation |first1=Lawrence D. |last1=Brown |first2=T. Tony |last2=Cai |first3=Anirban |last3=DasGupta |year=2001 |title = Interval Estimation for a Binomial Proportion |url=http://www-stat.wharton.upenn.edu/~tcai/paper/html/Binomial-StatSci.html |journal=Statistical Science |volume=16 |issue=2 |pages=101–133 |access-date = 2015-01-05 |doi=10.1214/ss/1009213286|citeseerx=10.1.1.323.7752 }}</ref> Because of this problem several methods to estimate confidence intervals have been proposed.


In the equations for confidence intervals below, the variables have the following meaning:
* {{math|''n''{{sub|1}}}} is the number of successes out of {{mvar|n}}, the total number of trials
* <math> \widehat{p\,} = \frac{n_1}{n}</math> is the proportion of successes
* <math>z</math> is the <math>1 - \tfrac{1}{2}\alpha</math> [[quantile]] of a [[standard normal distribution]] (that is, [[probit]]) corresponding to the target error rate <math>\alpha</math>. For example, for a 95% [[confidence level]] the error <math>\alpha=0.05</math>, so <math>1 - \tfrac{1}{2}\alpha=0.975</math> and <math>z=1.96</math>.


==== Wald method ====
{{Main|Binomial proportion confidence interval#Wald interval}}
<math display="block"> \widehat{p\,} \pm z \sqrt{ \frac{ \widehat{p\,} ( 1 -\widehat{p\,} )}{ n } } .</math>


A [[continuity correction]] of {{math|0.5 / ''n''}} may be added.{{clarify|date=July 2012}}
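A minimal Python sketch of this interval (with {{math|1=''z'' = 1.96}} for a 95% confidence level, as above; the variable names are illustrative):
<syntaxhighlight lang="python">
from math import sqrt

n1, n, z = 36, 100, 1.96          # successes, trials, normal quantile for 95% confidence
p_hat = n1 / n

half_width = z * sqrt(p_hat * (1 - p_hat) / n)
print(p_hat - half_width, p_hat + half_width)   # roughly (0.266, 0.454)
</syntaxhighlight>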


==== Agresti–Coull method ====
{{Main|Binomial proportion confidence interval#Agresti–Coull interval}}
<ref name=Agresti1988>{{Citation |last1=Agresti |first1=Alan |last2=Coull |first2=Brent A. |date=May 1998 |title=Approximate is better than 'exact' for interval estimation of binomial proportions |url = http://www.stat.ufl.edu/~aa/articles/agresti_coull_1998.pdf |journal=The American Statistician |volume=52 |issue=2 |pages=119–126 |access-date=2015-01-05 |doi=10.2307/2685469 |jstor=2685469 }}</ref>
<math display="block"> \tilde{p} \pm z \sqrt{ \frac{ \tilde{p} ( 1 - \tilde{p} )}{ n + z^2 } }</math>


Here the estimate of {{mvar|p}} is modified to
<math display="block"> \tilde{p}= \frac{ n_1 + \frac{1}{2} z^2}{ n + z^2 } </math>


This method works well for {{math|''n'' > 10}} and {{math|''n''{{sub|1}} ≠ 0, ''n''}}.<ref>{{cite web|last1=Gulotta|first1=Joseph|title=Agresti-Coull Interval Method|url=https://pellucid.atlassian.net/wiki/spaces/PEL/pages/25722894/Agresti-Coull+Interval+Method#:~:text=The%20Agresti%2DCoull%20Interval%20Method,%2C%20or%20per%20100%2C000%2C%20etc|website=pellucid.atlassian.net|access-date=18 May 2021}}</ref> See here for <math>n\leq 10</math>.<ref>{{cite web|title=Confidence intervals|url=https://www.itl.nist.gov/div898/handbook/prc/section2/prc241.htm|website=itl.nist.gov|access-date=18 May 2021}}</ref> For {{math|1=''n''{{sub|1}} = 0, ''n''}} use the Wilson (score) method below.
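A minimal Python sketch of the Agresti–Coull interval for the same illustrative data as in the Wald sketch above:
<syntaxhighlight lang="python">
from math import sqrt

n1, n, z = 36, 100, 1.96
p_tilde = (n1 + z**2 / 2) / (n + z**2)          # adjusted point estimate
half_width = z * sqrt(p_tilde * (1 - p_tilde) / (n + z**2))
print(p_tilde - half_width, p_tilde + half_width)   # roughly (0.273, 0.458)
</syntaxhighlight>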


==== Arcsine method ====
{{Main|Binomial proportion confidence interval#Arcsine transformation}}
<ref name="Pires00">{{cite book |last=Pires |first=M. A. |chapter-url=https://www.math.tecnico.ulisboa.pt/~apires/PDFs/AP_COMPSTAT02.pdf |archive-url=https://ghostarchive.org/archive/20221009/https://www.math.tecnico.ulisboa.pt/~apires/PDFs/AP_COMPSTAT02.pdf |archive-date=2022-10-09 |url-status=live |chapter=Confidence intervals for a binomial proportion: comparison of methods and software evaluation |editor-last=Klinke |editor-first=S. |editor2-last=Ahrend |editor2-first=P. |editor3-last=Richter |editor3-first=L. |title=Proceedings of the Conference CompStat 2002 |others=Short Communications and Posters |year=2002 }}</ref>
<math display="block">\sin^2 \left(\arcsin \left(\sqrt{\hat{p}}\right) \pm \frac{z}{2\sqrt{n}} \right).</math>
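A minimal Python sketch of the arcsine interval for the same illustrative data:
<syntaxhighlight lang="python">
from math import asin, sin, sqrt

n1, n, z = 36, 100, 1.96
p_hat = n1 / n

centre = asin(sqrt(p_hat))
lower = sin(centre - z / (2 * sqrt(n))) ** 2
upper = sin(centre + z / (2 * sqrt(n))) ** 2
print(lower, upper)   # roughly (0.269, 0.456)
</syntaxhighlight>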


==== Wilson (score) method ====


The notation in the formula below differs from the previous formulas in two respects:<ref name="Wilson1927">{{Citation |last = Wilson |first=Edwin B. |date = June 1927 |title = Probable inference, the law of succession, and statistical inference |url = http://psych.stanford.edu/~jlm/pdfs/Wison27SingleProportion.pdf |journal = Journal of the American Statistical Association |volume=22 |issue=158 |pages=209–212 |access-date= 2015-01-05 |doi = 10.2307/2276774 |url-status=dead |archive-url = https://web.archive.org/web/20150113082307/http://psych.stanford.edu/~jlm/pdfs/Wison27SingleProportion.pdf |archive-date = 2015-01-13 |jstor = 2276774 }}</ref>
* Firstly, {{math|''z''{{sub|''x''}}}} has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the {{mvar|x}}th quantile of the standard normal distribution', rather than being a shorthand for 'the {{math|(1 − ''x'')}}th quantile'.
* Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use <math>z = z_{\alpha / 2}</math> to get the lower bound, or use <math>z = z_{1 - \alpha/2}</math> to get the upper bound. For example: for a 95% confidence level the error <math>\alpha=0.05</math>, so one gets the lower bound by using <math>z = z_{\alpha/2} = z_{0.025} = - 1.96</math>, and one gets the upper bound by using <math>z = z_{1 - \alpha/2} = z_{0.975} = 1.96</math>.
<math display="block">\frac{
     \hat{p} + \frac{z^2}{2n} + z
     \sqrt{
         \frac{\hat{p} \left(1 - \hat{p}\right)}{n} +
         \frac{z^2}{4 n^2}
     }
}{
     1 + \frac{z^2}{n}
}</math>
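A minimal Python sketch of the Wilson (score) bounds for the same illustrative data, using {{math|1=''z'' = −1.96}} for the lower bound and {{math|1=''z'' = 1.96}} for the upper bound, as described above:
<syntaxhighlight lang="python">
from math import sqrt

n1, n = 36, 100
p_hat = n1 / n

def wilson_bound(z):
    # z < 0 gives the lower bound, z > 0 the upper bound
    centre = p_hat + z**2 / (2 * n)
    spread = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre + spread) / (1 + z**2 / n)

print(wilson_bound(-1.96), wilson_bound(1.96))   # roughly (0.273, 0.458)
</syntaxhighlight>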


== Related distributions ==

=== Sums of binomials ===
If {{math|''X'' ~ B(''n'', ''p'')}} and {{math|''Y'' ~ B(''m'', ''p'')}} are independent binomial variables with the same probability {{mvar|p}}, then {{math|''X'' + ''Y''}} is again a binomial variable; its distribution is {{math|1=''Z'' = ''X'' + ''Y'' ~ B(''n'' + ''m'', ''p'')}}:<ref>{{cite book |last1=Dekking |first1=F.M. |last2=Kraaikamp |first2=C. |last3=Lopohaa |first3=H.P. |last4=Meester |first4=L.E. |title=A Modern Introduction of Probability and Statistics |date=2005 |publisher=Springer-Verlag London |isbn=978-1-84628-168-6 |edition=1 |url=https://www.springer.com/gp/book/9781852338961}}</ref>
: <math>\begin{align}
<math display="block">\begin{align}
   \operatorname P(Z=k) &= \sum_{i=0}^k\left[\binom{n}i p^i (1-p)^{n-i}\right]\left[\binom{m}{k-i} p^{k-i} (1-p)^{m-k+i}\right]\\
   \operatorname P(Z=k) &= \sum_{i=0}^k\left[\binom{n}i p^i (1-p)^{n-i}\right]\left[\binom{m}{k-i} p^{k-i} (1-p)^{m-k+i}\right]\\
                       &= \binom{n+m}k p^k (1-p)^{n+m-k}
                       &= \binom{n+m}k p^k (1-p)^{n+m-k}
\end{align}</math>
\end{align}</math>


A binomially distributed random variable {{math|''X'' ~ B(''n'', ''p'')}} can be considered as the sum of {{math|''n''}} Bernoulli distributed random variables. So the sum of two binomially distributed random variables {{math|''X'' ~ B(''n'', ''p'')}} and {{math|''Y'' ~ B(''m'', ''p'')}} is equivalent to the sum of {{math|''n'' + ''m''}} Bernoulli distributed random variables, which means {{math|1=''Z'' = ''X'' + ''Y'' ~ B(''n'' + ''m'', ''p'')}}. This can also be proven directly using the addition rule.
A binomially distributed random variable {{math|''X'' ~ B(''n'', ''p'')}} can be considered as the sum of {{mvar|n}} Bernoulli distributed random variables. So the sum of two binomially distributed random variables {{math|''X'' ~ B(''n'', ''p'')}} and {{math|''Y'' ~ B(''m'', ''p'')}} is equivalent to the sum of {{math|''n'' + ''m''}} Bernoulli distributed random variables, which means {{math|1=''Z'' = ''X'' + ''Y'' ~ B(''n'' + ''m'', ''p'')}}. This can also be proven directly using the addition rule.
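As a quick numerical illustration (an editorial sketch with an assumed helper <code>pmf</code>, not taken from the cited textbook), the following Python snippet checks that the convolution displayed above reproduces the <math>\mathrm{B}(n+m, p)</math> probability mass function:
<syntaxhighlight lang="python">
from math import comb

def pmf(k, n, p):
    """Binomial probability mass function Pr(X = k) for X ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, m, p = 5, 7, 0.3        # illustrative parameters
for k in range(n + m + 1):
    conv = sum(pmf(i, n, p) * pmf(k - i, m, p)
               for i in range(max(0, k - m), min(n, k) + 1))
    assert abs(conv - pmf(k, n + m, p)) < 1e-12   # matches B(n + m, p)
</syntaxhighlight>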


However, if {{math|''X''}} and {{math|''Y''}} do not have the same probability {{math|''p''}}, then the variance of the sum will be [[Binomial sum variance inequality|smaller than the variance of a binomial variable]] distributed as {{math|B(''n'' + ''m'', {{overline|''p''}})}}.
However, if {{mvar|X}} and {{mvar|Y}} do not have the same probability {{mvar|p}}, then the variance of the sum will be [[Binomial sum variance inequality|smaller than the variance of a binomial variable]] distributed as {{math|B(''n'' + ''m'', {{overline|''p''}})}}.


=== Poisson binomial distribution ===
=== Poisson binomial distribution ===
The binomial distribution is a special case of the [[Poisson binomial distribution]], which is the distribution of a sum of {{math|''n''}} independent non-identical [[Bernoulli trials]] {{math|B(''p''<sub>''i''</sub>)}}.<ref>
The binomial distribution is a special case of the [[Poisson binomial distribution]], which is the distribution of a sum of {{mvar|n}} independent non-identical [[Bernoulli trials]] {{math|B(''p''{{sub|''i''}})}}.<ref>
{{cite journal
{{cite journal
  | volume = 3
  | volume = 3
Line 351: Line 352:
=== Ratio of two binomial distributions ===
=== Ratio of two binomial distributions ===


This result was first derived by Katz and coauthors in 1978.<ref name=Katz1978>{{cite journal |last1=Katz |first1=D. |display-authors=1 |first2=J. |last2=Baptista |first3=S. P. |last3=Azen |first4=M. C. |last4=Pike |year=1978 |title=Obtaining confidence intervals for the risk ratio in cohort studies |journal=Biometrics |volume=34 |issue=3 |pages=469–474 |doi=10.2307/2530610 |jstor=2530610 }}</ref>
This result was first derived by Katz and coauthors in 1978.<ref name="Katz1978">{{cite journal |last1=Katz |first1=D. |last2=Baptista |first2=J. |last3=Azen |first3=S. P. |last4=Pike |first4=M. C. |display-authors=1 |year=1978 |title=Obtaining confidence intervals for the risk ratio in cohort studies |journal=Biometrics |volume=34 |issue=3 |pages=469–474 |doi=10.2307/2530610 |jstor=2530610}}</ref>


Let {{nowrap|''X'' ~ B(''n'', ''p''<sub>1</sub>)}} and {{nowrap|''Y'' ~ B(''m'', ''p''<sub>2</sub>)}} be independent. Let {{nowrap|1=''T'' = (''X''/''n'') / (''Y''/''m'')}}.
Let {{math|''X'' ~ B(''n'', ''p''{{sub|1}})}} and {{math|''Y'' ~ B(''m'', ''p''{{sub|2}})}} be independent. Let {{math|1=''T'' = (''X''/''n'') / (''Y''/''m'')}}.


Then log(''T'') is approximately normally distributed with mean log(''p''<sub>1</sub>/''p''<sub>2</sub>) and variance {{nowrap|((1/''p''<sub>1</sub>) − 1)/''n'' + ((1/''p''<sub>2</sub>) − 1)/''m''}}.
Then {{math|log(''T'')}} is approximately normally distributed with mean {{math|log(''p''{{sub|1}}/''p''{{sub|2}})}} and variance {{math|((1/''p''{{sub|1}}) − 1)/''n'' + ((1/''p''{{sub|2}}) − 1)/''m''}}.
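A small Monte Carlo sketch (with illustrative parameter values, not drawn from Katz et al.) can be used to see this approximation in action, comparing the sample mean and variance of <math>\log T</math> with the stated values:
<syntaxhighlight lang="python">
import random
from math import log

n, m, p1, p2, trials = 200, 300, 0.4, 0.25, 20_000   # illustrative values
samples = []
for _ in range(trials):
    x = sum(random.random() < p1 for _ in range(n))  # X ~ B(n, p1)
    y = sum(random.random() < p2 for _ in range(m))  # Y ~ B(m, p2)
    if x > 0 and y > 0:                              # avoid log(0); rare for these parameters
        samples.append(log((x / n) / (y / m)))

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, log(p1 / p2))                            # empirical vs stated mean
print(var, (1 / p1 - 1) / n + (1 / p2 - 1) / m)      # empirical vs stated variance
</syntaxhighlight>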


=== Conditional binomials ===
=== Conditional binomials ===
If ''X''&nbsp;~&nbsp;B(''n'',&nbsp;''p'') and ''Y''&nbsp;|&nbsp;''X''&nbsp;~&nbsp;B(''X'',&nbsp;''q'') (the conditional distribution of ''Y'', given&nbsp;''X''), then ''Y'' is a simple binomial random variable with distribution ''Y''&nbsp;~&nbsp;B(''n'',&nbsp;''pq'').
If {{math|''X'' ~ B(''n'', ''p'')}} and {{math|''Y'' {{!}} ''X'' ~ B(''X'', ''q'')}} (the conditional distribution of {{mvar|Y}}, given&nbsp;{{mvar|X}}), then {{mvar|Y}} is a simple binomial random variable with distribution {{math|''Y'' ~ B(''n'', ''pq'')}}.


For example, imagine throwing ''n'' balls into a basket ''U<sub>X</sub>'' and taking the balls that hit and throwing them into another basket ''U<sub>Y</sub>''. If ''p'' is the probability of hitting ''U<sub>X</sub>'', then ''X''&nbsp;~&nbsp;B(''n'',&nbsp;''p'') is the number of balls that hit ''U<sub>X</sub>''. If ''q'' is the probability of hitting ''U<sub>Y</sub>'', then the number of balls that hit ''U<sub>Y</sub>'' is ''Y''&nbsp;~&nbsp;B(''X'',&nbsp;''q'') and therefore ''Y''&nbsp;~&nbsp;B(''n'',&nbsp;''pq'').
For example, imagine throwing {{mvar|n}} balls into a basket {{math|''U''{{sub|''X''}}}} and taking the balls that hit and throwing them into another basket {{math|''U''{{sub|''Y''}}}}. If {{mvar|p}} is the probability of hitting {{math|''U''{{sub|''X''}}}}, then {{math|''X'' ~ B(''n'', ''p'')}} is the number of balls that hit {{math|''U''{{sub|''X''}}}}. If {{mvar|q}} is the probability of hitting {{math|''U''{{sub|''Y''}}}}, then the number of balls that hit {{math|''U''{{sub|''Y''}}}} is {{math|''Y'' ~ B(''X'', ''q'')}} and therefore {{math|''Y'' ~ B(''n'', ''pq'')}}.
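A brief simulation sketch (illustrative parameters only; the collapsed proof below gives the exact argument) compares the simulated distribution of {{mvar|Y}} with the claimed marginal <math>\mathrm{B}(n, pq)</math>:
<syntaxhighlight lang="python">
import random
from math import comb

n, p, q, trials = 10, 0.6, 0.5, 50_000               # illustrative values
counts = [0] * (n + 1)
for _ in range(trials):
    x = sum(random.random() < p for _ in range(n))   # balls that hit the basket U_X
    y = sum(random.random() < q for _ in range(x))   # of those, balls that hit U_Y
    counts[y] += 1

for k in range(n + 1):
    exact = comb(n, k) * (p * q)**k * (1 - p * q)**(n - k)
    print(k, counts[k] / trials, round(exact, 4))    # empirical vs B(n, pq) pmf
</syntaxhighlight>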


{{hidden begin|style=width:60%|ta1=center|border=1px #aaa solid|title=[Proof]}}
{{hidden begin|style=width:60%|ta1=center|border=1px #aaa solid|title=[Proof]}}
Since <math> X \sim B(n, p) </math> and <math> Y \sim B(X, q) </math>, by the [[law of total probability]],
Since <math> X \sim \mathrm{B}(n, p) </math> and <math> Y \sim \mathrm{B}(X, q) </math>, by the [[law of total probability]],
: <math>\begin{align}
<math display="block">\begin{align}
   \Pr[Y = m] &= \sum_{k = m}^{n} \Pr[Y = m \mid X = k] \Pr[X = k] \\[2pt]
   \Pr[Y = m] &= \sum_{k = m}^{n} \Pr[Y = m \mid X = k] \Pr[X = k] \\[2pt]
   &= \sum_{k=m}^n \binom{n}{k} \binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}
   &= \sum_{k=m}^n \binom{n}{k} \binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}
  \end{align}</math>
  \end{align}</math>
Since <math>\tbinom{n}{k} \tbinom{k}{m} = \tbinom{n}{m} \tbinom{n-m}{k-m},</math> the equation above can be expressed as
Since <math>\tbinom{n}{k} \tbinom{k}{m} = \tbinom{n}{m} \tbinom{n-m}{k-m},</math> the equation above can be expressed as
: <math> \Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^k q^m (1-p)^{n-k} (1-q)^{k-m} </math>
<math display="block"> \Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^k q^m (1-p)^{n-k} (1-q)^{k-m} </math>
Factoring <math> p^k = p^m p^{k-m} </math> and pulling all the terms that don't depend on <math> k </math> out of the sum now yields
Factoring <math> p^k = p^m p^{k-m} </math> and pulling all the terms that don't depend on <math> k </math> out of the sum now yields
: <math>\begin{align}
<math display="block">\begin{align}
   \Pr[Y = m] &= \binom{n}{m} p^m q^m \left( \sum_{k=m}^n \binom{n-m}{k-m} p^{k-m} (1-p)^{n-k} (1-q)^{k-m} \right) \\[2pt]
   \Pr[Y = m] &= \binom{n}{m} p^m q^m \left( \sum_{k=m}^n \binom{n-m}{k-m} p^{k-m} (1-p)^{n-k} (1-q)^{k-m} \right) \\[2pt]
   &= \binom{n}{m} (pq)^m \left( \sum_{k=m}^n \binom{n-m}{k-m} \left(p(1-q)\right)^{k-m} (1-p)^{n-k}  \right)
   &= \binom{n}{m} (pq)^m \left( \sum_{k=m}^n \binom{n-m}{k-m} \left(p(1-q)\right)^{k-m} (1-p)^{n-k}  \right)
  \end{align}</math>
  \end{align}</math>
After substituting <math> i = k - m </math> in the expression above, we get
After substituting <math> i = k - m </math> in the expression above, we get
: <math> \Pr[Y = m] = \binom{n}{m} (pq)^m \left( \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^i (1-p)^{n-m - i} \right) </math>
<math display="block"> \Pr[Y = m] = \binom{n}{m} (pq)^m \left( \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^i (1-p)^{n-m - i} \right) </math>
Notice that the sum (in the parentheses) above equals <math> (p - pq + 1 - p)^{n-m} </math> by the [[binomial theorem]]. Substituting this in finally yields
Notice that the sum (in the parentheses) above equals <math> (p - pq + 1 - p)^{n-m} </math> by the [[binomial theorem]]. Substituting this in finally yields
: <math>\begin{align}
<math display="block">\begin{align}
   \Pr[Y=m] &=  \binom{n}{m} (pq)^m (p - pq + 1 - p)^{n-m}\\[4pt]
   \Pr[Y=m] &=  \binom{n}{m} (pq)^m (p - pq + 1 - p)^{n-m}\\[4pt]
   &= \binom{n}{m} (pq)^m (1-pq)^{n-m}
   &= \binom{n}{m} (pq)^m (1-pq)^{n-m}
  \end{align}</math>
  \end{align}</math>
and thus <math> Y \sim B(n, pq) </math> as desired.
and thus <math> Y \sim \mathrm{B}(n, pq) </math> as desired.
{{hidden end}}
{{hidden end}}


=== Bernoulli distribution ===
=== Bernoulli distribution ===
The [[Bernoulli distribution]] is a special case of the binomial distribution, where {{math|1=''n'' = 1}}. Symbolically, {{math|''X'' ~ B(1, ''p'')}} has the same meaning as {{math|''X'' ~ Bernoulli(''p'')}}. Conversely, any binomial distribution, {{math|B(''n'', ''p'')}}, is the distribution of the sum of {{math|''n''}} independent [[Bernoulli trials]], {{math|Bernoulli(''p'')}}, each with the same probability {{math|''p''}}.<ref>{{cite web|last1=Taboga|first1=Marco|title=Lectures on Probability Theory and Mathematical Statistics|url=https://www.statlect.com/probability-distributions/binomial-distribution#hid3|website=statlect.com|access-date=18 December 2017}}</ref>
The [[Bernoulli distribution]] is a special case of the binomial distribution, where {{math|1=''n'' = 1}}. Symbolically, {{math|''X'' ~ B(1, ''p'')}} has the same meaning as {{math|''X'' ~ Bernoulli(''p'')}}. Conversely, any binomial distribution, {{math|B(''n'', ''p'')}}, is the distribution of the sum of {{mvar|n}} independent [[Bernoulli trials]], {{math|Bernoulli(''p'')}}, each with the same probability {{mvar|p}}.<ref>{{cite web|last1=Taboga|first1=Marco|title=Lectures on Probability Theory and Mathematical Statistics|url=https://www.statlect.com/probability-distributions/binomial-distribution#hid3|website=statlect.com|access-date=18 December 2017}}</ref>


=== Normal approximation ===
=== Normal approximation ===
Line 393: Line 394:
[[File:Binomial Distribution.svg|right|250px|thumb|Binomial [[probability mass function]] and normal [[probability density function]] approximation for {{math|1=''n'' = 6}} and {{math|1=''p'' = 0.5}}]]
[[File:Binomial Distribution.svg|right|250px|thumb|Binomial [[probability mass function]] and normal [[probability density function]] approximation for {{math|1=''n'' = 6}} and {{math|1=''p'' = 0.5}}]]


If {{math|''n''}} is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to {{math|B(''n'', ''p'')}} is given by the [[normal distribution]]
If {{mvar|n}} is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to {{math|B(''n'', ''p'')}} is given by the [[normal distribution]]
: <math> \mathcal{N}(np,\,np(1-p)),</math>
<math display="block"> \mathcal{N}(np,\,np(1-p)),</math>
and this basic approximation can be improved in a simple way by using a suitable [[continuity correction]].
and this basic approximation can be improved in a simple way by using a suitable [[continuity correction]].
The basic approximation generally improves as {{math|''n''}} increases (at least 20) and is better when {{math|''p''}} is not near to 0 or 1.<ref name="bhh">{{cite book|title=Statistics for experimenters|url=https://archive.org/details/statisticsforexp00geor|url-access=registration|author=Box, Hunter and Hunter|publisher=Wiley|year=1978|page=[https://archive.org/details/statisticsforexp00geor/page/130 130]|isbn=9780471093152}}</ref> Various [[Rule of thumb|rules of thumb]] may be used to decide whether {{math|''n''}} is large enough, and {{math|''p''}} is far enough from the extremes of zero or one:
The basic approximation generally improves as {{mvar|n}} increases (at least 20) and is better when {{mvar|p}} is not near to 0 or 1.<ref name="bhh">{{cite book |author=Box |first=George E.P. |author-link=George E. P. Box |url=https://archive.org/details/statisticsforexp00geor |title=Statistics for Experimenters: Design, Innovation, and Discovery |last2=Hunter |first2=William Gordon |author-link2=William Hunter (statistician) |last3=Hunter |first3=J. Stuart |publisher=Wiley |year=1978 |isbn=9780471093152 |page=[https://archive.org/details/statisticsforexp00geor/page/130 130] |url-access=registration}}</ref> Various [[Rule of thumb|rules of thumb]] may be used to decide whether {{mvar|n}} is large enough, and {{mvar|p}} is far enough from the extremes of zero or one:
* One rule<ref name="bhh"/> is that for {{math|''n'' > 5}} the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if
* One rule<ref name="bhh"/> is that for {{math|''n'' > 5}} the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if <math display="block">\frac{|1-2p|}{\sqrt{np(1-p)}}=\frac1{\sqrt{n}}\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<0.3.</math>
*: <math>\frac{|1-2p|}{\sqrt{np(1-p)}}=\frac1{\sqrt{n}}\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<0.3.</math>
This can be made precise using the [[Berry–Esseen theorem]].
This can be made precise using the [[Berry–Esseen theorem]].
* A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if
* A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if <math display="block"> \mu \pm 3\sigma = n p \pm 3 \sqrt{np(1-p)}\in(0,n).</math>
*: <math>\mu\pm3\sigma=np\pm3\sqrt{np(1-p)}\in(0,n).</math>
: This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above. <math display="block">n>9 \left(\frac{1-p}{p} \right)\quad\text{and}\quad n>9\left(\frac{p}{1-p}\right).</math>
: This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above.
:: <math>n>9 \left(\frac{1-p}{p} \right)\quad\text{and}\quad n>9\left(\frac{p}{1-p}\right).</math>
{{hidden begin|style=width:66%|ta1=center|border=1px #aaa solid|title=[Proof]}}
{{hidden begin|style=width:66%|ta1=center|border=1px #aaa solid|title=[Proof]}}
The rule <math> np\pm3\sqrt{np(1-p)}\in(0,n)</math> is equivalent to requiring that
The rule <math> np\pm3\sqrt{np(1-p)}\in(0,n)</math> is equivalent to requiring that
: <math>np-3\sqrt{np(1-p)}>0\quad\text{and}\quad np+3\sqrt{np(1-p)}<n.</math>
<math display="block">np-3\sqrt{np(1-p)}>0\quad\text{and}\quad np+3\sqrt{np(1-p)}<n.</math>
Moving terms around yields:
Moving terms around yields:
: <math>np>3\sqrt{np(1-p)}\quad\text{and}\quad n(1-p)>3\sqrt{np(1-p)}.</math>
<math display="block">np>3\sqrt{np(1-p)}\quad\text{and}\quad n(1-p)>3\sqrt{np(1-p)}.</math>
Since <math>0<p<1</math>, we can square each inequality and divide by the respective factors <math>np^2</math> and <math>n(1-p)^2</math> to obtain the desired conditions:
Since <math>0<p<1</math>, we can square each inequality and divide by the respective factors <math>np^2</math> and <math>n(1-p)^2</math> to obtain the desired conditions:
: <math>n>9 \left(\frac{1-p}p\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right).</math>
<math display="block">n>9 \left(\frac{1-p}p\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right).</math>
Notice that these conditions automatically imply that <math>n>9</math>. On the other hand, taking square roots again and dividing by 3 gives
Notice that these conditions automatically imply that <math>n>9</math>. On the other hand, taking square roots again and dividing by 3 gives
: <math>\frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}>0 \quad \text{and} \quad \frac{\sqrt{n}}3 > \sqrt{\frac{p}{1-p}}>0.</math>
<math display="block">\frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}>0 \quad \text{and} \quad \frac{\sqrt{n}}3 > \sqrt{\frac{p}{1-p}}>0.</math>
Subtracting the second set of inequalities from the first one yields:
Subtracting the second set of inequalities from the first one yields:
: <math>\frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}>-\frac{\sqrt{n}}3;</math>
<math display="block">\frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}>-\frac{\sqrt{n}}3;</math>
and so, the desired first rule is satisfied,
and so, the desired first rule is satisfied,
: <math>\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<\frac{\sqrt{n}}3.</math>
<math display="block">\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<\frac{\sqrt{n}}3.</math>
{{hidden end}}
{{hidden end}}
* Another commonly used rule is that both values {{math|''np''}} and {{math|''n''(1 − ''p'')}} must be greater than<ref>{{Cite book |last=Chen |first=Zac |title=H2 Mathematics Handbook |publisher=Educational Publishing House |year=2011 |isbn=9789814288484 |edition=1 |location=Singapore |pages=350}}</ref><ref>{{Cite web |date=2023-05-29 |title=6.4: Normal Approximation to the Binomial Distribution - Statistics LibreTexts |url=https://stats.libretexts.org/Courses/Las_Positas_College/Math_40:_Statistics_and_Probability/06:_Continuous_Random_Variables_and_the_Normal_Distribution/6.04:_Normal_Approximation_to_the_Binomial_Distribution |access-date=2023-10-07 |archive-date=2023-05-29 |archive-url=https://web.archive.org/web/20230529211919/https://stats.libretexts.org/Courses/Las_Positas_College/Math_40:_Statistics_and_Probability/06:_Continuous_Random_Variables_and_the_Normal_Distribution/6.04:_Normal_Approximation_to_the_Binomial_Distribution |url-status=bot: unknown }}</ref> or equal to&nbsp;5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses&nbsp;9 instead of&nbsp;5, the rule implies the results stated in the previous paragraphs.
* Another commonly used rule is that both values {{math|''np''}} and {{math|''n''(1 − ''p'')}} must be greater than<ref>{{Cite book |last=Chen |first=Zac |title=H2 Mathematics Handbook |publisher=Educational Publishing House |year=2011 |isbn=9789814288484 |edition=1 |location=Singapore |pages=350}}</ref><ref>{{Cite web |date=2023-05-29 |title=6.4: Normal Approximation to the Binomial Distribution - Statistics LibreTexts |url=https://stats.libretexts.org/Courses/Las_Positas_College/Math_40:_Statistics_and_Probability/06:_Continuous_Random_Variables_and_the_Normal_Distribution/6.04:_Normal_Approximation_to_the_Binomial_Distribution |access-date=2023-10-07 |archive-date=2023-05-29 |archive-url=https://web.archive.org/web/20230529211919/https://stats.libretexts.org/Courses/Las_Positas_College/Math_40:_Statistics_and_Probability/06:_Continuous_Random_Variables_and_the_Normal_Distribution/6.04:_Normal_Approximation_to_the_Binomial_Distribution |url-status=bot: unknown }}</ref> or equal to&nbsp;5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses&nbsp;9 instead of&nbsp;5, the rule implies the results stated in the previous paragraphs.
{{hidden begin|style=width:66%|ta1=center|border=1px #aaa solid|title=[Proof]}}
{{hidden begin|style=width:66%|ta1=center|border=1px #aaa solid|title=[Proof]}}
Assume that both values <math>np</math> and <math>n(1-p)</math> are greater than&nbsp;9. Since <math>0< p<1</math>, we easily have that  
Assume that both values <math>np</math> and <math>n(1-p)</math> are greater than&nbsp;9. Since <math>0< p<1</math>, we easily have that  
: <math>np\geq9>9(1-p)\quad\text{and}\quad n(1-p)\geq9>9p.</math>
<math display="block">np\geq9>9(1-p)\quad\text{and}\quad n(1-p)\geq9>9p.</math>
We only have to divide now by the respective factors <math>p</math> and <math>1-p</math>, to deduce the alternative form of the 3-standard-deviation rule:
We only have to divide now by the respective factors <math>p</math> and <math>1-p</math>, to deduce the alternative form of the 3-standard-deviation rule:
: <math>n>9 \left(\frac{1-p}p\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right).</math>
<math display="block">n>9 \left(\frac{1-p}p\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right).</math>
{{hidden end}}
{{hidden end}}


The following is an example of applying a [[continuity correction]]. Suppose one wishes to calculate {{math|Pr(''X'' ≤ 8)}} for a binomial random variable {{math|''X''}}. If {{math|''Y''}} has a distribution given by the normal approximation, then {{math|Pr(''X'' ≤ 8)}} is approximated by {{math|Pr(''Y'' ≤ 8.5)}}. The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.
The following is an example of applying a [[continuity correction]]. Suppose one wishes to calculate {{math|Pr(''X'' ≤ 8)}} for a binomial random variable {{mvar|X}}. If {{mvar|Y}} has a distribution given by the normal approximation, then {{math|Pr(''X'' ≤ 8)}} is approximated by {{math|Pr(''Y'' ≤ 8.5)}}. The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.
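A short Python check (with illustrative parameters {{math|1=''n'' = 20}} and {{math|1=''p'' = 0.4}}, chosen only for this sketch) makes the effect of the correction visible by comparing the exact value of {{math|Pr(''X'' ≤ 8)}} with the normal approximation, with and without the added 0.5:
<syntaxhighlight lang="python">
from math import comb, sqrt
from statistics import NormalDist

n, p, k = 20, 0.4, 8                                  # illustrative parameters
exact = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

approx = NormalDist(mu=n * p, sigma=sqrt(n * p * (1 - p)))
print(exact)                    # exact binomial probability, about 0.596
print(approx.cdf(k + 0.5))      # with the continuity correction: noticeably closer
print(approx.cdf(k))            # without the correction: exactly 0.5 here, much less accurate
</syntaxhighlight>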


This approximation, known as [[de Moivre–Laplace theorem]], is a huge time-saver when undertaking calculations by hand (exact calculations with large {{math|''n''}} are very onerous); historically, it was the first use of the normal distribution, introduced in [[Abraham de Moivre]]'s book ''[[The Doctrine of Chances]]'' in 1738. Nowadays, it can be seen as a consequence of the [[central limit theorem]] since {{math|B(''n'', ''p'')}} is a sum of {{math|''n''}} independent, identically distributed [[Bernoulli distribution|Bernoulli variables]] with parameter&nbsp;{{math|''p''}}. This fact is the basis of a [[hypothesis test]], a "proportion z-test", for the value of {{math|''p''}} using {{math|''x''/''n''}}, the sample proportion and estimator of {{math|''p''}}, in a [[common test statistics|common test statistic]].<ref>[[NIST]]/[[SEMATECH]], [http://www.itl.nist.gov/div898/handbook/prc/section2/prc24.htm "7.2.4. Does the proportion of defectives meet requirements?"] ''e-Handbook of Statistical Methods.''</ref>
This approximation, known as [[de Moivre–Laplace theorem]], is a huge time-saver when undertaking calculations by hand (exact calculations with large {{mvar|n}} are very onerous); historically, it was the first use of the normal distribution, introduced in [[Abraham de Moivre]]'s book ''[[The Doctrine of Chances]]'' in 1738. Nowadays, it can be seen as a consequence of the [[central limit theorem]] since {{math|B(''n'', ''p'')}} is a sum of {{mvar|n}} independent, identically distributed [[Bernoulli distribution|Bernoulli variables]] with parameter&nbsp;{{mvar|p}}. This fact is the basis of a [[hypothesis test]], a "proportion z-test", for the value of {{mvar|p}} using {{math|''x'' / ''n''}}, the sample proportion and estimator of {{mvar|p}}, in a [[common test statistics|common test statistic]].<ref>[[NIST]]/[[SEMATECH]], [http://www.itl.nist.gov/div898/handbook/prc/section2/prc24.htm "7.2.4. Does the proportion of defectives meet requirements?"] ''e-Handbook of Statistical Methods.''</ref>


For example, suppose one randomly samples {{math|''n''}} people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of ''n'' people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion ''p'' of agreement in the population and with standard deviation
For example, suppose one randomly samples {{mvar|n}} people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of {{mvar|n}} people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion {{mvar|p}} of agreement in the population and with standard deviation
: <math>\sigma = \sqrt{\frac{p(1-p)}{n}}</math>
<math display="block">\sigma = \sqrt{\frac{p(1-p)}{n}}</math>


=== Poisson approximation ===
=== Poisson approximation ===


The binomial distribution converges towards the [[Poisson distribution]] as the number of trials goes to infinity while the product {{math|''np''}} converges to a finite limit. Therefore, the Poisson distribution with parameter {{math|1=''λ'' = ''np''}} can be used as an approximation to {{math|B(''n'', ''p'')}} of the binomial distribution if {{math|''n''}} is sufficiently large and {{math|''p''}} is sufficiently small.  According to rules of thumb, this approximation is good if {{math|''n'' ≥ 20}} and {{math|''p'' ≤ 0.05}}<ref>{{cite news |date=2023-03-28 |title=12.4 – Approximating the Binomial Distribution {{!}} STAT 414 |newspaper=Pennstate: Statistics Online Courses |url=https://online.stat.psu.edu/stat414/lesson/12/12.4 |access-date=2023-10-08 |archive-date=2023-03-28 |archive-url=https://web.archive.org/web/20230328081322/https://online.stat.psu.edu/stat414/lesson/12/12.4 |url-status=bot: unknown }}</ref> such that {{math|''np'' ≤ 1}}, or if {{math|''n'' > 50}} and {{math|''p'' < 0.1}} such that {{math|''np'' < 5}},<ref>{{Cite book |last=Chen |first=Zac |title=H2 mathematics handbook |publisher=Educational publishing house |year=2011 |isbn=9789814288484 |edition=1 |location=Singapore |pages=348}}</ref> or if {{math|''n'' ≥ 100}} and {{math|''np'' ≤ 10}}.<ref name="nist">[[NIST]]/[[SEMATECH]], [http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc331.htm "6.3.3.1. Counts Control Charts"], ''e-Handbook of Statistical Methods.''</ref><ref>{{Cite web |date=2023-03-13 |title=The Connection Between the Poisson and Binomial Distributions |url=https://mathcenter.oxford.emory.edu/site/math117/connectingPoissonAndBinomial/ |access-date=2023-10-08 |archive-date=2023-03-13 |archive-url=https://web.archive.org/web/20230313085931/https://mathcenter.oxford.emory.edu/site/math117/connectingPoissonAndBinomial/ |url-status=bot: unknown }}</ref>
The binomial distribution converges towards the [[Poisson distribution]] as the number of trials goes to infinity while the product {{math|''np''}} converges to a finite limit. Therefore, the Poisson distribution with parameter {{math|1=''λ'' = ''np''}} can be used as an approximation to {{math|B(''n'', ''p'')}} of the binomial distribution if {{mvar|n}} is sufficiently large and {{mvar|p}} is sufficiently small.  According to rules of thumb, this approximation is good if {{math|''n'' ≥ 20}} and {{math|''p'' ≤ 0.05}}<ref>{{cite news |date=2023-03-28 |title=12.4 – Approximating the Binomial Distribution {{!}} STAT 414 |newspaper=Pennstate: Statistics Online Courses |url=https://online.stat.psu.edu/stat414/lesson/12/12.4 |access-date=2023-10-08 |archive-date=2023-03-28 |archive-url=https://web.archive.org/web/20230328081322/https://online.stat.psu.edu/stat414/lesson/12/12.4 |url-status=bot: unknown }}</ref> such that {{math|''np'' ≤ 1}}, or if {{math|''n'' > 50}} and {{math|''p'' < 0.1}} such that {{math|''np'' < 5}},<ref>{{Cite book |last=Chen |first=Zac |title=H2 mathematics handbook |publisher=Educational publishing house |year=2011 |isbn=9789814288484 |edition=1 |location=Singapore |pages=348}}</ref> or if {{math|''n'' ≥ 100}} and {{math|''np'' ≤ 10}}.<ref name="nist">[[NIST]]/[[SEMATECH]], [http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc331.htm "6.3.3.1. Counts Control Charts"], ''e-Handbook of Statistical Methods.''</ref><ref>{{Cite web |date=2023-03-13 |title=The Connection Between the Poisson and Binomial Distributions |url=https://mathcenter.oxford.emory.edu/site/math117/connectingPoissonAndBinomial/ |access-date=2023-10-08 |archive-date=2023-03-13 |archive-url=https://web.archive.org/web/20230313085931/https://mathcenter.oxford.emory.edu/site/math117/connectingPoissonAndBinomial/ |url-status=bot: unknown }}</ref>
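The quality of the approximation can be checked in a few lines of Python (the values {{math|1=''n'' = 100}} and {{math|1=''p'' = 0.02}} are illustrative and satisfy the rules of thumb above):
<syntaxhighlight lang="python">
from math import comb, exp, factorial

n, p = 100, 0.02            # illustrative values; np = 2
lam = n * p
for k in range(6):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)     # B(n, p) pmf
    poisson = exp(-lam) * lam**k / factorial(k)      # Poisson(np) pmf
    print(k, round(binom, 5), round(poisson, 5))
</syntaxhighlight>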


Concerning the accuracy of Poisson approximation, see Novak,<ref>Novak S.Y. (2011) Extreme value methods with applications to finance. London: CRC/ Chapman & Hall/Taylor & Francis. {{ISBN|9781-43983-5746}}.</ref> ch. 4, and references therein.
Concerning the accuracy of Poisson approximation, see Novak,<ref>Novak S.Y. (2011) Extreme value methods with applications to finance. London: CRC/ Chapman & Hall/Taylor & Francis. {{ISBN|9781-43983-5746}}.</ref> ch. 4, and references therein.
Line 441: Line 439:
=== Limiting distributions ===
=== Limiting distributions ===


* ''[[Poisson limit theorem]]'': As {{math|''n''}} approaches {{math|∞}} and {{math|''p''}} approaches 0 with the product {{math|''np''}} held fixed, the {{math|Binomial(''n'', ''p'')}} distribution approaches the [[Poisson distribution]] with [[expected value]] {{math|1=''λ'' = ''np''}}.<ref name="nist"/>
* ''[[Poisson limit theorem]]'': As {{mvar|n}} approaches {{math|∞}} and {{mvar|p}} approaches 0 with the product {{math|''np''}} held fixed, the {{math|Binomial(''n'', ''p'')}} distribution approaches the [[Poisson distribution]] with [[expected value]] {{math|1=''λ'' = ''np''}}.<ref name="nist"/>
* ''[[de Moivre–Laplace theorem]]'': As {{math|''n''}} approaches {{math|∞}} while {{math|''p''}} remains fixed, the distribution of
* ''[[de Moivre–Laplace theorem]]'': As {{mvar|n}} approaches {{math|∞}} while {{mvar|p}} remains fixed, the distribution of <math display="block">\frac{X-np}{\sqrt{np(1-p)}}</math> approaches the [[normal distribution]] with expected value&nbsp;0 and [[variance]]&nbsp;1. This result is sometimes loosely stated by saying that the distribution of {{mvar|X}} is [[Asymptotic normality|asymptotically normal]] with expected value&nbsp;0 and [[variance]]&nbsp;1. This result is a specific case of the [[central limit theorem]].
*: <math>\frac{X-np}{\sqrt{np(1-p)}}</math>
: approaches the [[normal distribution]] with expected value&nbsp;0 and [[variance]]&nbsp;1. This result is sometimes loosely stated by saying that the distribution of {{math|''X''}} is [[Asymptotic normality|asymptotically normal]] with expected value&nbsp;0 and [[variance]]&nbsp;1. This result is a specific case of the [[central limit theorem]].


=== Beta distribution ===
=== Beta distribution ===


The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the [[Probability mass function|PMF]] of {{mvar|k}} successes given {{mvar|n}} independent events each with a probability {{mvar|p}} of success.  
The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the [[Probability mass function|PMF]] of {{mvar|k}} successes given {{mvar|n}} independent events each with a probability {{mvar|p}} of success.  
Mathematically, when {{math|1=''α'' = ''k'' + 1}} and {{math|1=''β'' = ''n'' &minus; ''k'' + 1}}, the beta distribution and the binomial distribution are related by{{clarification needed|date=March 2023|
Mathematically, when {{math|1=''α'' = ''k'' + 1}} and {{math|1=''β'' = ''n'' − ''k'' + 1}}, the beta distribution and the binomial distribution are related by{{clarification needed|date=March 2023|
reason=Is the left hand side referring to a probability density, and the right hand side to a probability mass function? Clearly a beta distributed random variable can not be a scalar multiple of a binomial random variable given that the former is continuous and the latter discrete. In any case, it would seem to be more correct to say that this relationship means that the PDF of one is related to the PMF of the other, rather than appearing to say that the _distributions_ (often interchangeable with their CDFs) are directly related to one another.
reason=Is the left hand side referring to a probability density, and the right hand side to a probability mass function? Clearly a beta distributed random variable can not be a scalar multiple of a binomial random variable given that the former is continuous and the latter discrete. In any case, it would seem to be more correct to say that this relationship means that the PDF of one is related to the PMF of the other, rather than appearing to say that the _distributions_ (often interchangeable with their CDFs) are directly related to one another.
}} a factor of {{math|''n'' + 1}}:
}} a factor of {{math|''n'' + 1}}:
: <math>\operatorname{Beta}(p;\alpha;\beta) = (n+1)B(k;n;p)</math>
<math display="block">\operatorname{Beta}(p;\alpha;\beta) = (n+1)\mathrm{B}(k;n;p)</math>


[[Beta distribution]]s also provide a family of [[prior distribution|prior probability distribution]]s for binomial distributions in [[Bayesian inference]]:<ref name=MacKay>{{cite book| last=MacKay| first=David| title = Information Theory, Inference and Learning Algorithms|year=2003| publisher=Cambridge University Press; First Edition |isbn=978-0521642989}}</ref>  
[[Beta distribution]]s also provide a family of [[prior distribution|prior probability distribution]]s for binomial distributions in [[Bayesian inference]]:<ref name="MacKay">{{cite book |last=MacKay |first=David J. C. |author-link=David J. C. MacKay |title=Information Theory, Inference and Learning Algorithms |publisher=[[Cambridge University Press]] |year=2003 |isbn=978-0521642989 |edition=1st}}</ref>  
: <math>P(p;\alpha,\beta) = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\operatorname{Beta}(\alpha,\beta)}.</math>
<math display="block">P(p;\alpha,\beta) = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\operatorname{Beta}(\alpha,\beta)}.</math>
Given a uniform prior, the posterior distribution for the probability of success {{mvar|p}} given {{mvar|n}} independent events with {{mvar|k}} observed successes is a beta distribution.<ref>{{Cite web|url=https://www.statlect.com/probability-distributions/beta-distribution|title = Beta distribution}}</ref>
Given a uniform prior, the posterior distribution for the probability of success {{mvar|p}} given {{mvar|n}} independent events with {{mvar|k}} observed successes is a beta distribution.<ref>{{Cite web|url=https://www.statlect.com/probability-distributions/beta-distribution|title = Beta distribution}}</ref>


Line 477: Line 473:
== History ==
== History ==


This distribution was derived by [[Jacob Bernoulli]]. He considered the case where {{math|1=''p'' = ''r''/(''r'' + ''s'')}} where {{math|''p''}} is the probability of success and {{math|''r''}} and {{math|''s''}} are positive integers. [[Blaise Pascal]] had earlier considered the case where {{math|1=''p'' = 1/2}}, tabulating the corresponding binomial coefficients in what is now recognized as [[Pascal's triangle]].<ref name=":1">{{cite book|last=Katz|first=Victor|title=A History of Mathematics: An Introduction|publisher=Addison-Wesley|year=2009|isbn=978-0-321-38700-4|pages=491|chapter=14.3: Elementary Probability}}</ref>
This distribution was derived by [[Jacob Bernoulli]]. He considered the case where {{math|1=''p'' = ''r''/(''r'' + ''s'')}} where {{mvar|p}} is the probability of success and {{mvar|r}} and {{mvar|s}} are positive integers. [[Blaise Pascal]] had earlier considered the case where {{math|1=''p'' = 1/2}}, tabulating the corresponding binomial coefficients in what is now recognized as [[Pascal's triangle]].<ref name=":1">{{cite book|last=Katz|first=Victor|title=A History of Mathematics: An Introduction|publisher=Addison-Wesley|year=2009|isbn=978-0-321-38700-4|pages=491|chapter=14.3: Elementary Probability}}</ref>


== See also ==
== See also ==
Line 488: Line 484:
* [[Statistical mechanics]]
* [[Statistical mechanics]]
* [[Piling-up lemma]], the resulting probability when [[XOR]]-ing independent Boolean variables
* [[Piling-up lemma]], the resulting probability when [[XOR]]-ing independent Boolean variables
== Notes ==
{{reflist|group=note}}


== References ==
== References ==
{{reflist|colwidth=30em}}
{{reflist|colwidth=30em}}
{{reflist|group=note}}


== Further reading ==
== Further reading ==

Latest revision as of 11:28, 29 October 2025

Related distributions

Sums of binomials

If Template:Math and Template:Math are independent binomial variables with the same probability Template:Mvar, then Template:Math is again a binomial variable; its distribution is Template:Math:[28] P(Z=k)=i=0k[(ni)pi(1p)ni][(mki)pki(1p)mk+i]=(n+mk)pk(1p)n+mk

A Binomial distributed random variable Template:Math can be considered as the sum of Template:Mvar Bernoulli distributed random variables. So the sum of two Binomial distributed random variables Template:Math and Template:Math is equivalent to the sum of Template:Math Bernoulli distributed random variables, which means Template:Math. This can also be proven directly using the addition rule.

However, if Template:Mvar and Template:Mvar do not have the same probability Template:Mvar, then the variance of the sum will be smaller than the variance of a binomial variable distributed as Template:Math.

Poisson binomial distribution

The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of Template:Mvar independent non-identical Bernoulli trials Template:Math.[29]

Ratio of two binomial distributions

This result was first derived by Katz and coauthors in 1978.[30]

Let Template:Math and Template:Math be independent. Let Template:Math.

Then Template:Math is approximately normally distributed with mean Template:Math and variance Template:Math.

Conditional binomials

If Template:Math and Template:Math (the conditional distribution of Template:Mvar, given Template:Mvar), then Template:Mvar is a simple binomial random variable with distribution Template:Math.

For example, imagine throwing Template:Mvar balls to a basket Template:Math and taking the balls that hit and throwing them to another basket Template:Math. If Template:Mvar is the probability to hit Template:Math then Template:Math is the number of balls that hit Template:Math. If Template:Mvar is the probability to hit Template:Math then the number of balls that hit Template:Math is Template:Math and therefore Template:Math.

<templatestyles src="Template:Hidden begin/styles.css"/>

[Proof]

Since XB(n,p) and YB(X,q), by the law of total probability, Pr[Y=m]=k=mnPr[Y=mX=k]Pr[X=k]=k=mn(nk)(km)pkqm(1p)nk(1q)km Since (nk)(km)=(nm)(nmkm), the equation above can be expressed as Pr[Y=m]=k=mn(nm)(nmkm)pkqm(1p)nk(1q)km Factoring pk=pmpkm and pulling all the terms that don't depend on k out of the sum now yields Pr[Y=m]=(nm)pmqm(k=mn(nmkm)pkm(1p)nk(1q)km)=(nm)(pq)m(k=mn(nmkm)(p(1q))km(1p)nk) After substituting i=km in the expression above, we get Pr[Y=m]=(nm)(pq)m(i=0nm(nmi)(ppq)i(1p)nmi) Notice that the sum (in the parentheses) above equals (ppq+1p)nm by the binomial theorem. Substituting this in finally yields Pr[Y=m]=(nm)(pq)m(ppq+1p)nm=(nm)(pq)m(1pq)nm and thus YB(n,pq) as desired.

Bernoulli distribution

The Bernoulli distribution is a special case of the binomial distribution, where Template:Math. Symbolically, Template:Math has the same meaning as Template:Math. Conversely, any binomial distribution, Template:Math, is the distribution of the sum of Template:Mvar independent Bernoulli trials, Template:Math, each with the same probability Template:Mvar.[31]

Normal approximation

Script error: No such module "Labelled list hatnote".

File:Binomial Distribution.svg
Binomial probability mass function and normal probability density function approximation for Template:Math and Template:Math

If Template:Mvar is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to Template:Math is given by the normal distribution 𝒩(np,np(1p)), and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as Template:Mvar increases (at least 20) and is better when Template:Mvar is not near to 0 or 1.[32] Various rules of thumb may be used to decide whether Template:Mvar is large enough, and Template:Mvar is far enough from the extremes of zero or one:

  • One rule[32] is that for Template:Math the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if |12p|np(1p)=1n|1ppp1p|<0.3.

This can be made precise using the Berry–Esseen theorem.

  • A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if μ±3σ=np±3np(1p)(0,n).
This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above. n>9(1pp)andn>9(p1p).

<templatestyles src="Template:Hidden begin/styles.css"/>

[Proof]

The rule np±3np(1p)(0,n) is totally equivalent to request that np3np(1p)>0andnp+3np(1p)<n. Moving terms around yields: np>3np(1p)andn(1p)>3np(1p). Since 0<p<1, we can apply the square power and divide by the respective factors np2 and n(1p)2, to obtain the desired conditions: n>9(1pp)andn>9(p1p). Notice that these conditions automatically imply that n>9. On the other hand, apply again the square root and divide by 3, n3>1pp>0andn3>p1p>0. Subtracting the second set of inequalities from the first one yields: n3>1ppp1p>n3; and so, the desired first rule is satisfied, |1ppp1p|<n3.

  • Another commonly used rule is that both values Template:Math and Template:Math must be greater than[33][34] or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.

<templatestyles src="Template:Hidden begin/styles.css"/>

[Proof]

Assume that both values np and n(1p) are greater than 9. Since 0<p<1, we easily have that np9>9(1p)andn(1p)9>9p. We only have to divide now by the respective factors p and 1p, to deduce the alternative form of the 3-standard-deviation rule: n>9(1pp)andn>9(p1p).

The following is an example of applying a continuity correction. Suppose one wishes to calculate Template:Math for a binomial random variable Template:Mvar. If Template:Mvar has a distribution given by the normal approximation, then Template:Math is approximated by Template:Math. The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.

This approximation, known as de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large Template:Mvar are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since Template:Math is a sum of Template:Mvar independent, identically distributed Bernoulli variables with parameter Template:Mvar. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of Template:Mvar using Template:Math, the sample proportion and estimator of Template:Mvar, in a common test statistic.[35]

For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation <math>\sigma = \sqrt{\frac{p(1-p)}{n}}.</math>
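
A short simulation sketch (illustrative only; the sample size, true proportion, and number of repetitions are assumed values) shows the sample proportions clustering around p with approximately this standard deviation:

<syntaxhighlight lang="python">
import random
from math import sqrt

n, p, reps = 500, 0.35, 10_000          # assumed example values
# Each repetition asks n simulated people; record the proportion who "agree".
props = [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]

mean_hat = sum(props) / reps
sd_hat = sqrt(sum((x - mean_hat) ** 2 for x in props) / reps)
print(mean_hat, sd_hat, sqrt(p * (1 - p) / n))   # the last two values should be close
</syntaxhighlight>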

Poisson approximation

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np converges to a finite limit. Therefore, the Poisson distribution with parameter λ = np can be used as an approximation to B(n, p) of the binomial distribution if n is sufficiently large and p is sufficiently small. According to rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05 such that np ≤ 1,[36] or if n > 100 and np < 10 such that p < 0.1,[37] or if n ≥ 100 and np ≤ 10.[38][39]
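
The comparison below (a minimal sketch assuming SciPy is available; n = 1000 and p = 0.003 are illustrative values satisfying these rules of thumb) shows how closely the Poisson probabilities track the binomial ones:

<syntaxhighlight lang="python">
from scipy.stats import binom, poisson

n, p = 1000, 0.003                      # large n, small p (illustrative values)
lam = n * p                             # Poisson parameter λ = np
for k in range(8):
    # binomial pmf and its Poisson approximation, side by side
    print(k, binom.pmf(k, n, p), poisson.pmf(k, lam))
</syntaxhighlight>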

Concerning the accuracy of the Poisson approximation, see Novak,[40] ch. 4, and references therein.

Limiting distributions

Beta distribution

The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success. Mathematically, when α = k + 1 and β = n − k + 1, the beta distribution and the binomial distribution are related by a factor of n + 1: <math>\operatorname{Beta}(p; \alpha; \beta) = (n+1)B(k; n; p).</math>

Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference:[41] <math>P(p;\alpha,\beta) = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\operatorname{Beta}(\alpha,\beta)}.</math> Given a uniform prior, the posterior distribution for the probability of success p given n independent events with k observed successes is a beta distribution.[42]
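
Both relations can be checked numerically, as in the following sketch (assuming SciPy is available; n = 12, k = 4, and p = 0.3 are arbitrary example values):

<syntaxhighlight lang="python">
from scipy.stats import beta, binom

n, k, p = 12, 4, 0.3                    # arbitrary example values
# Relation between the beta pdf and the binomial pmf: Beta(p; k+1, n-k+1) = (n+1) B(k; n, p)
lhs = beta.pdf(p, k + 1, n - k + 1)
rhs = (n + 1) * binom.pmf(k, n, p)
print(lhs, rhs)                         # the two values agree

# Conjugate-prior update: a uniform prior Beta(1, 1) and k successes in n
# trials give the posterior Beta(k + 1, n - k + 1), with mean (k + 1)/(n + 2).
posterior_mean = (k + 1) / (n + 2)
print(posterior_mean)
</syntaxhighlight>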

Computational methods

Random number generation

Methods for random number generation where the marginal distribution is a binomial distribution are well-established.[43][44] One way to generate random variate samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability P(X = k) for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step.
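
A minimal sketch of this inversion approach (the function name is illustrative; in practice a library routine such as those described in the references would normally be used):

<syntaxhighlight lang="python">
import random
from math import comb

def binomial_variate(n, p):
    # Inversion sampling: accumulate the binomial pmf P(X = k) until the
    # cumulative probability exceeds a uniform random number u in (0, 1).
    u = random.random()
    cumulative = 0.0
    for k in range(n + 1):
        cumulative += comb(n, k) * p ** k * (1 - p) ** (n - k)
        if u < cumulative:
            return k
    return n                            # guard against floating-point round-off

print([binomial_variate(20, 0.3) for _ in range(5)])   # five sample draws (values vary)
</syntaxhighlight>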

History

This distribution was derived by Jacob Bernoulli. He considered the case where p = r/(r + s), where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2, tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle.[45]


References


  1. Script error: No such module "citation/CS1".
  2. Script error: No such module "citation/CS1".
  3. Script error: No such module "citation/CS1".
  4. Script error: No such module "Citation/CS1".
  5. See Proof Wiki
  6. Script error: No such module "citation/CS1".
  7. Script error: No such module "citation/CS1".
  8. Script error: No such module "citation/CS1".
  9. See also Script error: No such module "citation/CS1".
  10. Script error: No such module "Citation/CS1".
  11. Lord, Nick. (July 2010). "Binomial averages when the mean is an integer", The Mathematical Gazette 94, 331-332.
  12. a b Script error: No such module "Citation/CS1".
  13. Script error: No such module "Citation/CS1".
  14. Script error: No such module "Citation/CS1".
  15. a b Script error: No such module "Citation/CS1".
  16. Script error: No such module "citation/CS1".
  17. Script error: No such module "citation/CS1".
  18. Script error: No such module "Citation/CS1".
  19. Marko Lalovic (https://stats.stackexchange.com/users/105848/marko-lalovic), Jeffreys prior for binomial likelihood, URL (version: 2019-03-04): https://stats.stackexchange.com/q/275608
  20. Script error: No such module "Citation/CS1".
  21. a b Script error: No such module "citation/CS1".
  22. Script error: No such module "citation/CS1".
  23. Script error: No such module "citation/CS1".
  24. Script error: No such module "citation/CS1".
  25. Script error: No such module "citation/CS1".
  26. Script error: No such module "citation/CS1".
  27. Script error: No such module "citation/CS1".
  28. Script error: No such module "citation/CS1".
  29. Script error: No such module "Citation/CS1".
  30. Script error: No such module "Citation/CS1".
  31. Script error: No such module "citation/CS1".
  32. a b Script error: No such module "citation/CS1".
  33. Script error: No such module "citation/CS1".
  34. Script error: No such module "citation/CS1".
  35. NIST/SEMATECH, "7.2.4. Does the proportion of defectives meet requirements?" e-Handbook of Statistical Methods.
  36. Script error: No such module "citation/CS1".
  37. Script error: No such module "citation/CS1".
  38. a b NIST/SEMATECH, "6.3.3.1. Counts Control Charts", e-Handbook of Statistical Methods.
  39. Script error: No such module "citation/CS1".
  40. Novak, S. Y. (2011). Extreme Value Methods with Applications to Finance. London: CRC / Chapman & Hall / Taylor & Francis.
  41. Script error: No such module "citation/CS1".
  42. Script error: No such module "citation/CS1".
  43. Devroye, Luc (1986) Non-Uniform Random Variate Generation, New York: Springer-Verlag. (See especially Chapter X, Discrete Univariate Distributions)
  44. Script error: No such module "Citation/CS1".
  45. Script error: No such module "citation/CS1".
  46. Mandelbrot, B. B., Fisher, A. J., & Calvet, L. E. (1997). A multifractal model of asset returns. 3.2 The Binomial Measure is the Simplest Example of a Multifractal