Multinomial distribution: Difference between revisions

{{short description|Generalization of the binomial distribution}}
{{Probability distribution
|skewness  =
|kurtosis  =
|mgf        = <math>\left( \sum_{i=1}^k p_i e^{t_i} \right)^n</math>
|char      = <math> \left(\sum_{j=1}^k p_je^{it_j}\right)^n</math> where <math>i^2= -1</math>
|pgf = <math>\left( \sum_{i=1}^k p_i z_i \right)^n</math> for <math>(z_1,\ldots,z_k)\in\mathbb{C}^k</math>
<!-- invalid parameter |conjugate = [[Dirichlet distribution|Dirichlet]]: <math>\mathrm{Dir}(\alpha+\beta)</math> -->
|entropy = <math>\begin{align}& -\log(n!) - n\sum_{i=1}^kp_i\log(p_i) \\& + \sum_{i=1}^k\sum_{x_i=0}^n\binom{n}{x_i}p_i^{x_i}(1-p_i)^{n-x_i}\log(x_i!)\end{align}</math>
}}
In [[probability theory]], the '''multinomial distribution''' is a generalization of the [[binomial distribution]]. For example, it models the probability of counts for each side of a ''k''-sided die rolled ''n'' times. For ''n'' [[statistical independence|independent]] trials each of which leads to a success for exactly one of ''k'' categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

When ''k'' is 2 and ''n'' is 1, the multinomial distribution is the [[Bernoulli distribution]]. When ''k'' is 2 and ''n'' is greater than 1, it is the [[binomial distribution]]. When ''k'' is greater than 2 and ''n'' is 1, it is the [[categorical distribution]]. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so ''n'' determines the suffix, and ''k'' the prefix).

The [[Bernoulli distribution]] models the outcome of a single [[Bernoulli trial]]. In other words, it models whether flipping a (possibly [[Fair coin|biased]]) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The [[binomial distribution]] generalizes this to the number of heads from performing ''n'' independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of ''n'' experiments, where the outcome of each trial has a [[categorical distribution]], such as rolling a (possibly [[Loaded dice|biased]]) ''k''-sided die ''n'' times.

Let ''k'' be a fixed finite number. Mathematically, we have ''k'' possible mutually exclusive outcomes, with corresponding probabilities ''p''<sub>1</sub>, ..., ''p''<sub>''k''</sub>, and ''n'' independent trials. Since the ''k'' outcomes are mutually exclusive and one must occur, we have ''p''<sub>''i''</sub>&nbsp;≥&nbsp;0 for ''i''&nbsp;=&nbsp;1,&nbsp;...,&nbsp;''k'' and <math display="inline">\sum_{i=1}^k p_i = 1</math>. Then if the random variables ''X''<sub>''i''</sub> indicate the number of times outcome number ''i'' is observed over the ''n'' trials, the vector ''X''&nbsp;=&nbsp;(''X''<sub>1</sub>,&nbsp;...,&nbsp;''X''<sub>''k''</sub>) follows a multinomial distribution with parameters ''n'' and '''p''', where '''p'''&nbsp;=&nbsp;(''p''<sub>1</sub>,&nbsp;...,&nbsp;''p''<sub>''k''</sub>). While the trials are independent, their outcomes ''X''<sub>''i''</sub> are dependent because they must sum to ''n''.


== Definitions ==

=== Probability mass function ===
Suppose one does an experiment of extracting ''n'' balls of ''k'' different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color ''i'' (''i'' = 1, ..., ''k'') as ''X''<sub>''i''</sub>, and denote as ''p''<sub>''i''</sub> the probability that a given extraction will be of color ''i''. The [[probability mass function]] of this multinomial distribution is:


<math display="block"> \begin{align}
f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) & {} = \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) \\[1ex]
& {} = \begin{cases} { \displaystyle {n! \over x_1!\cdots x_k!}p_1^{x_1}\times\cdots\times p_k^{x_k}}, \quad &
\text{when } \sum_{i=1}^k x_i=n \\  \\
0 & \text{otherwise,} \end{cases}
\end{align}</math>

for non-negative integers ''x''<sub>1</sub>, ..., ''x''<sub>''k''</sub>.

The probability mass function can be expressed using the [[gamma function]] as:


<math display="block">f(x_1,\dots, x_{k}; p_1,\ldots, p_k) = \frac{\Gamma(\sum_i x_i + 1)}{\prod_i \Gamma(x_i+1)} \prod_{i=1}^k p_i^{x_i}.</math>


This form shows its resemblance to the [[Dirichlet distribution]], which is its [[conjugate prior]].
=== Example ===
Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes.  If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?


''Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample.  Technically speaking this is sampling without replacement, so the correct distribution is the [[Hypergeometric distribution#Multivariate hypergeometric distribution|multivariate hypergeometric distribution]], but the distributions converge as the population grows large in comparison to a fixed sample size''.<ref>{{Cite web |title=probability - multinomial distribution sampling |url=https://stats.stackexchange.com/a/335239/307588 |access-date=2022-07-28 |website=Cross Validated |language=en}}</ref>


<math display="block"> \Pr(A{=}1, B{=}2, C{=}3) = \frac{6!}{1! 2! 3!} \left(0.2^1\right) \left(0.3^2\right) \left(0.5^3\right) = 0.135 </math>
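
This calculation can be reproduced numerically; the following is a minimal sketch using SciPy (the library choice is incidental):

<syntaxhighlight lang="python">
from scipy.stats import multinomial

# Election example: n = 6 sampled voters, category probabilities 0.2, 0.3, 0.5
rv = multinomial(6, [0.2, 0.3, 0.5])
print(rv.pmf([1, 2, 3]))  # -> 0.135
</syntaxhighlight>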


== Properties ==
The multinomial distribution is normalized according to:

<math display="block">\sum_{\sum_{j=1}^k x_j=n} f(x_1,\dots,x_k; n, p_1,\dots,p_k) = 1</math>

where the sum is over all non-negative integer values of <math>x_1,\dots,x_k</math> such that <math display="inline"> \sum_{j=1}^k x_j=n </math>.


=== Expected value and variance ===
The [[Expected value|expected]] number of times the outcome ''i'' was observed over ''n'' trials is

<math display="block">\operatorname{E}(X_i) = n p_i.\,</math>

The [[covariance matrix]] is as follows. Each diagonal entry is the [[variance]] of a binomially distributed random variable, and is therefore

<math display="block">\operatorname{Var}(X_i) = n p_i(1-p_i).\,</math>

The off-diagonal entries are the [[covariance]]s:

<math display="block">\operatorname{Cov}(X_i,X_j) = - n p_i p_j\,</math>

for ''i'', ''j'' distinct.
The entries of the corresponding [[Correlation matrix#Correlation matrices|correlation matrix]] are

<math display="block">\begin{align}
\rho(X_i,X_i) &= 1 \\[1ex]
\rho(X_i,X_j) &= \frac{\operatorname{Cov}(X_i,X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} \\
&= \frac{-p_i  p_j}{\sqrt{p_i(1-p_i) p_j(1-p_j)}} \\
&= -\sqrt{\frac{p_i  p_j}{(1-p_i)(1-p_j)}}.
\end{align}</math>


Note that the number of trials ''n'' drops out of this expression.
The [[Support (mathematics)|support]] of the multinomial distribution is the set

<math display="block">\left\{(n_1,\dots,n_k) \in \mathbb{N}^k \mid  n_1 + \cdots + n_k = n\right\}.</math>

Its number of elements is

<math display="block">\binom{n+k-1}{k-1}.</math>
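
For small cases this count can be verified by direct enumeration; a brief sketch (the values of ''n'' and ''k'' are arbitrary):

<syntaxhighlight lang="python">
from itertools import product
from math import comb

n, k = 5, 3
# All k-tuples of non-negative integers summing to n (the support)
support = [x for x in product(range(n + 1), repeat=k) if sum(x) == n]
print(len(support), comb(n + k - 1, k - 1))  # both are 21
</syntaxhighlight>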


=== Matrix notation ===
In matrix notation,
<math display="block">\operatorname{E}(\mathbf{X}) = n \mathbf{p},\,</math>

and
<math display="block">\operatorname{Var}(\mathbf{X}) = n \lbrace \operatorname{diag}(\mathbf{p}) - \mathbf{p} \mathbf{p}^{\rm T} \rbrace ,\,</math>

with {{math|'''p'''<sup>T</sup>}} = the [[Row and column vectors|row vector]] transpose of the column vector {{math|'''p'''}}.
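
These matrix expressions are straightforward to evaluate numerically; the following sketch (with arbitrary ''n'' and '''p''') also compares them with the empirical moments of a simulated sample:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, np.array([0.2, 0.3, 0.5])

mean = n * p                              # E(X) = n p
cov = n * (np.diag(p) - np.outer(p, p))   # Var(X) = n {diag(p) - p p^T}

sample = rng.multinomial(n, p, size=200_000)
print(mean, sample.mean(axis=0))          # theoretical vs empirical means
print(cov)
print(np.cov(sample.T))                   # empirical covariance, close to the matrix above
</syntaxhighlight>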


=== Visualization ===

==== Asymptotics ====


By [[Stirling's formula]], in the limit <math>n, x_1, \dots, x_k \to \infty</math>, we have<math display="block">\ln \binom{n}{x_1, \dots, x_k} + \sum_{i=1}^k x_i\ln p_i = -n D_\text{KL}(\hat p \| p) - \frac{k-1}{2} \ln(2\pi n) - \frac 12 \sum_{i=1}^k \ln(\hat p_i)  + o(1)</math>where relative frequencies <math>\hat p_i = x_i/n</math> in the data can be interpreted as probabilities from the empirical distribution <math>\hat p</math>, and <math>D_\text{KL}</math> is the [[Kullback–Leibler divergence]].


This formula can be interpreted as follows.

Consider <math>\Delta_k</math>, the space of all possible distributions over the categories <math>\{1, 2, \dots, k\}</math>. It is a [[simplex]]. After <math>n</math> independent samples from the categorical distribution <math>p</math> (which is how we construct the multinomial distribution), we obtain an empirical distribution <math>\hat p</math>.

By the asymptotic formula, the probability that the empirical distribution <math>\hat p</math> deviates from the actual distribution <math>p</math> decays exponentially as we sample more data, at a rate of <math> D_\text{KL}(\hat p \| p)</math>. The more experiments and the more different <math>\hat p</math> is from <math>p</math>, the less likely it is to see such an empirical distribution.
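
For moderate <math>n</math> the two sides of the expansion above can be compared directly; a brief numerical sketch (the counts are illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multinomial
from scipy.special import rel_entr

p = np.array([0.2, 0.3, 0.5])
n = 10_000
x = np.array([1_900, 3_100, 5_000])   # observed counts, summing to n
p_hat = x / n
k = len(p)

exact = multinomial.logpmf(x, n, p)   # ln C(n; x_1,...,x_k) + sum_i x_i ln p_i
approx = (-n * rel_entr(p_hat, p).sum()
          - (k - 1) / 2 * np.log(2 * np.pi * n)
          - 0.5 * np.log(p_hat).sum())
print(exact, approx)                  # the two values agree up to the o(1) term
</syntaxhighlight>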


If <math>A</math> is a closed subset of <math>\Delta_k</math>, then by dividing up <math>A</math> into pieces, and reasoning about the growth rate of <math>\Pr(\hat p \in A_\epsilon)</math> on each piece <math>A_\epsilon</math>, we obtain [[Sanov's theorem]], which states that
<math display="block">\lim_{n \to \infty} \frac{1}{n} \ln \Pr(\hat p \in A)  = - \inf_{\hat p \in A} D_\text{KL}(\hat p \| p)</math>


==== Concentration at large ''n'' ====


Due to the [[exponential decay]], at large <math>n</math>, almost all the probability mass is concentrated in a small neighborhood of <math>p</math>. In this small neighborhood, we can take the first nonzero term in the [[Taylor series|Taylor expansion]] of <math>D_\text{KL}</math>, to obtain<math display="block">\begin{align}
\ln \binom{n}{x_1, \cdots, x_k} p_1^{x_1} \cdots p_k^{x_k} &\approx -\frac n2 \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i} \\
&= -\frac 12 \sum_{i=1}^k \frac{(x_i - n p_i)^2}{n p_i}
\end{align}</math>
This resembles the Gaussian distribution, which suggests the following theorem:


'''Theorem.''' At the <math>n \to \infty</math> limit, <math>n \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i} = \sum_{i=1}^k \frac{(x_i - n p_i)^2}{n p_i}</math> [[converges in distribution]] to the [[chi-squared distribution]] <math>\chi^2(k-1)</math>.
[[File:Convergence of multinomial distribution to the gaussian distribution.webm|thumb|339x339px|If we sample from the multinomial distribution <math>\mathrm{Multinomial}(n; 0.2, 0.3, 0.5)</math>, and plot the heatmap of the samples within the 2-dimensional simplex (here shown as a black triangle), we notice that as <math>n \to \infty</math>, the distribution converges to a Gaussian around the point <math>(0.2, 0.3, 0.5)</math>, with the contours converging in shape to ellipses, with radii converging as <math>1/\sqrt n</math>. Meanwhile, the separation between the discrete points converges as <math>1/n</math>, and so the discrete multinomial distribution converges to a continuous Gaussian distribution.]]
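
A simulation illustrating this convergence (the parameter choices are arbitrary) can be sketched as follows:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])
n, reps, k = 10_000, 50_000, 3

counts = rng.multinomial(n, p, size=reps)              # reps independent multinomial samples
stat = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)   # Pearson statistic for each sample

# Empirical quantile vs the chi-squared(k - 1) quantile
print(np.quantile(stat, 0.95), chi2.ppf(0.95, df=k - 1))
</syntaxhighlight>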


{{hidden begin|style=width:100%|ta1=center|border=1px #aaa solid|title=[Proof]}}
The space of all distributions over categories <math>\{1, 2, \ldots, k\}</math> is a [[simplex]]: <math>\Delta_{k} = \left\{(y_1, \ldots, y_k)\colon y_1, \ldots, y_k \geq 0, \sum_i y_i = 1\right\}</math>, and the set of all possible empirical distributions after <math>n</math> experiments is a subset of the simplex: <math>\Delta_{k, n} = \left\{(x_1/n, \ldots, x_k/n)\colon x_1, \ldots, x_k \in \N, \sum_i x_i = n\right\}</math>. That is, it is the intersection between <math>\Delta_k</math> and the lattice <math>(\Z^k)/n</math>.


As <math>n</math> increases, most of the probability mass is concentrated in a subset of <math>\Delta_{k, n}</math> near <math>p</math>, and the probability distribution near <math>p</math> becomes well-approximated by <math display="block">\binom{n}{x_1, \cdots, x_k} p_1^{x_1} \cdots p_k^{x_k} \approx e^{-\frac{n}{2} \sum_i \frac{\left(\hat p_i - p_i\right)^2}{p_i}}</math>From this, we see that the subset upon which the mass is concentrated has radius on the order of <math>1/\sqrt n</math>, but the points in the subset are separated by distance on the order of <math>1/n</math>, so at large <math>n</math>, the points merge into a continuum.
To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of <math>\Delta_{k, n}</math> in <math>\Delta_k</math>. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density <math>\rho(\hat p) = C e^{-\frac n2 \sum_i \frac{\left(\hat p_i - p_i\right)^2}{p_i}}</math>, where <math>C</math> is a constant.


Finally, since the simplex <math>\Delta_k</math> is not all of <math>\R^k</math>, but lies within a <math>(k-1)</math>-dimensional plane, we obtain the desired result.
{{hidden end}}


==== Conditional concentration at large ''n'' ====
The above concentration phenomenon can be easily generalized to the case where we condition upon independent constraints. This is the theoretical justification for [[Pearson's chi-squared test]].


'''Theorem.''' Given frequencies <math>x_i\in\mathbb N</math> observed in a dataset with <math>n</math> points, we impose <math>\ell + 1</math> [[Linear independence|independent linear]] constraints <math display="block">\begin{cases}
\sum_i \hat p_i = 1, \\
\sum_i a_{1i} \hat p_i = b_1, \\
\sum_i a_{2i} \hat p_i = b_2, \\
\cdots, \\
\sum_i a_{\ell i} \hat p_i = b_{\ell}
\end{cases} </math>(notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that the empirical <math>\hat p_i=x_i/n</math> satisfy all these constraints simultaneously. Let <math>q</math> denote the <math>I</math>-projection of the prior distribution <math>p</math> on the sub-region of the simplex allowed by the linear constraints. At the <math>n \to \infty</math> limit, sampled counts <math>n \hat p_i</math> from the multinomial distribution '''conditional on''' the linear constraints are governed by <math>2n D_\text{KL}(\hat p \vert\vert q) \approx n \sum_i \frac{(\hat p_i - q_i)^2}{q_i}</math>, which [[converges in distribution]] to the [[chi-squared distribution]] <math>\chi^2(k-1-\ell)</math>.

{{hidden begin|style=width:100%|ta1=center|border=1px #aaa solid|title=[Proof]}}

An analogous proof applies in this Diophantine problem of coupled linear equations in count variables <math>n \hat p_i</math>,<ref>{{cite arXiv|last1=Loukas|first1=Orestis|last2=Chung|first2=Ho Ryun|date=2023|title=Total Empiricism: Learning from Data|eprint=2311.08315|class=math.ST}}</ref> but this time <math>\Delta_{k, n}</math> is the intersection of <math>(\Z^k)/n</math> with <math>\Delta_k </math> and <math>\ell </math> hyperplanes, all linearly independent, so the probability density <math>\rho(\hat p)</math> is restricted to a <math>(k-\ell-1)</math>-dimensional plane. In particular, expanding the KL divergence <math>D_\text{KL}(\hat p\vert\vert p)</math> around its minimum <math>q</math> (the <math>I</math>-projection of <math>p</math> on <math>\Delta_{k, n}</math>) in the constrained problem ensures by the Pythagorean theorem for <math>I</math>-divergence that any constant and linear term in the counts <math>n \hat p_i</math> vanishes from the conditional probability to multinomially sample those counts.

Notice that, by definition, every one of <math>\hat p_1, \hat p_2, \dots, \hat p_k</math> must be a rational number, whereas <math>p_1, p_2, \dots, p_k</math> may be chosen from any real number in <math>[0, 1]</math> and need not satisfy the Diophantine system of equations. Only asymptotically as <math>n\rightarrow\infty</math> can the <math>\hat p_i</math>'s be regarded as probabilities over <math>[0, 1]</math>.

{{hidden end}}

Away from empirically observed constraints <math>b_1,\ldots,b_\ell</math> (such as moments or prevalences) the theorem can be generalized:

'''Theorem.'''
* Given functions <math>f_1, \dots, f_\ell</math>, such that they are continuously differentiable in a neighborhood of <math>p</math>, and the vectors <math>(1, 1, \dots, 1), \nabla f_1(p), \dots, \nabla f_\ell(p)</math> are linearly independent;
* given sequences <math>\epsilon_1(n), \dots, \epsilon_\ell(n)</math>, such that asymptotically <math>\frac 1n \ll \epsilon_i(n) \ll \frac{1}{\sqrt n}</math> for each <math>i \in \{1, \dots, \ell\}</math>;
* then for the multinomial distribution '''conditional on''' constraints <math>f_1(\hat p) \in [f_1(p)- \epsilon_1(n), f_1(p) + \epsilon_1(n)], \dots, f_\ell(\hat p) \in [f_\ell(p)- \epsilon_\ell(n), f_\ell(p) + \epsilon_\ell(n)]</math>, we have the quantity <math>n \sum_i \frac{(\hat p_i - p_i)^2}{p_i} = \sum_i \frac{(x_i - n p_i)^2}{n p_i}</math> converging in distribution to <math>\chi^2(k-1-\ell)</math> at the <math>n \to \infty</math> limit.

This theorem can be shown by starting with the previous case, then taking the conditional on the constraints.

In the case that all <math>\hat p_i</math> are equal, this reduces to the concentration of entropies around the [[Maximum entropy probability distribution|maximum entropy]].<ref>{{cite arXiv|last1=Loukas|first1=Orestis|last2=Chung|first2=Ho Ryun|date=April 2022|title=Categorical Distributions of Maximum Entropy under Marginal Constraints|class=hep-th |eprint=2204.03406}}</ref><ref>{{cite arXiv|last1=Loukas|first1=Orestis|last2=Chung|first2=Ho Ryun|date=June 2022|title=Entropy-based Characterization of Modeling Constraints|class=stat.ME |eprint=2206.14105}}</ref>


==Related distributions==
The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.

Let <math>q</math> denote a theoretical multinomial distribution and let <math>p</math> be a true underlying distribution. The distributions <math>p</math> and <math>q</math> are considered equivalent if <math>d(p,q) < \varepsilon</math> for a distance <math>d</math> and a tolerance parameter <math>\varepsilon>0</math>. The equivalence test problem is <math>H_0 = \{d(p,q) \geq \varepsilon\}</math> versus <math>H_1 = \{d(p,q) < \varepsilon\}</math>. The true underlying distribution <math>p</math> is unknown. Instead, the counting frequencies <math>p_n</math> are observed, where <math>n</math> is a sample size. An equivalence test uses <math>p_n</math> to reject <math>H_0</math>. If <math>H_0</math> can be rejected then the equivalence between <math>p</math> and <math>q</math> is shown at a given significance level. The equivalence test for the Euclidean distance can be found in the textbook of Wellek (2010).<ref>{{Cite book|title=Testing statistical hypotheses of equivalence and noninferiority|last=Wellek|first=Stefan|publisher=Chapman and Hall/CRC|year=2010|isbn=978-1439808184}}</ref> The equivalence test for the total variation distance is developed in Ostrovski (2017).<ref>{{cite journal|last1=Ostrovski|first1=Vladimir|date=May 2017|title=Testing equivalence of multinomial distributions|journal=Statistics & Probability Letters|volume=124|pages=77–82|doi=10.1016/j.spl.2017.01.004|s2cid=126293429}}[http://dx.doi.org/10.1016/j.spl.2017.01.004 Official web link (subscription required)]. [https://www.researchgate.net/publication/312481284_Testing_equivalence_of_multinomial_distributions Alternate, free web link].</ref> The exact equivalence test for the specific cumulative distance is proposed in Frey (2009).<ref>{{cite journal|last1=Frey|first1=Jesse|date=March 2009|title=An exact multinomial test for equivalence|journal=The Canadian Journal of Statistics|volume=37|pages=47–59|doi=10.1002/cjs.10000|s2cid=122486567 }}[http://www.jstor.org/stable/25653460 Official web link (subscription required)].</ref>

The distance between the true underlying distribution <math>p</math> and a family of the multinomial distributions <math>\mathcal{M}</math> is defined by <math>d(p, \mathcal{M})=\min_{h\in\mathcal{M}}d(p,h)  </math>. Then the equivalence test problem is given by <math>H_0=\{d(p,\mathcal{M})\geq \varepsilon\}</math> and <math>H_1=\{d(p,\mathcal{M})< \varepsilon\}</math>. The distance <math>d(p,\mathcal{M})</math> is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018).<ref>{{cite journal|last1=Ostrovski|first1=Vladimir|date=March 2018|title=Testing equivalence to families of multinomial distributions with application to the independence model|journal=Statistics & Probability Letters|volume=139|pages=61–66|doi=10.1016/j.spl.2018.03.014|s2cid=126261081}}[https://doi.org/10.1016/j.spl.2018.03.014 Official web link (subscription required)]. [https://www.researchgate.net/publication/324124605_Testing_equivalence_to_families_of_multinomial_distributions_with_application_to_the_independence_model Alternate, free web link].</ref>
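
As a concrete instance of such a numerical optimization, the following sketch (hypothetical cell probabilities, with the total variation distance chosen as <math>d</math>) computes the distance from <math>p</math> to the independence family of a two-by-two table:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

p = np.array([[0.30, 0.20],
              [0.10, 0.40]])                 # hypothetical 2x2 cell probabilities

def tv_to_independence(theta):
    a1, b1 = theta                            # marginal probabilities of row 1 and column 1
    q = np.outer([a1, 1 - a1], [b1, 1 - b1])  # a member of the independence family
    return 0.5 * np.abs(p - q).sum()          # total variation distance d(p, q)

res = minimize(tv_to_independence, x0=[0.5, 0.5],
               bounds=[(0, 1), (0, 1)], method="Powell")
print(res.fun)                                # d(p, M), the minimised distance
</syntaxhighlight>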
  | location = Hoboken, N.J
  | pages = 760
}}</ref>{{rp|p=378}}<ref>{{Cite journal
  | last1 = Newcombe
  | first1 = R. G.
  | title = Interval estimation for the difference between independent proportions: comparison of eleven methods
  | journal = Statistics in Medicine
  | year = 1998
  | volume = 17
  | issue = 8
  | pages = 873–890
  | doi = 10.1002/(SICI)1097-0258(19980430)17:8<873::AID-SIM779>3.0.CO;2-I
| pmid = 9595617
  }}</ref>


<math display="block">
\widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j) = \sqrt{\frac{\left(\hat{p}_i + \hat{p}_j\right) - \left(\hat{p}_i - \hat{p}_j\right)^2}{n}}
</math>


For a <math>100(1 - \alpha)\%</math> [[Confidence interval#Approximate confidence intervals|approximate confidence interval]], the [[margin of error]] may incorporate the appropriate quantile from the [[standard normal distribution]], as follows:


<math display="block">(\hat{p}_i - \hat{p}_j) \pm z_{\alpha/2} \cdot \widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j)</math>
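
As a worked sketch (the counts are illustrative), this interval can be computed as:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

x_i, x_j, n = 35, 50, 200                  # illustrative counts out of n trials
p_i, p_j = x_i / n, x_j / n

se = np.sqrt(((p_i + p_j) - (p_i - p_j) ** 2) / n)
z = norm.ppf(0.975)                        # quantile for a 95% interval
d = p_i - p_j
print(d - z * se, d + z * se)
</syntaxhighlight>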


{{hidden begin|style=width:100%|ta1=center|border=1px #aaa solid|title=[Proof]}}


The SE can be constructed using the calculus of [[Variance#Addition and multiplication by a constant|the variance of the difference of two random variables]]:
<math display="block">\begin{align}
\widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j) & = \sqrt{\frac{\hat{p}_i (1 - \hat{p}_i)}{n} + \frac{\hat{p}_j (1 - \hat{p}_j)}{n} - 2\left(-\frac{\hat{p}_i \hat{p}_j}{n}\right)} \\
& = \sqrt{\frac{1}{n} \left(\hat{p}_i + \hat{p}_j - \hat{p}_i^2 - \hat{p}_j^2 + 2\hat{p}_i \hat{p}_j\right)} \\
& = \sqrt{\frac{(\hat{p}_i + \hat{p}_j) - (\hat{p}_i - \hat{p}_j)^2}{n}}
\end{align}</math>


{{hidden end}}


A modification which includes a [[continuity correction]] adds <math>\frac{1}{n}</math> to the margin of error as follows:<ref name=pass_sample_size_software>{{Cite web|url=https://www.ncss.com/wp-content/themes/ncss/pdf/Procedures/PASS/Confidence_Intervals_for_the_Difference_Between_Two_Correlated_Proportions.pdf | title=Confidence Intervals for the Difference Between Two Correlated Proportions |publisher=NCSS|access-date=2022-03-22}}</ref>{{rp|pp=102–103}}


<math display="block">(\hat{p}_i - \hat{p}_j) \pm \left(z_{\alpha/2} \cdot \widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j) + \frac{1}{n}\right)</math>


Another alternative is to rely on a Bayesian estimator using [[Jeffreys prior]] which leads to using a [[dirichlet distribution]], with all parameters being equal to 0.5, as a prior. The posterior will be the calculations from above, but after adding 1/2 to each of the ''k'' elements, leading to an overall increase of the sample size by <math>\frac{k}{2}</math>. This was originally developed for a multinomial distribution with four events, and is known as ''wald+2'', for analyzing matched pairs data (see the next section for more details).<ref name=Agresti2005>{{Cite journal
This leads to the following SE:

<math display="block">
\widehat{\operatorname{SE}}{(\hat{p}_i - \hat{p}_j)}_{wald+\frac{k}{2}} =
\sqrt{\frac{\left(\hat{p}_i + \hat{p}_j + \frac{1}{n}\right)\frac{n}{n+\frac{k}{2}} -
\left(\hat{p}_i - \hat{p}_j\right)^2 \left(\frac{n}{n+\frac{k}{2}}\right)^2 }{n+\frac{k}{2}}}
</math>


{{hidden begin|style=width:100%|ta1=center|border=1px #aaa solid|title=[Proof]}}
<math display="block">
\begin{align}
\widehat{\operatorname{SE}}{(\hat{p}_i - \hat{p}_j)}_{wald+\frac{k}{2}} & = \sqrt{\frac{\left(\frac{x_i+1/2}{n+\frac{k}{2}} + \frac{x_j+1/2}{n+\frac{k}{2}}\right) - \left(\frac{x_i+1/2}{n+\frac{k}{2}} - \frac{x_j+1/2}{n+\frac{k}{2}}\right)^2}{n+\frac{k}{2}}} \\
& = \sqrt{\frac{\left(\frac{x_i}{n} + \frac{x_j}{n} + \frac{1}{n}\right)\frac{n}{n+\frac{k}{2}} - \left(\frac{x_i}{n} - \frac{x_j}{n}\right)^2 \left(\frac{n}{n+\frac{k}{2}}\right)^2 }{n+\frac{k}{2}}} \\
& = \sqrt{\frac{\left(\hat{p}_i + \hat{p}_j + \frac{1}{n}\right)\frac{n}{n+\frac{k}{2}} - \left(\hat{p}_i - \hat{p}_j\right)^2 \left(\frac{n}{n+\frac{k}{2}}\right)^2 }{n+\frac{k}{2}}}
\end{align}
</math>
{{hidden end}}
This can then be plugged into the original Wald formula as follows:


<math display="block">\left(\hat{p}_i - \hat{p}_j\right) \frac{n}{n+\frac{k}{2}} \pm z_{\alpha/2} \cdot \widehat{\operatorname{SE}}{(\hat{p}_i - \hat{p}_j)}_{wald+\frac{k}{2}}</math>
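
A sketch of this adjusted interval with illustrative counts (here with <math>k = 4</math> categories):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

x_i, x_j, n, k = 35, 50, 200, 4            # illustrative counts, n trials, k categories
p_i, p_j = x_i / n, x_j / n

shrink = n / (n + k / 2)                   # effect of adding k/2 pseudo-observations
se = np.sqrt(((p_i + p_j + 1 / n) * shrink
              - (p_i - p_j) ** 2 * shrink ** 2) / (n + k / 2))
z = norm.ppf(0.975)                        # quantile for a 95% interval
centre = (p_i - p_j) * shrink
print(centre - z * se, centre + z * se)
</syntaxhighlight>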


== Occurrence and applications ==
And the difference we want to build confidence intervals for is:


<math display="block">p_{*1} - p_{1*} = \frac{F_{11} + F_{01}}{N} - \frac{F_{11} + F_{10}}{N} = \frac{F_{01}}{N} - \frac{F_{10}}{N} = p_{01} - p_{10}</math>


Hence, building a confidence interval for the marginal positive proportions (<math>p_{*1} - p_{1*}</math>) is the same as building a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (<math>p_{01} - p_{10}</math>).
Calculating a [[p-value]] for such a difference is known as [[McNemar's test]]. A confidence interval for it can be constructed using the methods described above for [[Multinomial distribution#Confidence intervals for the difference of two proportions|Confidence intervals for the difference of two proportions]].


The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals, provided above, can be written as:<ref name=pass_sample_size_software />{{rp|pp=102–103}}


<math display="block">\begin{align}
\widehat{\operatorname{SE}}(p_{*1} - p_{1*}) &= \widehat{\operatorname{SE}}(p_{01} - p_{10}) \\[1ex]
&= \frac{\sqrt{n \left(f_{10} + f_{01}\right) - \left(f_{10} - f_{01}\right)^2}}{n\sqrt{n}}
\end{align}</math>


Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application.<ref name=pass_sample_size_software />
  | issue = 4
  | pages = 280–288
| doi = 10.1080/00031305.2000.10474560
  }}</ref>) in which each cell frequency had an extra <math>\frac{1}{2}</math> added to it.<ref name=Agresti2005/> This leads to the ''Wald+2'' confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior a [[dirichlet distribution]] with all parameters being equal to 0.5 (which is, in fact, the [[Jeffreys prior]]). The ''+2'' in the name ''wald+2'' can now be taken to mean that, in the context of a two-by-two contingency table (a multinomial distribution with four possible events), adding half an observation to each of the four cells translates to an overall addition of 2 observations (due to the prior).


This leads to the following modified SE for the case of matched pairs data:


<math display="block">
\widehat{\operatorname{SE}}(p_{*1} - p_{1*})  = \frac{\sqrt{\left(n+2\right) \left(f_{10} + f_{01} + 1\right) - \left(f_{10} - f_{01}\right)^2}}{\left(n+2\right) \sqrt{n+2}}
</math>


This can then be plugged into the original Wald formula as follows:


<math display="block">\left(p_{*1} - p_{1*}\right) \frac{n}{n+2} \pm z_{\alpha/2} \cdot \widehat{\operatorname{SE}}(p_{*1} - p_{1*})_{wald+2}</math>


Other modifications include ''Bonett and Price’s Adjusted Wald'', and ''Newcombe’s Score''.
First, reorder the parameters <math>p_1, \ldots, p_k</math> such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable ''X'' from a uniform (0,&nbsp;1) distribution. The resulting outcome is the component


<math display="block">j = \min \left\{ j' \in \{1,\dots,k\}\colon \left(\sum_{i=1}^{j'} p_i\right) - X \geq 0 \right\}.</math>


{''X''<sub>''j''</sub> = 1, ''X''<sub>''k''</sub> = 0 for ''k''&nbsp;≠&nbsp;''j'' } is one observation from the multinomial distribution with <math>p_1, \ldots, p_k</math> and ''n''&nbsp;=&nbsp;1.  A sum of independent repetitions of this experiment is an observation from a multinomial distribution with ''n'' equal to the number of such repetitions.
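
A vectorized sketch of this procedure (the parameters are illustrative) using NumPy:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng()
p = np.array([0.5, 0.3, 0.2])         # already in descending order
n = 1_000

cdf = np.cumsum(p)
u = rng.random(n)                     # one uniform(0, 1) draw per trial
j = np.searchsorted(cdf, u)           # smallest index j with cumulative sum >= u (0-indexed)
counts = np.bincount(j, minlength=len(p))
print(counts)                         # a single draw from Multinomial(n; p)
</syntaxhighlight>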


=== Sampling using repeated conditional binomial samples ===
Given the parameters <math>p_1, p_2, \ldots, p_k</math> and a total for the sample <math>n</math> such that <math display="inline">\sum_{i=1}^k X_i = n </math>, it is possible to sample sequentially for the number in an arbitrary state <math>X_i </math>, by partitioning the state space into <math>i </math> and not-<math>i </math>, conditioned on any prior samples already taken, repeatedly.


==== Algorithm: Sequential conditional binomial sampling ====
     rho = rho - p[i]
X[k] = S
</syntaxhighlight>
Heuristically, each application of the binomial sample reduces the available number to sample from and the conditional probabilities are likewise updated to ensure logical consistency.<ref>{{Cite web |date=2020-05-05 |title=11.5: The Multinomial Distribution |url = https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.05%3A_The_Multinomial_Distribution |access-date=2023-09-13 |website=Statistics LibreTexts |language=en}}</ref>
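
Since the listing above is abridged, the following is a self-contained sketch of the same scheme (the variable names follow the fragment shown; the implementation details are not from the original listing):

<syntaxhighlight lang="python">
import numpy as np

def sample_multinomial(n, p, seed=None):
    """Draw one multinomial observation by repeated conditional binomial sampling."""
    rng = np.random.default_rng(seed)
    k = len(p)
    X = np.zeros(k, dtype=int)
    S = n                                   # trials not yet assigned to a state
    rho = 1.0                               # probability mass not yet assigned
    for i in range(k - 1):
        if rho > 0:
            # Of the S remaining trials, the number landing in state i is
            # binomial with the conditional probability p[i] / rho.
            X[i] = rng.binomial(S, min(p[i] / rho, 1.0))
        S -= X[i]
        rho -= p[i]
    X[k - 1] = S                            # all remaining trials fall in the last state
    return X

print(sample_multinomial(1_000, np.array([0.2, 0.3, 0.5])))
</syntaxhighlight>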


== Software implementations ==
* The ''MultinomialCI'' R package allows the computation of simultaneous confidence intervals for the probabilities of a multinomial distribution given a set of observations.<ref>{{Cite web
| url = https://CRAN.R-project.org/package=MultinomialCI
| title = MultinomialCI - Confidence Intervals for Multinomial Proportions
| date = 11 May 2021
| publisher = CRAN
| access-date = 2024-03-23
}}</ref>



Latest revision as of 06:36, 24 December 2025

In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so n determines the suffix, and k the prefix).

The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a (possibly biased) k-sided die n times.

Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur we have pi ≥ 0 for i = 1, ..., k and <math display="inline">\sum_{i=1}^k p_i = 1</math>. Then if the random variables Xi indicate the number of times outcome number i is observed over the n trials, the vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk). While the trials are independent, their outcomes Xi are dependent because they must sum to n.

Definitions

Probability mass function

Suppose one does an experiment of extracting n balls of k different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color i (i = 1, ..., k) as Xi, and denote as pi the probability that a given extraction will be in color i. The probability mass function of this multinomial distribution is:

<math display="block">
f(x_1,\ldots,x_k; n, p_1,\ldots,p_k) = \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) =
\begin{cases}
\dfrac{n!}{x_1! \cdots x_k!} \, p_1^{x_1} \times \cdots \times p_k^{x_k}, & \text{when } \sum_{i=1}^k x_i = n \\[6pt]
0, & \text{otherwise,}
\end{cases}
</math>

for non-negative integers x1, ..., xk.

The probability mass function can be expressed using the gamma function as:

<math display="block">
f(x_1,\dots,x_k; p_1,\ldots,p_k) = \frac{\Gamma\!\left(\sum_i x_i + 1\right)}{\prod_i \Gamma(x_i + 1)} \prod_{i=1}^k p_i^{x_i}.
</math>

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.

Example

Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?

Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.[1]

<math display="block">
\Pr(A=1, B=2, C=3) = \frac{6!}{1!\, 2!\, 3!} (0.2^1)(0.3^2)(0.5^3) = 0.135
</math>
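
The figure 0.135 is easy to reproduce; the following sketch (assuming SciPy is available) evaluates the probability both directly from the formula and with scipy.stats.multinomial:

<syntaxhighlight lang="python">
# Minimal check of the election example, assuming SciPy is available.
from math import factorial

from scipy.stats import multinomial

p = [0.2, 0.3, 0.5]   # vote shares for candidates A, B, C
x = [1, 2, 3]         # desired counts in a sample of n = 6 voters
n = sum(x)

# Direct evaluation: multinomial coefficient times the probabilities.
coef = factorial(n) // (factorial(x[0]) * factorial(x[1]) * factorial(x[2]))
prob_direct = coef * p[0]**x[0] * p[1]**x[1] * p[2]**x[2]

# Same value via scipy.stats.multinomial.
prob_scipy = multinomial.pmf(x, n=n, p=p)

print(prob_direct, prob_scipy)   # both print 0.135 (up to floating point)
</syntaxhighlight>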

Properties

Normalization

The multinomial distribution is normalized according to:

<math display="block">
\sum_{\sum_{j=1}^k x_j = n} f(x_1,\ldots,x_k; n, p_1,\ldots,p_k) = 1
</math>

where the sum is over all non-negative integer values of x1, ..., xk such that <math display="inline">\sum_{j=1}^k x_j = n</math>.

Expected value and variance

The expected number of times the outcome i was observed over n trials is

<math display="block">\operatorname{E}(X_i) = n p_i.</math>

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

<math display="block">\operatorname{Var}(X_i) = n p_i (1 - p_i).</math>

The off-diagonal entries are the covariances:

<math display="block">\operatorname{Cov}(X_i, X_j) = -n p_i p_j</math>

for i, j distinct.

All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.

When these expressions are combined into a matrix with (i, j) element <math>\operatorname{cov}(X_i, X_j)</math>, the result is a k × k positive-semidefinite covariance matrix of rank k − 1. In the special case where k = n and where the pi are all equal, the covariance matrix is the centering matrix.

The entries of the corresponding correlation matrix are

<math display="block">
\rho(X_i, X_i) = 1, \qquad
\rho(X_i, X_j) = \frac{\operatorname{Cov}(X_i, X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}}
= \frac{-p_i p_j}{\sqrt{p_i(1-p_i)\, p_j(1-p_j)}}
= -\sqrt{\frac{p_i p_j}{(1-p_i)(1-p_j)}}.
</math>

Note that the number of trials n drops out of this expression.

Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.

The support of the multinomial distribution is the set

<math display="block">\{(n_1,\dots,n_k) \in \mathbb{N}^k \mid n_1 + \cdots + n_k = n\}.</math>

Its number of elements is

<math display="block">\binom{n+k-1}{k-1}.</math>
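
As a quick concrete check of this count, the sketch below (plain Python; the values of n and k are illustrative) enumerates the support for a small case and compares its size with the binomial coefficient:

<syntaxhighlight lang="python">
# Enumerate the support of a multinomial distribution for small n and k
# and confirm its size equals C(n + k - 1, k - 1).
from itertools import product
from math import comb

n, k = 6, 3
support = [c for c in product(range(n + 1), repeat=k) if sum(c) == n]

print(len(support))            # 28
print(comb(n + k - 1, k - 1))  # 28
</syntaxhighlight>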

Matrix notation

In matrix notation, <math display="block">\operatorname{E}(\mathbf{X}) = n\mathbf{p},</math>

and <math display="block">\operatorname{Var}(\mathbf{X}) = n \left\{ \operatorname{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^{\rm T} \right\},</math>

with <math>\mathbf{p}^{\rm T}</math> the row vector transpose of the column vector <math>\mathbf{p}</math>.
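
These moment formulas can be checked by simulation; the following sketch (assuming NumPy is available, with illustrative parameter values) compares the theoretical mean vector and covariance matrix with empirical estimates from repeated draws:

<syntaxhighlight lang="python">
# Compare the theoretical mean and covariance of a multinomial vector
# with empirical estimates from simulated draws (NumPy assumed available).
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, np.array([0.2, 0.3, 0.5])

mean_theory = n * p
cov_theory = n * (np.diag(p) - np.outer(p, p))

draws = rng.multinomial(n, p, size=200_000)   # shape (200000, 3)
mean_emp = draws.mean(axis=0)
cov_emp = np.cov(draws, rowvar=False)

print(np.round(mean_theory, 3), np.round(mean_emp, 3))
print(np.round(cov_theory, 3))
print(np.round(cov_emp, 3))
</syntaxhighlight>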

Visualization

As slices of generalized Pascal's triangle

Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension, i.e. a simplex with a grid.

As polynomial coefficients

Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of <math>(p + q)^n</math> when expanded, one can interpret the multinomial distribution as the coefficients of <math>(p_1 + p_2 + p_3 + \cdots + p_k)^n</math> when expanded, noting that the coefficients must sum to 1.

Large deviation theory


Asymptotics

By Stirling's formula, in the limit of <math>n, x_1, \ldots, x_k \to \infty</math>, we have
<math display="block">
\ln \binom{n}{x_1, \ldots, x_k} + \sum_{i=1}^k x_i \ln p_i
= -n D_{KL}(\hat{p} \parallel p) - \frac{k-1}{2} \ln(2\pi n) - \frac{1}{2} \sum_{i=1}^k \ln(\hat{p}_i) + o(1)
</math>
where the relative frequencies <math>\hat{p}_i = x_i/n</math> in the data can be interpreted as probabilities from the empirical distribution <math>\hat{p}</math>, and <math>D_{KL}</math> is the Kullback–Leibler divergence.

This formula can be interpreted as follows.

Consider <math>\Delta_k</math>, the space of all possible distributions over the categories <math>\{1, 2, \ldots, k\}</math>. It is a simplex. After <math>n</math> independent samples from the categorical distribution <math>p</math> (which is how we construct the multinomial distribution), we obtain an empirical distribution <math>\hat{p}</math>.

By the asymptotic formula, the probability that the empirical distribution <math>\hat{p}</math> deviates from the actual distribution <math>p</math> decays exponentially as we sample more data, at a rate of <math>D_{KL}(\hat{p} \parallel p)</math>. The more experiments and the more different <math>\hat{p}</math> is from <math>p</math>, the less likely it is to see such an empirical distribution.

If <math>A</math> is a closed subset of <math>\Delta_k</math>, then by dividing up <math>A</math> into pieces, and reasoning about the growth rate of <math>\Pr(\hat{p} \in A_\epsilon)</math> on each piece <math>A_\epsilon</math>, we obtain Sanov's theorem, which states that <math display="block">\lim_{n\to\infty} \frac{1}{n} \ln \Pr(\hat{p} \in A) = -\inf_{\hat{p} \in A} D_{KL}(\hat{p} \parallel p)</math>
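
The asymptotic expansion above can also be checked numerically for moderate n; the following sketch (assuming NumPy and SciPy are available, and using illustrative counts) compares the exact log-probability of an outcome with the Kullback–Leibler approximation:

<syntaxhighlight lang="python">
# Numerically compare the exact log-probability of a multinomial outcome
# with the asymptotic expansion in terms of the KL divergence (SciPy assumed).
import numpy as np
from scipy.special import gammaln

p = np.array([0.2, 0.3, 0.5])
x = np.array([210, 290, 500])   # illustrative counts, n = 1000
n = x.sum()
p_hat = x / n
k = len(p)

# Exact: ln C(n; x_1, ..., x_k) + sum_i x_i ln p_i
log_exact = gammaln(n + 1) - gammaln(x + 1).sum() + (x * np.log(p)).sum()

# Asymptotic: -n D_KL(p_hat || p) - (k-1)/2 ln(2 pi n) - 1/2 sum_i ln(p_hat_i)
d_kl = (p_hat * np.log(p_hat / p)).sum()
log_approx = -n * d_kl - (k - 1) / 2 * np.log(2 * np.pi * n) - 0.5 * np.log(p_hat).sum()

print(log_exact, log_approx)    # the two values agree closely for large n
</syntaxhighlight>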

Concentration at large n

Due to the exponential decay, at large <math>n</math>, almost all the probability mass is concentrated in a small neighborhood of <math>p</math>. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of <math>D_{KL}</math>, to obtain
<math display="block">
\ln\!\left[ \binom{n}{x_1, \ldots, x_k} p_1^{x_1} \cdots p_k^{x_k} \right]
\approx -\frac{n}{2} \sum_{i=1}^k \frac{(\hat{p}_i - p_i)^2}{p_i}
= -\frac{1}{2} \sum_{i=1}^k \frac{(x_i - n p_i)^2}{n p_i}
</math>
This resembles the Gaussian distribution, which suggests the following theorem:

Theorem. In the <math>n \to \infty</math> limit, <math display="inline">n \sum_{i=1}^k \frac{(\hat{p}_i - p_i)^2}{p_i} = \sum_{i=1}^k \frac{(x_i - n p_i)^2}{n p_i}</math> converges in distribution to the chi-squared distribution <math>\chi^2(k-1)</math>.

File:Convergence of multinomial distribution to the gaussian distribution.webm
If we sample from the multinomial distribution <math>\operatorname{Multinomial}(n; 0.2, 0.3, 0.5)</math> and plot the heatmap of the samples within the 2-dimensional simplex (here shown as a black triangle), we notice that as <math>n \to \infty</math>, the distribution converges to a Gaussian around the point <math>(0.2, 0.3, 0.5)</math>, with the contours converging in shape to ellipses and with radii converging as <math>1/\sqrt{n}</math>. Meanwhile, the separation between the discrete points converges as <math>1/n</math>, and so the discrete multinomial distribution converges to a continuous Gaussian distribution.


[Proof]

The space of all distributions over categories <math>\{1, 2, \ldots, k\}</math> is a simplex: <math display="inline">\Delta_k = \left\{(y_1, \ldots, y_k) \colon y_1, \ldots, y_k \geq 0, \sum_i y_i = 1\right\}</math>, and the set of all possible empirical distributions after <math>n</math> experiments is a subset of the simplex: <math display="inline">\Delta_{k,n} = \left\{(x_1/n, \ldots, x_k/n) \colon x_1, \ldots, x_k \in \mathbb{N}, \sum_i x_i = n\right\}</math>. That is, it is the intersection between <math>\Delta_k</math> and the lattice <math>(\mathbb{Z}^k)/n</math>.

As <math>n</math> increases, most of the probability mass is concentrated in a subset of <math>\Delta_{k,n}</math> near <math>p</math>, and the probability distribution near <math>p</math> becomes well-approximated by <math display="block">\binom{n}{x_1, \ldots, x_k} p_1^{x_1} \cdots p_k^{x_k} \approx e^{-\frac{n}{2} \sum_i \frac{(\hat{p}_i - p_i)^2}{p_i}}.</math> From this, we see that the subset upon which the mass is concentrated has radius on the order of <math>1/\sqrt{n}</math>, but the points in the subset are separated by distance on the order of <math>1/n</math>, so at large <math>n</math> the points merge into a continuum. To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of <math>\Delta_{k,n}</math> in <math>\Delta_k</math>. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density <math display="inline">\rho(\hat{p}) = C e^{-\frac{n}{2} \sum_i \frac{(\hat{p}_i - p_i)^2}{p_i}}</math>, where <math>C</math> is a constant.

Finally, since the simplex <math>\Delta_k</math> is not all of <math>\mathbb{R}^k</math>, but only lies within a <math>(k-1)</math>-dimensional plane, we obtain the desired result.
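
A small simulation (assuming NumPy and SciPy are available, with illustrative parameter values) illustrates the theorem by comparing quantiles of the simulated statistic with those of the chi-squared distribution with k − 1 degrees of freedom:

<syntaxhighlight lang="python">
# Simulate the statistic sum_i (x_i - n p_i)^2 / (n p_i) for multinomial draws
# and compare its distribution with chi-squared with k - 1 degrees of freedom.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, p = 1000, np.array([0.2, 0.3, 0.5])
k = len(p)

draws = rng.multinomial(n, p, size=100_000)            # shape (100000, k)
stat = ((draws - n * p) ** 2 / (n * p)).sum(axis=1)

# Compare a few empirical quantiles with the chi-squared(k-1) quantiles.
qs = [0.5, 0.9, 0.95, 0.99]
print(np.quantile(stat, qs))
print(chi2.ppf(qs, df=k - 1))
</syntaxhighlight>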

Conditional concentration at large n

The above concentration phenomenon can be easily generalized to the case where we condition upon independent constraints. This is the theoretical justification for Pearson's chi-squared test.

Theorem.

  • Given functions <math>f_1, \ldots, f_\ell</math> such that they are continuously differentiable in a neighborhood of <math>p</math>, and the vectors <math>(1, 1, \ldots, 1), \nabla f_1(p), \ldots, \nabla f_\ell(p)</math> are linearly independent;
  • given sequences <math>\epsilon_1(n), \ldots, \epsilon_\ell(n)</math> such that asymptotically <math>\tfrac{1}{n} \ll \epsilon_i(n) \ll \tfrac{1}{\sqrt{n}}</math> for each <math>i \in \{1, \ldots, \ell\}</math>;
  • then for the multinomial distribution conditional on the constraints <math>f_1(\hat{p}) \in [f_1(p) - \epsilon_1(n), f_1(p) + \epsilon_1(n)], \ldots, f_\ell(\hat{p}) \in [f_\ell(p) - \epsilon_\ell(n), f_\ell(p) + \epsilon_\ell(n)]</math>, the quantity <math display="inline">n \sum_i \frac{(\hat{p}_i - p_i)^2}{p_i} = \sum_i \frac{(x_i - n p_i)^2}{n p_i}</math> converges in distribution to <math>\chi^2(k - 1 - \ell)</math> in the <math>n \to \infty</math> limit.

In the case that all <math>\hat{p}_i</math> are equal, this reduces to the concentration of entropies around the maximum entropy.[2][3]

This theorem can be shown by starting with the previous case, then taking the conditional on the constraints.

Related distributions

In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range <math>1, \ldots, k</math>; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.

Statistical inference


Equivalence tests for multinomial distributions

The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.

Let <math>q</math> denote a theoretical multinomial distribution and let <math>p</math> be a true underlying distribution. The distributions <math>p</math> and <math>q</math> are considered equivalent if <math>d(p, q) < \varepsilon</math> for a distance <math>d</math> and a tolerance parameter <math>\varepsilon > 0</math>. The equivalence test problem is <math>H_0 = \{d(p, q) \geq \varepsilon\}</math> versus <math>H_1 = \{d(p, q) < \varepsilon\}</math>. The true underlying distribution <math>p</math> is unknown. Instead, the counting frequencies <math>p_n</math> are observed, where <math>n</math> is a sample size. An equivalence test uses <math>p_n</math> to reject <math>H_0</math>. If <math>H_0</math> can be rejected then the equivalence between <math>p</math> and <math>q</math> is shown at a given significance level. The equivalence test for the Euclidean distance can be found in the textbook of Wellek (2010).[4] The equivalence test for the total variation distance is developed in Ostrovski (2017).[5] The exact equivalence test for the specific cumulative distance is proposed in Frey (2009).[6]

The distance between the true underlying distribution <math>p</math> and a family <math>\mathcal{M}</math> of multinomial distributions is defined by <math display="inline">d(p, \mathcal{M}) = \min_{h \in \mathcal{M}} d(p, h)</math>. Then the equivalence test problem is given by <math>H_0 = \{d(p, \mathcal{M}) \geq \varepsilon\}</math> and <math>H_1 = \{d(p, \mathcal{M}) < \varepsilon\}</math>. The distance <math>d(p, \mathcal{M})</math> is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018).[7]

Confidence intervals for the difference of two proportions

In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events, <math>p_i - p_j</math>, requires the incorporation of the negative covariance between the sample estimators <math display="inline">\hat{p}_i = \frac{X_i}{n}</math> and <math display="inline">\hat{p}_j = \frac{X_j}{n}</math>.

Some of the literature on the subject focused on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case of <math>p_i - p_j</math> for any multinomial distribution. Formulas in the current section will be generalized, while formulas in the next section will focus on the matched-pairs binary data use-case.

Wald's standard error (SE) of the difference of proportions can be estimated using:[8][9]

<math display="block">\widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j) = \sqrt{\frac{(\hat{p}_i + \hat{p}_j) - (\hat{p}_i - \hat{p}_j)^2}{n}}</math>

For a <math>100(1-\alpha)\%</math> approximate confidence interval, the margin of error may incorporate the appropriate quantile from the standard normal distribution, as follows:

<math display="block">(\hat{p}_i - \hat{p}_j) \pm z_{\alpha/2} \, \widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j)</math>


[Proof]

As the sample size (<math>n</math>) increases, the sample proportions will approximately follow a multivariate normal distribution, thanks to the multidimensional central limit theorem (and it could also be shown using the Cramér–Wold theorem). Therefore, their difference will also be approximately normal. Also, these estimators are weakly consistent and plugging them into the SE estimator makes it also weakly consistent. Hence, thanks to Slutsky's theorem, the pivotal quantity <math display="inline">\frac{(\hat{p}_i - \hat{p}_j) - (p_i - p_j)}{\widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j)}</math> approximately follows the standard normal distribution. And from that, the above approximate confidence interval is directly derived.

The SE can be constructed using the calculus of the variance of the difference of two random variables: <math display="block">\widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j) = \sqrt{\frac{\hat{p}_i(1-\hat{p}_i)}{n} + \frac{\hat{p}_j(1-\hat{p}_j)}{n} - 2\left(-\frac{\hat{p}_i \hat{p}_j}{n}\right)} = \sqrt{\frac{1}{n}\left(\hat{p}_i + \hat{p}_j - \hat{p}_i^2 - \hat{p}_j^2 + 2\hat{p}_i \hat{p}_j\right)} = \sqrt{\frac{(\hat{p}_i + \hat{p}_j) - (\hat{p}_i - \hat{p}_j)^2}{n}}</math>

A modification which includes a continuity correction adds <math display="inline">\frac{1}{n}</math> to the margin of error as follows:[10]

<math display="block">(\hat{p}_i - \hat{p}_j) \pm \left( z_{\alpha/2} \, \widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j) + \frac{1}{n} \right)</math>
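
As an illustrative sketch only (not a substitute for the cited references), the Wald interval, with or without the continuity correction, can be computed from multinomial counts as follows; NumPy and SciPy are assumed, and the function name and example counts are ours:

<syntaxhighlight lang="python">
# Hypothetical helper (name is ours) computing the Wald confidence interval,
# optionally with the 1/n continuity correction, for p_i - p_j from counts.
import numpy as np
from scipy.stats import norm

def wald_ci_diff(counts, i, j, alpha=0.05, continuity=False):
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    pi_hat, pj_hat = counts[i] / n, counts[j] / n
    diff = pi_hat - pj_hat
    se = np.sqrt(((pi_hat + pj_hat) - diff**2) / n)
    margin = norm.ppf(1 - alpha / 2) * se
    if continuity:
        margin += 1 / n
    return diff - margin, diff + margin

# Example: counts over k = 3 categories.
print(wald_ci_diff([25, 35, 40], i=0, j=1))
print(wald_ci_diff([25, 35, 40], i=0, j=1, continuity=True))
</syntaxhighlight>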

Another alternative is to rely on a Bayesian estimator using the Jeffreys prior, which leads to using a Dirichlet distribution, with all parameters equal to 0.5, as a prior. The posterior yields the calculations from above, but after adding 1/2 to each of the <math>k</math> elements, leading to an overall increase of the sample size by <math display="inline">\frac{k}{2}</math>. This was originally developed for a multinomial distribution with four events, and is known as wald+2, for analyzing matched-pairs data (see the next section for more details).[11]

This leads to the following SE:

<math display="block">\widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j)_{\text{wald}+k/2} = \sqrt{\frac{\left(\hat{p}_i + \hat{p}_j + \frac{1}{n}\right)\frac{n}{n + k/2} - \left(\hat{p}_i - \hat{p}_j\right)^2 \left(\frac{n}{n + k/2}\right)^2}{n + k/2}}</math>


[Proof]

<math display="block">\widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j)_{\text{wald}+k/2} = \sqrt{\frac{\left(\frac{x_i + 1/2}{n + k/2} + \frac{x_j + 1/2}{n + k/2}\right) - \left(\frac{x_i + 1/2}{n + k/2} - \frac{x_j + 1/2}{n + k/2}\right)^2}{n + k/2}} = \sqrt{\frac{\left(\frac{x_i}{n} + \frac{x_j}{n} + \frac{1}{n}\right)\frac{n}{n + k/2} - \left(\frac{x_i}{n} - \frac{x_j}{n}\right)^2\left(\frac{n}{n + k/2}\right)^2}{n + k/2}} = \sqrt{\frac{\left(\hat{p}_i + \hat{p}_j + \frac{1}{n}\right)\frac{n}{n + k/2} - \left(\hat{p}_i - \hat{p}_j\right)^2\left(\frac{n}{n + k/2}\right)^2}{n + k/2}}</math>

This can be plugged into the original Wald formula as follows:

<math display="block">(\hat{p}_i - \hat{p}_j)\frac{n}{n + k/2} \pm z_{\alpha/2} \, \widehat{\operatorname{SE}}(\hat{p}_i - \hat{p}_j)_{\text{wald}+k/2}</math>

Occurrence and applications

Confidence intervals for the difference in matched-pairs binary data (using multinomial with k=4)

For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time.

Such scenarios can be represented using a two-by-two contingency table with the number of elements that had each combination of events. We can use lowercase <math>f</math> for sampling frequencies: <math>f_{11}, f_{10}, f_{01}, f_{00}</math>, and uppercase <math>F</math> for population frequencies: <math>F_{11}, F_{10}, F_{01}, F_{00}</math>. These four combinations could be modeled as coming from a multinomial distribution (with four potential outcomes). The sizes of the sample and population can be <math>n</math> and <math>N</math> respectively. And in such a case, there is an interest in building a confidence interval for the difference of proportions from the marginals of the following (sampled) contingency table:

{| class="wikitable"
! !! Test 2 positive !! Test 2 negative !! Row total
|-
! Test 1 positive
| <math>f_{11}</math> || <math>f_{10}</math> || <math>f_{1*} = f_{11} + f_{10}</math>
|-
! Test 1 negative
| <math>f_{01}</math> || <math>f_{00}</math> || <math>f_{0*} = f_{01} + f_{00}</math>
|-
! Column total
| <math>f_{*1} = f_{11} + f_{01}</math> || <math>f_{*0} = f_{10} + f_{00}</math> || <math>n</math>
|}

In this case, checking the difference in marginal proportions means we are interested in using the following definitions: <math display="inline">p_{1*} = \frac{F_{1*}}{N} = \frac{F_{11} + F_{10}}{N}</math>, <math display="inline">p_{*1} = \frac{F_{*1}}{N} = \frac{F_{11} + F_{01}}{N}</math>. And the difference we want to build confidence intervals for is:

<math display="block">p_{*1} - p_{1*} = \frac{F_{11} + F_{01}}{N} - \frac{F_{11} + F_{10}}{N} = \frac{F_{01}}{N} - \frac{F_{10}}{N} = p_{01} - p_{10}</math>

Hence, a confidence interval for the difference of the marginal positive proportions (<math>p_{*1} - p_{1*}</math>) is the same as a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (<math>p_{01} - p_{10}</math>).

Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for confidence intervals for the difference of two proportions.

The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency-table frequencies instead of the sample proportions. For example, the Wald confidence intervals provided above can be written as:[10]

<math display="block">\widehat{\operatorname{SE}}(\hat{p}_{*1} - \hat{p}_{1*}) = \widehat{\operatorname{SE}}(\hat{p}_{01} - \hat{p}_{10}) = \frac{\sqrt{n(f_{10} + f_{01}) - (f_{10} - f_{01})^2}}{n\sqrt{n}}</math>

Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application.[10]

One such modification is Agresti and Min's Wald+2 (similar to some of their other works[12]), in which each cell frequency has an extra <math>\tfrac{1}{2}</math> added to it.[11] This leads to the Wald+2 confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior a Dirichlet distribution with all parameters equal to 0.5 (which is, in fact, the Jeffreys prior). The +2 in the name Wald+2 reflects that in the context of a two-by-two contingency table, which is a multinomial distribution with four possible events, adding 1/2 an observation to each of the cells translates to an overall addition of 2 observations (due to the prior).

This leads to the following modified SE for the case of matched pairs data:

<math display="block">\widehat{\operatorname{SE}}(\hat{p}_{*1} - \hat{p}_{1*}) = \frac{\sqrt{(n+2)(f_{10} + f_{01} + 1) - (f_{10} - f_{01})^2}}{(n+2)\sqrt{n+2}}</math>

This can be plugged into the original Wald formula as follows:

<math display="block">(\hat{p}_{*1} - \hat{p}_{1*})\frac{n}{n+2} \pm z_{\alpha/2} \, \widehat{\operatorname{SE}}(\hat{p}_{*1} - \hat{p}_{1*})_{\text{wald}+2}</math>

Other modifications include Bonett and Price’s Adjusted Wald, and Newcombe’s Score.
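
The matched-pairs formulas above translate into the following sketch (assuming NumPy and SciPy are available; the function names and the example frequencies are ours), computing both the Wald and the Wald+2 intervals from a two-by-two table:

<syntaxhighlight lang="python">
# Hypothetical helpers (names are ours) for the matched-pairs setting:
# Wald and Wald+2 confidence intervals for p_{*1} - p_{1*} from a 2x2 table.
import numpy as np
from scipy.stats import norm

def wald_ci_matched(f11, f10, f01, f00, alpha=0.05):
    n = f11 + f10 + f01 + f00
    diff = (f01 - f10) / n                      # estimate of p_{*1} - p_{1*}
    se = np.sqrt(n * (f10 + f01) - (f10 - f01) ** 2) / (n * np.sqrt(n))
    z = norm.ppf(1 - alpha / 2)
    return diff - z * se, diff + z * se

def wald_plus2_ci_matched(f11, f10, f01, f00, alpha=0.05):
    n = f11 + f10 + f01 + f00
    diff = (f01 - f10) / n
    se = np.sqrt((n + 2) * (f10 + f01 + 1) - (f10 - f01) ** 2) / ((n + 2) * np.sqrt(n + 2))
    z = norm.ppf(1 - alpha / 2)
    centre = diff * n / (n + 2)                 # shrunk point estimate
    return centre - z * se, centre + z * se

print(wald_ci_matched(f11=40, f10=10, f01=20, f00=30))
print(wald_plus2_ci_matched(f11=40, f10=10, f01=20, f00=30))
</syntaxhighlight>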

Computational methods

Random variate generation


First, reorder the parameters <math>p_1, \ldots, p_k</math> such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable <math>X</math> from a uniform <math>(0, 1)</math> distribution. The resulting outcome is the component

<math display="block">j = \min \left\{ j' \in \{1, \dots, k\} \colon \left(\sum_{i=1}^{j'} p_i\right) - X \geq 0 \right\}.</math>

{<math>X_j = 1</math>, <math>X_k = 0</math> for <math>k \neq j</math>} is one observation from the multinomial distribution with <math>p_1, \ldots, p_k</math> and <math>n = 1</math>. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with <math>n</math> equal to the number of such repetitions.
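
A minimal sketch of this inverse-CDF procedure (assuming NumPy; the function name is ours):

<syntaxhighlight lang="python">
# Minimal sketch of the inverse-CDF procedure described above (NumPy assumed):
# each trial picks the first category whose cumulative probability reaches X.
import numpy as np

def multinomial_by_inversion(n, p, rng):
    p = np.asarray(p, dtype=float)      # probabilities, assumed to sum to 1
    cdf = np.cumsum(p)
    counts = np.zeros(len(p), dtype=int)
    for _ in range(n):
        x = rng.uniform()               # auxiliary uniform(0, 1) draw
        j = np.searchsorted(cdf, x)     # first j with cdf[j] >= x
        j = min(j, len(p) - 1)          # guard against floating-point round-off
        counts[j] += 1
    return counts

rng = np.random.default_rng(2)
print(multinomial_by_inversion(1000, [0.2, 0.3, 0.5], rng))  # roughly (200, 300, 500)
</syntaxhighlight>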

Sampling using repeated conditional binomial samples

Given the parameters <math>p_1, p_2, \ldots, p_k</math> and a total for the sample <math>n</math> such that <math display="inline">\sum_{i=1}^k X_i = n</math>, it is possible to sample sequentially for the number in an arbitrary state <math>X_i</math>, by partitioning the state space into <math>i</math> and not-<math>i</math>, conditioned on any prior samples already taken, repeatedly.

Algorithm: Sequential conditional binomial sampling

S = n
rho = 1
for i in [1,k-1]:
    if rho != 0:
        X[i] ~ Binom(S,p[i]/rho)
    else:
        X[i] = 0
    S = S - X[i]
    rho = rho - p[i]
X[k] = S

Heuristically, each application of the binomial sample reduces the available number to sample from and the conditional probabilities are likewise updated to ensure logical consistency.[13]
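
The pseudocode above translates directly into a short routine; a sketch assuming NumPy (the function name is ours):

<syntaxhighlight lang="python">
# Sketch of the sequential conditional binomial scheme above (NumPy assumed;
# the function name is ours). Each X[i] is binomial given what remains.
import numpy as np

def multinomial_by_conditional_binomials(n, p, rng):
    p = np.asarray(p, dtype=float)   # probabilities, assumed to sum to 1
    k = len(p)
    x = np.zeros(k, dtype=int)
    s, rho = n, 1.0                  # remaining trials and remaining probability
    for i in range(k - 1):
        if rho > 0:
            x[i] = rng.binomial(s, min(p[i] / rho, 1.0))
        s -= x[i]
        rho -= p[i]
    x[k - 1] = s                     # whatever is left goes to the last category
    return x

rng = np.random.default_rng(3)
print(multinomial_by_conditional_binomials(1000, [0.2, 0.3, 0.5], rng))
</syntaxhighlight>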

Software implementations

  • The MultinomialCI R package allows the computation of simultaneous confidence intervals for the probabilities of a multinomial distribution given a set of observations.[14]

See also

References


  1.
  2.
  3.
  4. Wellek, Stefan (2010). Testing Statistical Hypotheses of Equivalence and Noninferiority (2nd ed.). Chapman and Hall/CRC.
  5. Ostrovski (2017). Official web link (subscription required). Alternate, free web link.
  6. Frey (2009). Official web link (subscription required).
  7. Ostrovski (2018). Official web link (subscription required). Alternate, free web link.
  8.
  9.
  10.
  11. Agresti & Min.
  12.
  13. "11.5: The Multinomial Distribution". Statistics LibreTexts. 2020-05-05. Retrieved 2023-09-13.
  14. "MultinomialCI - Confidence Intervals for Multinomial Proportions". CRAN. 11 May 2021. Retrieved 2024-03-23.


