Dirichlet distribution


In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted Dir(α), is a family of continuous multivariate probability distributions parameterized by a vector α of positive reals. It is a multivariate generalization of the beta distribution,[1] hence its alternative name of multivariate beta distribution (MBD).[2] Dirichlet distributions are commonly used as prior distributions in Bayesian statistics; in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and the multinomial distribution.

The infinite-dimensional generalization of the Dirichlet distribution is the Dirichlet process.

Definitions

Probability density function

[Figure: animation showing how the log of the density function changes when K = 3 as the parameter vector α varies from α = (0.3, 0.3, 0.3) to α = (2.0, 2.0, 2.0), keeping all the individual α_i equal to each other.]

The Dirichlet distribution of order K ≥ 2 with parameters α_1, ..., α_K > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space R^{K−1} given by

f(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{B(\boldsymbol\alpha)} \prod_{i=1}^{K} x_i^{\alpha_i - 1}

where \{x_k\}_{k=1}^{K} belong to the standard K−1 simplex, or in other words: \sum_{i=1}^{K} x_i = 1 and x_i \in [0,1] for all i \in \{1, \ldots, K\}.

The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

B(\boldsymbol\alpha) = \frac{\prod_{i=1}^{K} \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}, \qquad \boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K).
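For numerical work it is often preferable to evaluate the density on a log scale via log-gamma functions. A minimal sketch in Python (the function name and example values below are illustrative, not from any particular library):

import math

def dirichlet_log_pdf(x, alpha):
    # log B(alpha) = sum_i log Gamma(alpha_i) - log Gamma(sum_i alpha_i)
    log_b = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    # log f(x; alpha) = sum_i (alpha_i - 1) log x_i - log B(alpha)
    return sum((a - 1) * math.log(xi) for a, xi in zip(alpha, x)) - log_b

# the flat Dirichlet Dir(1, 1, 1) has constant density 1/B(1,1,1) = 2 on the 2-simplex
print(math.exp(dirichlet_log_pdf([0.2, 0.3, 0.5], [1.0, 1.0, 1.0])))  # 2.0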

Support

The support of the Dirichlet distribution is the set of K-dimensional vectors x whose entries are real numbers in the interval [0,1] such that \|x\|_1 = 1, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a K-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of K-dimensional discrete distributions. The technical term for the set of points in the support of a K-dimensional Dirichlet distribution is the open standard (K−1)-simplex,[3] which is a generalization of a triangle, embedded in the next-higher dimension. For example, with K = 3, the support is an equilateral triangle embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.

Special cases

A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector α have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value α, called the concentration parameter. In terms of α, the density function has the form

f(x_1, \ldots, x_K; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^{K} x_i^{\alpha - 1}.

When α = 1,[1] the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (K−1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 favor dense, evenly distributed variates, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 favor sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values.

When α = 1/2, the distribution is the same as would be obtained by choosing a point uniformly at random from the surface of a (K−1)-dimensional unit hypersphere and squaring each coordinate. The α = 1/2 distribution is the Jeffreys prior for the Dirichlet distribution.

More generally, the parameter vector is sometimes written as the product αn of a (scalar) concentration parameter α and a (vector) base measure n = (n_1, ..., n_K), where n lies within the (K−1)-simplex (i.e.: its coordinates n_i sum to one). The concentration parameter in this case is larger by a factor of K than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.

^ If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter K, the dimension of the distribution, is the uniform distribution on the (K−1)-simplex.

Properties

Moments

Let X = (X_1, ..., X_K) ~ Dir(α).

Let

\alpha_0 = \sum_{i=1}^{K} \alpha_i.

Then[4][5]

\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}, \qquad \operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0 - \alpha_i)}{\alpha_0^2 (\alpha_0 + 1)}.

Furthermore, if i ≠ j,

\operatorname{Cov}[X_i, X_j] = \frac{-\alpha_i \alpha_j}{\alpha_0^2 (\alpha_0 + 1)}.

The covariance matrix is singular.
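These first- and second-moment formulas translate directly into code; a small illustrative sketch (the names are ours, not a library API):

def dirichlet_moments(alpha):
    a0 = sum(alpha)
    mean = [a / a0 for a in alpha]
    var = [a * (a0 - a) / (a0 ** 2 * (a0 + 1)) for a in alpha]

    def cov(i, j):
        # off-diagonal covariances are negative: the components sum to 1,
        # so one component growing forces the others to shrink
        return -alpha[i] * alpha[j] / (a0 ** 2 * (a0 + 1))

    return mean, var, cov

mean, var, cov = dirichlet_moments([2.0, 3.0, 5.0])
print(mean)       # [0.2, 0.3, 0.5]
print(cov(0, 1))  # -6/1100, approximately -0.00545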

More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. For t = (t_1, ..., t_K) ∈ R^K, denote by t^i = (t_1^i, ..., t_K^i) its i-th Hadamard power. Then,[6]

\operatorname{E}\left[(t \cdot X)^n\right] = \frac{n!\,\Gamma(\alpha_0)}{\Gamma(\alpha_0 + n)} \sum \frac{t_1^{k_1} \cdots t_K^{k_K}}{k_1! \cdots k_K!} \prod_{i=1}^{K} \frac{\Gamma(\alpha_i + k_i)}{\Gamma(\alpha_i)} = \frac{n!\,\Gamma(\alpha_0)}{\Gamma(\alpha_0 + n)} Z_n\left(t^1 \cdot \boldsymbol\alpha, \ldots, t^n \cdot \boldsymbol\alpha\right),

where the sum is over non-negative integers k_1, ..., k_K with n = k_1 + ⋯ + k_K, and Z_n is the cycle index polynomial of the symmetric group of degree n.

We have the special case \operatorname{E}[t \cdot X] = \frac{t \cdot \boldsymbol\alpha}{\alpha_0}.

The multivariate analogue \operatorname{E}[(t_1 \cdot X)^{n_1} \cdots (t_q \cdot X)^{n_q}] for vectors t_1, ..., t_q ∈ R^K can be expressed[7] in terms of a color pattern of the exponents n_1, ..., n_q in the sense of the Pólya enumeration theorem.

Particular cases include the simple computation[8]

\operatorname{E}\left[\prod_{i=1}^{K} X_i^{\beta_i}\right] = \frac{B(\boldsymbol\alpha + \boldsymbol\beta)}{B(\boldsymbol\alpha)} = \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\Gamma\left[\sum_{i=1}^{K} (\alpha_i + \beta_i)\right]} \times \prod_{i=1}^{K} \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)}.

Mode

The mode of the distribution is[9] the vector (x_1, ..., x_K) with

x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1.

Marginal distributions

The marginal distributions are beta distributions:[10]

X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).

Also see § Aggregation below.

Conjugate to categorical or multinomial

The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and of the multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we can then update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.

Formally, this can be expressed as follows. Given a model

\begin{aligned}
\boldsymbol\alpha &= (\alpha_1, \ldots, \alpha_K) = \text{concentration hyperparameter} \\
\mathbf{p} \mid \boldsymbol\alpha &= (p_1, \ldots, p_K) \sim \operatorname{Dir}(K, \boldsymbol\alpha) \\
\mathbb{X} \mid \mathbf{p} &= (\mathbf{x}_1, \ldots, \mathbf{x}_N) \sim \operatorname{Cat}(K, \mathbf{p})
\end{aligned}

then the following holds:

\begin{aligned}
\mathbf{c} &= (c_1, \ldots, c_K) = \text{number of occurrences of category } i \\
\mathbf{p} \mid \mathbb{X}, \boldsymbol\alpha &\sim \operatorname{Dir}(K, \mathbf{c} + \boldsymbol\alpha) = \operatorname{Dir}(K, c_1 + \alpha_1, \ldots, c_K + \alpha_K)
\end{aligned}

This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
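The pseudocount arithmetic is simple enough to state in a few lines of Python; a hypothetical sketch (the variable names and values are illustrative):

alpha = [2.0, 2.0, 2.0]         # prior pseudocounts (symmetric Dirichlet prior)
observations = [0, 0, 1, 2, 0]  # observed category indices

counts = [0] * len(alpha)
for x in observations:
    counts[x] += 1              # c_i = number of occurrences of category i

posterior = [a + c for a, c in zip(alpha, counts)]
print(posterior)                # [5.0, 3.0, 3.0], i.e. Dir(c_1 + alpha_1, ...)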

In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications below for more information.

Relation to Dirichlet-multinomial distribution

In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.

Entropy

If X is a Dir(α) random variable, the differential entropy of X (in nat units) is[11]

h(X) = \operatorname{E}[-\ln f(X)] = \ln B(\boldsymbol\alpha) + (\alpha_0 - K)\psi(\alpha_0) - \sum_{j=1}^{K} (\alpha_j - 1)\psi(\alpha_j)

where ψ is the digamma function.

The following formula for E[ln(X_i)] can be used to derive the differential entropy above. Since the functions ln(X_i) are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to get an analytic expression for the expectation of ln(X_i) (see equation (2.62) in [12]) and its associated covariance matrix:

\operatorname{E}[\ln(X_i)] = \psi(\alpha_i) - \psi(\alpha_0)

and

\operatorname{Cov}[\ln(X_i), \ln(X_j)] = \psi'(\alpha_i)\,\delta_{ij} - \psi'(\alpha_0)

where ψ is the digamma function, ψ′ is the trigamma function, and δ_ij is the Kronecker delta.

The spectrum of Rényi information for values other than λ=1 is given by[13]

F_R(\lambda) = (1 - \lambda)^{-1} \left( -\lambda \log B(\boldsymbol\alpha) + \sum_{i=1}^{K} \log \Gamma(\lambda(\alpha_i - 1) + 1) - \log \Gamma(\lambda(\alpha_0 - K) + K) \right)

and the information entropy is the limit as λ goes to 1.

Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vector Z with probability-mass distribution X, i.e., P(Z_i = 1, Z_{j≠i} = 0 | X) = X_i. The conditional information entropy of Z, given X, is

S(X) = H(Z \mid X) = \operatorname{E}_Z[-\log P(Z \mid X)] = -\sum_{i=1}^{K} X_i \log X_i

This function of X is a scalar random variable. If X has a symmetric Dirichlet distribution with all α_i = α, the expected value of the entropy (in nat units) is[14]

\operatorname{E}[S(X)] = -\sum_{i=1}^{K} \operatorname{E}[X_i \ln X_i] = \psi(K\alpha + 1) - \psi(\alpha + 1)
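This expectation is easy to check numerically; a sketch assuming NumPy and SciPy are available (names and values are illustrative):

import numpy as np
from scipy.special import psi

K, a = 4, 2.0
exact = psi(K * a + 1) - psi(a + 1)  # E[S(X)] = psi(K*alpha + 1) - psi(alpha + 1)

rng = np.random.default_rng(0)
samples = rng.dirichlet([a] * K, size=100_000)
monte_carlo = (-np.sum(samples * np.log(samples), axis=1)).mean()
print(exact, monte_carlo)  # the two values should agree closely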

Kullback–Leibler divergence

The Kullback–Leibler (KL) divergence between two Dirichlet distributions, Dir(α) and Dir(β), over the same simplex is:[15]

D_{\mathrm{KL}}\left(\operatorname{Dir}(\boldsymbol\alpha) \parallel \operatorname{Dir}(\boldsymbol\beta)\right) = \log \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\Gamma\left(\sum_{i=1}^{K} \beta_i\right)} + \sum_{i=1}^{K} \left[ \log \frac{\Gamma(\beta_i)}{\Gamma(\alpha_i)} + (\alpha_i - \beta_i)\left(\psi(\alpha_i) - \psi\left(\sum_{j=1}^{K} \alpha_j\right)\right) \right]
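A direct transcription of this formula, assuming NumPy and SciPy are available (the function name is illustrative):

import numpy as np
from scipy.special import gammaln, psi

def dirichlet_kl(alpha, beta):
    # KL(Dir(alpha) || Dir(beta)), following the formula above
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    a0 = alpha.sum()
    return (gammaln(a0) - gammaln(beta.sum())
            + np.sum(gammaln(beta) - gammaln(alpha))
            + np.sum((alpha - beta) * (psi(alpha) - psi(a0))))

print(dirichlet_kl([1, 2, 3], [1, 2, 3]))  # 0.0: identical distributions
print(dirichlet_kl([1, 2, 3], [2, 2, 2]))  # positive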

Aggregation

If

X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K)

then, if the random variables with subscripts i and j are dropped from the vector and replaced by their sum,

X' = (X_1, \ldots, X_i + X_j, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_i + \alpha_j, \ldots, \alpha_K).

This aggregation property may be used to derive the marginal distribution of X_i mentioned above.

Neutrality


If X = (X_1, ..., X_K) ~ Dir(α), then the vector X is said to be neutral[16] in the sense that X_K is independent of X^{(−K)},[3] where

X^{(-K)} = \left( \frac{X_1}{1 - X_K}, \frac{X_2}{1 - X_K}, \ldots, \frac{X_{K-1}}{1 - X_K} \right),

and similarly for removing any of X_2, ..., X_{K−1}. Observe that any permutation of X is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).[17]

Combining this with the property of aggregation, it follows that X_j + ⋯ + X_K is independent of (X_1/(X_1 + ⋯ + X_{j−1}), X_2/(X_1 + ⋯ + X_{j−1}), ..., X_{j−1}/(X_1 + ⋯ + X_{j−1})). In fact it is true, further, for the Dirichlet distribution, that for 3 ≤ j ≤ K−1, the pair (X_1 + ⋯ + X_{j−1}, X_j + ⋯ + X_K) and the two vectors (X_1/(X_1 + ⋯ + X_{j−1}), X_2/(X_1 + ⋯ + X_{j−1}), ..., X_{j−1}/(X_1 + ⋯ + X_{j−1})) and (X_j/(X_j + ⋯ + X_K), X_{j+1}/(X_j + ⋯ + X_K), ..., X_K/(X_j + ⋯ + X_K)), viewed as a triple of normalised random vectors, are mutually independent. The analogous result is true for partition of the indices {1, ..., K} into any other pair of non-singleton subsets.

Characteristic function

The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as[18]

CF(s_1, \ldots, s_{K-1}) = \operatorname{E}\left( e^{i(s_1 X_1 + \cdots + s_{K-1} X_{K-1})} \right) = \Psi^{[K-1]}(\alpha_1, \ldots, \alpha_{K-1}; \alpha_0; is_1, \ldots, is_{K-1})

where

\Psi^{[m]}(a_1, \ldots, a_m; c; z_1, \ldots, z_m) = \sum \frac{(a_1)_{k_1} \cdots (a_m)_{k_m}\, z_1^{k_1} \cdots z_m^{k_m}}{(c)_k\, k_1! \cdots k_m!}.

The sum is over non-negative integers k_1, ..., k_m and k = k_1 + ⋯ + k_m. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a complex path integral:

\Psi^{[m]} = \frac{\Gamma(c)}{2\pi i} \int_L e^t\, t^{a_1 + \cdots + a_m - c} \prod_{j=1}^{m} (t - z_j)^{-a_j}\, dt

where L denotes any path in the complex plane originating at −∞, encircling in the positive direction all the singularities of the integrand and returning to −∞.

Inequality

The probability density function f(x_1, ..., x_{K−1}; α_1, ..., α_K) plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.[19]

Another inequality relates the moment-generating function of the Dirichlet distribution to the convex conjugate of the scaled reversed Kullback-Leibler divergence:[20]

\log \operatorname{E}\left( \exp \sum_{i=1}^{K} s_i X_i \right) \le \sup_{p} \sum_{i=1}^{K} \left( p_i s_i - \alpha_i \log \frac{\alpha_i}{\alpha_0 p_i} \right),

where the supremum is taken over p spanning the (K−1)-simplex.

Related distributions

When X = (X_1, ..., X_K) ~ Dir(α_1, ..., α_K), the marginal distribution of each component is X_i ~ Beta(α_i, α_0 − α_i), a beta distribution. In particular, if K = 2 then X_1 ~ Beta(α_1, α_2) is equivalent to X = (X_1, 1 − X_1) ~ Dir(α_1, α_2).

For K independently distributed gamma random variables:

Y_1 \sim \operatorname{Gamma}(\alpha_1, \theta), \ldots, Y_K \sim \operatorname{Gamma}(\alpha_K, \theta)

we have:[21]

V = \sum_{i=1}^{K} Y_i \sim \operatorname{Gamma}(\alpha_0, \theta), \qquad X = (X_1, \ldots, X_K) = \left( \frac{Y_1}{V}, \ldots, \frac{Y_K}{V} \right) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K).

Although the X_i's are not independent from one another, they can be seen to be generated from a set of K independent gamma random variables.[21] Unfortunately, since the sum V is lost in forming X (in fact it can be shown that V is stochastically independent of X), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.

Conjugate prior of the Dirichlet distribution

Because the Dirichlet distribution is an exponential family distribution, it has a conjugate prior. The conjugate prior is of the form:[22]

\operatorname{CD}(\boldsymbol\alpha \mid \mathbf{v}, \eta) \propto \left( \frac{1}{B(\boldsymbol\alpha)} \right)^{\eta} \exp\left( -\sum_{k} v_k \alpha_k \right).

Here v is a K-dimensional real vector and η is a scalar parameter. The domain of (v, η) is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:[23]

\forall k\; v_k > 0 \qquad \text{and} \qquad \eta > -1 \qquad \text{and} \qquad \left( \eta \le 0 \quad \text{or} \quad \sum_{k} \exp\left( -\frac{v_k}{\eta} \right) < 1 \right)

The conjugation property can be expressed as

if [prior: α ~ CD(v, η)] and [observation: x | α ~ Dirichlet(α)] then [posterior: α | x ~ CD(v − log x, η + 1)].
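In code, the update accumulates −log x into v and increments η by one per observation; a hypothetical sketch (names and values are illustrative):

import math

def cd_posterior(v, eta, x):
    # one observation x (a point on the simplex) updates CD(v, eta)
    # to CD(v - log x, eta + 1)
    return [vk - math.log(xk) for vk, xk in zip(v, x)], eta + 1

v, eta = [1.0, 1.0, 1.0], 0.5
v, eta = cd_posterior(v, eta, [0.2, 0.3, 0.5])
print(v, eta)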

In the published literature there is no practical algorithm to efficiently generate samples from CD(α | v, η).

Generalization by scaling and translation of log-probabilities

As noted above, Dirichlet variates can be generated by normalizing independent gamma variates. If instead one normalizes generalized gamma variates, one obtains variates from the simplicial generalized beta distribution (SGB).[24] On the other hand, SGB variates can also be obtained by applying the softmax function to scaled and translated logarithms of Dirichlet variates. Specifically, let x = (x_1, ..., x_K) ~ Dir(α) and let y = (y_1, ..., y_K), where, applying the logarithm elementwise:

\mathbf{y} = \operatorname{softmax}(a^{-1} \log \mathbf{x} + \log \mathbf{b}) \quad\Longleftrightarrow\quad \mathbf{x} = \operatorname{softmax}(a \log \mathbf{y} - a \log \mathbf{b})

or

y_k = \frac{b_k x_k^{1/a}}{\sum_{i=1}^{K} b_i x_i^{1/a}} \quad\Longleftrightarrow\quad x_k = \frac{(y_k / b_k)^a}{\sum_{i=1}^{K} (y_i / b_i)^a}

where a > 0 and b = (b_1, ..., b_K), with all b_k > 0; then y ~ SGB(a, b, α). The SGB density function can be derived by noting that the transformation x ↦ y, which is a bijection from the simplex to itself, induces a differential volume change factor[25] of:

R(\mathbf{y}, a, \mathbf{b}) = a^{1-K} \prod_{k=1}^{K} \frac{y_k}{x_k}

where it is understood that x is recovered as a function of y, as shown above. This facilitates writing the SGB density in terms of the Dirichlet density, as:

f_{\text{SGB}}(\mathbf{y} \mid a, \mathbf{b}, \boldsymbol\alpha) = \frac{f_{\text{Dir}}(\mathbf{x} \mid \boldsymbol\alpha)}{R(\mathbf{y}, a, \mathbf{b})}

This generalization of the Dirichlet density, via a change of variables, is closely related to a normalizing flow. Note that the differential volume change is not given by the Jacobian determinant of x ↦ y : R^K → R^K, which is zero, but by the Jacobian determinant of (x_1, ..., x_{K−1}) ↦ (y_1, ..., y_{K−1}), as explained in more detail at Normalizing flow § Simplex flow.

For further insight into the interaction between the Dirichlet shape parameters α and the transformation parameters a, b, it may be helpful to consider the logarithmic marginals, log(x_k/(1 − x_k)), which follow the logistic-beta distribution, B_σ(α_k, Σ_{i≠k} α_i). See in particular the sections on tail behaviour and generalization with location and scale parameters.
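A sketch of the forward and inverse maps between Dirichlet and SGB variates, assuming NumPy (the function names and parameter values are illustrative):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dir_to_sgb(x, a, b):
    # y = softmax(log(x) / a + log(b))
    return softmax(np.log(x) / a + np.log(b))

def sgb_to_dir(y, a, b):
    # x = softmax(a log(y) - a log(b)), the inverse map
    return softmax(a * (np.log(y) - np.log(b)))

x = np.array([0.2, 0.3, 0.5])
a, b = 2.0, np.array([1.0, 2.0, 1.0])
y = dir_to_sgb(x, a, b)
print(sgb_to_dir(y, a, b))  # recovers [0.2, 0.3, 0.5]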

Application

When b_1 = b_2 = ⋯ = b_K, the transformation simplifies to x ↦ softmax(a^{−1} log x), which is known as temperature scaling in machine learning, where it is used as a calibration transform for multiclass probabilistic classifiers.[26] Traditionally the temperature parameter (a here) is learnt discriminatively by minimizing multiclass cross-entropy over a supervised calibration data set with known class labels. But the above PDF transformation mechanism can also be used to facilitate the design of generatively trained calibration models with a temperature scaling component.
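For instance, a self-contained temperature-scaling sketch (assuming NumPy; the function name is illustrative):

import numpy as np

def temperature_scale(p, a):
    # softmax(log(p) / a): a > 1 softens the distribution, a < 1 sharpens it
    z = np.log(p) / a
    e = np.exp(z - z.max())
    return e / e.sum()

print(temperature_scale(np.array([0.9, 0.05, 0.05]), 2.0))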

Occurrence and applications

Bayesian models

Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)

Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.


Intuitive interpretations of the parameters

The concentration parameter

Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value α to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of the Dirichlet distribution is likely to be. With a value much less than 1, the mass will be highly concentrated in a few components, and all the rest will have almost no mass; with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.

String cutting

One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that α_0 = Σ_{i=1}^K α_i. The α_i/α_0 values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with α_0.

[Figure: Example of a Dirichlet(1/2, 1/3, 1/6) distribution]
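A quick numerical illustration of this interpretation, assuming NumPy: for Dirichlet(1/2, 1/3, 1/6) the parameters already sum to α_0 = 1, so the mean piece lengths equal the parameters themselves.

import numpy as np

rng = np.random.default_rng(0)
pieces = rng.dirichlet([1/2, 1/3, 1/6], size=50_000)
print(pieces.mean(axis=0))  # approximately [0.5, 0.333, 0.167]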

Pólya's urn

Consider an urn containing balls of K different colors. Initially, the urn contains α_1 balls of color 1, α_2 balls of color 2, and so on. Now perform N draws from the urn, where after each draw, the ball is placed back into the urn together with an additional ball of the same color. In the limit as N approaches infinity, the proportions of different colored balls in the urn will be distributed as Dir(α_1, ..., α_K).[27]

For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]^K-valued martingale; hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.

Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
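The urn scheme is straightforward to simulate; a minimal sketch in Python (function and variable names are illustrative):

import random

def polya_urn(alpha, n_draws):
    # counts[i] = current number of balls of color i in the urn
    counts = list(alpha)
    for _ in range(n_draws):
        # draw a ball with probability proportional to current counts,
        # then return it along with one extra ball of the same color
        i = random.choices(range(len(counts)), weights=counts)[0]
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

# for large n_draws, the returned proportions approximate one draw from Dir(alpha)
print(polya_urn([1, 2, 3], 100_000))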


Random variate generation


From gamma distribution

With a source of gamma-distributed random variates, one can easily sample a random vector x = (x_1, ..., x_K) from the K-dimensional Dirichlet distribution with parameters (α_1, ..., α_K). First, draw K independent random samples y_1, ..., y_K from gamma distributions, each with density

\operatorname{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i - 1} e^{-y_i}}{\Gamma(\alpha_i)},

and then set

x_i = \frac{y_i}{\sum_{j=1}^{K} y_j}.

Proof

The joint distribution of the independently sampled gamma variates, \{y_i\}, is given by the product:

e^{-\sum_{i=1}^{K} y_i} \prod_{i=1}^{K} \frac{y_i^{\alpha_i - 1}}{\Gamma(\alpha_i)}

Next, one uses a change of variables, parametrising \{y_i\} in terms of y_1, y_2, ..., y_{K−1} and \sum_{i=1}^{K} y_i, and performs a change of variables from y → x such that

\bar{x} = \sum_{i=1}^{K} y_i, \qquad x_1 = \frac{y_1}{\bar{x}}, \quad x_2 = \frac{y_2}{\bar{x}}, \quad \ldots, \quad x_{K-1} = \frac{y_{K-1}}{\bar{x}}.

Each of the variables satisfies 0 \le x_1, x_2, \ldots, x_{K-1} \le 1 and likewise 0 \le \sum_{i=1}^{K-1} x_i \le 1. One must then use the change of variables formula, P(x) = P(y(x)) \left| \frac{\partial y}{\partial x} \right|, in which \left| \frac{\partial y}{\partial x} \right| is the transformation Jacobian. Writing y explicitly as a function of x, one obtains

y_1 = \bar{x} x_1, \quad y_2 = \bar{x} x_2, \quad \ldots, \quad y_{K-1} = \bar{x} x_{K-1}, \quad y_K = \bar{x} \left( 1 - \sum_{i=1}^{K-1} x_i \right).

The Jacobian now looks like

\begin{vmatrix} \bar{x} & 0 & \cdots & x_1 \\ 0 & \bar{x} & \cdots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ -\bar{x} & -\bar{x} & \cdots & 1 - \sum_{i=1}^{K-1} x_i \end{vmatrix}

The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first K-1 rows to the bottom row to obtain

\begin{vmatrix} \bar{x} & 0 & \cdots & x_1 \\ 0 & \bar{x} & \cdots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{vmatrix}

which can be expanded about the bottom row to obtain the determinant value \bar{x}^{K-1}. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains:

\frac{\left[ \prod_{i=1}^{K-1} (\bar{x} x_i)^{\alpha_i - 1} \right] \left[ \bar{x} \left( 1 - \sum_{i=1}^{K-1} x_i \right) \right]^{\alpha_K - 1}}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \, \bar{x}^{K-1} e^{-\bar{x}} = \frac{\Gamma(\bar\alpha) \left[ \prod_{i=1}^{K-1} x_i^{\alpha_i - 1} \right] \left[ 1 - \sum_{i=1}^{K-1} x_i \right]^{\alpha_K - 1}}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \times \frac{\bar{x}^{\bar\alpha - 1} e^{-\bar{x}}}{\Gamma(\bar\alpha)}

where \bar\alpha = \sum_{i=1}^{K} \alpha_i. The right-hand side can be recognized as the product of a Dirichlet pdf for the x_i and a gamma pdf for \bar{x}. The product form shows the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain:

f(x_1, x_2, \ldots, x_{K-1}) = \frac{\left( 1 - \sum_{i=1}^{K-1} x_i \right)^{\alpha_K - 1} \prod_{i=1}^{K-1} x_i^{\alpha_i - 1}}{B(\boldsymbol\alpha)}

which is equivalent to

\frac{\prod_{i=1}^{K} x_i^{\alpha_i - 1}}{B(\boldsymbol\alpha)} \qquad \text{with support} \qquad \sum_{i=1}^{K} x_i = 1.

Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the Dirichlet parameters alpha_1, ..., alpha_K
sample = [random.gammavariate(a, 1) for a in params]
sample = [v / sum(sample) for v in sample]

This formulation is correct regardless of how the gamma distributions are parameterized (shape/scale versus shape/rate), because the two parameterizations coincide when scale and rate equal 1.0.

From marginal beta distributions

A less efficient algorithm[28] relies on the univariate marginal and conditional distributions being beta, and proceeds as follows. Simulate x_1 from

\operatorname{Beta}\left( \alpha_1, \sum_{i=2}^{K} \alpha_i \right)

Then simulate x_2, ..., x_{K−1} in order, as follows. For j = 2, ..., K−1, simulate φ_j from

\operatorname{Beta}\left( \alpha_j, \sum_{i=j+1}^{K} \alpha_i \right),

and let

x_j = \left( 1 - \sum_{i=1}^{j-1} x_i \right) \phi_j.

Finally, set

x_K = 1 - \sum_{i=1}^{K-1} x_i.

This iterative procedure corresponds closely to the "string cutting" intuition described above.

Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the Dirichlet parameters alpha_1, ..., alpha_K
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
    # phi_j ~ Beta(alpha_j, alpha_{j+1} + ... + alpha_K)
    phi = random.betavariate(params[j], sum(params[j + 1:]))
    xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))

When each alpha is 1

When α_1 = ... = α_K = 1, a sample from the distribution can be found by randomly drawing a set of K − 1 values independently and uniformly from the interval [0, 1], adding the values 0 and 1 to the set to make it have K + 1 values, sorting the set, and computing the difference between each pair of order-adjacent values, to give x_1, ..., x_K.
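A minimal sketch of this spacings procedure (the function name is illustrative):

import random

def dirichlet_flat(K):
    # spacings of K - 1 sorted uniform points on [0, 1] give one draw from Dir(1, ..., 1)
    cuts = sorted([0.0, 1.0] + [random.random() for _ in range(K - 1)])
    return [b - a for a, b in zip(cuts, cuts[1:])]

print(dirichlet_flat(4))  # four non-negative values summing to 1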

When each alpha is 1/2 and relationship to the hypersphere

When α_1 = ... = α_K = 1/2, a sample from the distribution can be found by randomly drawing K values independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to give x_1, ..., x_K.

A point (u_1, ..., u_K) can be drawn uniformly at random from the (K − 1)-dimensional unit hypersphere (which is the surface of a K-dimensional hyperball) via a similar procedure. Randomly draw K values independently from the standard normal distribution and normalize these coordinate values by dividing each by the constant that is the square root of the sum of their squares.
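A sketch of the corresponding Dirichlet sampler (the function name is illustrative):

import random

def dirichlet_halves(K):
    # squared, normalized standard normals give one draw from Dir(1/2, ..., 1/2)
    sq = [random.gauss(0.0, 1.0) ** 2 for _ in range(K)]
    s = sum(sq)
    return [v / s for v in sq]

print(dirichlet_halves(4))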


References



  1. Kotz, Balakrishnan & Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley. Chapter 49: Dirichlet and Inverted Dirichlet Distributions.
  2.
  3.
  4. Eq. (49.9) on page 488 of Kotz, Balakrishnan & Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley.
  5.
  6.
  7.
  8.
  9.
  10.
  11.
  12.
  13.
  14. Eq. 8.
  15.
  16.
  17. See Kotz, Balakrishnan & Johnson (2000), Section 8.5, "Connor and Mosimann's Generalization", pp. 519–521.
  18.
  19.
  20. Theorem 3.3.
  21.
  22.
  23.
  24.
  25.
  26.
  27.
  28.