Rényi entropy


In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events.[1][2] In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.[3]

The Rényi entropy is important in ecology and statistics as an index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of $\alpha$ can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group.[4][5] In theoretical computer science, the min-entropy is used in the context of randomness extractors.

Definition

The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as[1]
$$H_\alpha(X) = \frac{1}{1-\alpha}\log\left(\sum_{i=1}^{n} p_i^\alpha\right).$$
It is further defined at $\alpha = 0, 1, \infty$ as
$$H_\alpha(X) = \lim_{\gamma\to\alpha} H_\gamma(X).$$

Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \ldots, x_n\}$ and corresponding probabilities $p_i \doteq \Pr(X = x_i)$ for $i = 1, \ldots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base $e$. If the probabilities are $p_i = 1/n$ for all $i = 1, \ldots, n$, then all the Rényi entropies of the distribution are equal: $H_\alpha(X) = \log n$. In general, for all discrete random variables $X$, $H_\alpha(X)$ is a non-increasing function in $\alpha$.

Applications often exploit the following relation between the Rényi entropy and the $\alpha$-norm of the vector of probabilities:
$$H_\alpha(X) = \frac{\alpha}{1-\alpha}\log\left(\|P\|_\alpha\right).$$
Here, the discrete probability distribution $P = (p_1, \ldots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$.
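
The definition translates directly into code. The following is a minimal sketch (not part of the article; the function name and the choice of base-2 logarithms, i.e. shannons, are illustrative) that evaluates $H_\alpha$ for a probability vector, handling the special orders $0$, $1$ and $\infty$ discussed below:

```python
# Minimal sketch: Rényi entropy of a discrete distribution, in shannons (base 2).
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # zero-probability outcomes do not contribute
    if alpha == 0:                        # Hartley / max-entropy: log of the support size
        return np.log2(len(p))
    if alpha == 1:                        # Shannon entropy, the alpha -> 1 limit
        return -np.sum(p * np.log2(p))
    if np.isinf(alpha):                   # min-entropy: -log of the largest probability
        return -np.log2(np.max(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
print([renyi_entropy(p, a) for a in (0, 0.5, 1, 2, np.inf)])   # non-increasing in alpha
print(renyi_entropy([0.25] * 4, 2), np.log2(4))                # uniform case: all orders equal log2(n)
```

For this example distribution the printed values decrease from $H_0 = 2$ down to $H_\infty = 1$, illustrating that $H_\alpha$ is non-increasing in $\alpha$.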

The Rényi entropy for any $\alpha \geq 0$ is Schur concave, which can be proven by the Schur–Ostrowski criterion.

Special cases

Figure (Mplwp reny entropy012inf.svg): Rényi entropy of a random variable with two possible outcomes, plotted against $p_1$, where $P = (p_1, 1 - p_1)$. Shown are $H_0$, $H_1$, $H_2$ and $H_\infty$, with the unit on the vertical axis being the shannon.

As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.

Hartley or max-entropy

$H_0(X)$ is $\log n$, where $n$ is the number of non-zero probabilities.[6] If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet $\mathcal{A}$ of $X$, sometimes called the Hartley entropy of $X$:
$$H_0(X) = \log n = \log |\mathcal{A}|.$$

Shannon entropy

The limiting value of $H_\alpha$ as $\alpha \to 1$ is the Shannon entropy:[7]
$$H_1(X) \equiv \lim_{\alpha\to 1} H_\alpha(X) = -\sum_{i=1}^{n} p_i \log p_i.$$

Collision entropy

Collision entropy, sometimes just called "Rényi entropy", refers to the case $\alpha = 2$:
$$H_2(X) = -\log \sum_{i=1}^{n} p_i^2 = -\log P(X = Y),$$
where $X$ and $Y$ are independent and identically distributed. The collision entropy is related to the index of coincidence. It is the negative logarithm of the Simpson diversity index.
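
Since $H_2(X) = -\log P(X = Y)$ for i.i.d. copies $X$ and $Y$, the collision probability $\sum_i p_i^2$ can also be estimated empirically. A small illustrative sketch (not from the article):

```python
# Sketch: the collision probability of two i.i.d. draws equals sum(p_i^2),
# and H_2 is its negative base-2 logarithm.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])

exact = np.sum(p ** 2)                          # P(X = Y) in closed form (= 0.38 here)
x = rng.choice(len(p), size=200_000, p=p)       # samples of X
y = rng.choice(len(p), size=200_000, p=p)       # independent copy Y
estimate = np.mean(x == y)                      # Monte Carlo estimate of P(X = Y)

print(exact, estimate)                          # close to each other
print("H2 =", -np.log2(exact))                  # collision entropy in shannons
```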

Min-entropy

Script error: No such module "Labelled list hatnote". In the limit as Template:Tmath, the Rényi entropy Hα converges to the min-entropy Template:Tmath: H(X)mini(logpi)=(maxilogpi)=logmaxipi.

Equivalently, the min-entropy $H_\infty(X)$ is the largest real number $b$ such that all events occur with probability at most $2^{-b}$.
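
As a quick numerical illustration of this equivalence (a sketch, assuming base-2 logarithms so that probabilities compare against $2^{-b}$):

```python
# Sketch: min-entropy is the largest b with every p_i <= 2^(-b).
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
h_min = -np.log2(p.max())                 # min-entropy in shannons, here 1.0
print(h_min, np.all(p <= 2.0 ** -h_min))  # every event has probability at most 2^(-h_min)
```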

The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.

The min-entropy has important applications for randomness extractors in theoretical computer science: Extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.

Inequalities for different orders α

The fact that $H_\alpha$ is non-increasing in $\alpha$ for any given distribution of probabilities $p_1, \ldots, p_n$ can be proven by differentiation,[8] as
$$\frac{dH_\alpha}{d\alpha} = -\frac{1}{(1-\alpha)^2}\sum_{i=1}^{n} z_i \log(z_i / p_i) = -\frac{1}{(1-\alpha)^2}\, D_{\mathrm{KL}}(z\|p),$$
which is proportional to the Kullback–Leibler divergence (which is always non-negative), where $z_i = p_i^\alpha / \sum_{j=1}^{n} p_j^\alpha$. In particular, this divergence is strictly positive except when the distribution is uniform.

In the limit $\alpha \to 1$, we have
$$\frac{dH_\alpha}{d\alpha} \to -\frac{1}{2}\sum_i p_i\left(\ln p_i + H(p)\right)^2.$$

In particular cases inequalities can also be proven by Jensen's inequality:[9][10]
$$\log n = H_0 \geq H_1 \geq H_2 \geq H_\infty.$$

For values of $\alpha > 1$, inequalities in the other direction also hold. In particular, we have[11][12]
$$H_2 \leq 2 H_\infty.$$

On the other hand, the Shannon entropy $H_1$ can be arbitrarily high for a random variable $X$ that has a given min-entropy. An example of this is given by the sequence of random variables $X_n \in \{0, \ldots, n\}$ for $n \geq 1$ such that $P(X_n = 0) = 1/2$ and $P(X_n = x) = 1/(2n)$ for $x \in \{1, \ldots, n\}$, since $H_\infty(X_n) = \log 2$ for every $n$ but $H_1(X_n) = \tfrac{1}{2}(\log 2 + \log 2n)$ grows without bound.
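
Both observations are easy to verify numerically. The sketch below (illustrative code, not from the cited sources) checks the chain $H_0 \geq H_1 \geq H_2 \geq H_\infty$ on a random distribution and reproduces the $X_n$ construction, whose min-entropy stays at one shannon while its Shannon entropy grows without bound:

```python
# Sketch: the entropies are non-increasing in alpha, and the X_n example has
# constant min-entropy (log 2) but growing Shannon entropy.
import numpy as np

def renyi(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if alpha == 0:
        return np.log2(len(p))
    if alpha == 1:
        return -np.sum(p * np.log2(p))
    if np.isinf(alpha):
        return -np.log2(p.max())
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
q = rng.dirichlet(np.ones(6))                               # a random 6-outcome distribution
print([round(renyi(q, a), 4) for a in (0, 1, 2, np.inf)])   # a non-increasing sequence

for n in (1, 10, 100, 1000):
    xn = np.array([0.5] + [0.5 / n] * n)                    # P(X_n=0)=1/2, rest uniform
    print(n, renyi(xn, np.inf), round(renyi(xn, 1), 3))     # H_inf stays 1.0, H_1 grows
```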

Rényi divergence

As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.[13]

The Rényi divergence of order $\alpha$, or alpha-divergence, of a distribution $P$ from a distribution $Q$ is defined to be
$$D_\alpha(P \| Q) = \frac{1}{\alpha-1}\log\left(\sum_{i=1}^{n} \frac{p_i^\alpha}{q_i^{\alpha-1}}\right) = \frac{1}{\alpha-1}\log \mathbb{E}_{i\sim p}\!\left[(p_i/q_i)^{\alpha-1}\right]$$
when $0 < \alpha < \infty$ and $\alpha \neq 1$. We can define the Rényi divergence for the special values $\alpha = 0, 1, \infty$ by taking a limit, and in particular the limit $\alpha \to 1$ gives the Kullback–Leibler divergence.

Some special cases:

  • $D_0(P\|Q) = -\log Q(\{i : p_i > 0\})$: minus the log probability under $Q$ that $p_i > 0$;
  • $D_{1/2}(P\|Q) = -2\log\sum_{i=1}^{n}\sqrt{p_i q_i}$: minus twice the logarithm of the Bhattacharyya coefficient;
  • $D_1(P\|Q) = \sum_{i=1}^{n} p_i\log\frac{p_i}{q_i}$: the Kullback–Leibler divergence;
  • $D_2(P\|Q) = \log \mathbb{E}_{i\sim p}\!\left[p_i/q_i\right]$: the log of the expected ratio of the probabilities;
  • $D_\infty(P\|Q) = \log\sup_i \frac{p_i}{q_i}$: the log of the maximum ratio of the probabilities.

The Rényi divergence is indeed a divergence, meaning simply that $D_\alpha(P\|Q)$ is greater than or equal to zero, and zero only when $P = Q$. For any fixed distributions $P$ and $Q$, the Rényi divergence is nondecreasing as a function of its order $\alpha$, and it is continuous on the set of $\alpha$ for which it is finite.[13] The Rényi divergence of order $\alpha$ is also called, for the sake of brevity, the information of order $\alpha$ obtained if the distribution $P$ is replaced by the distribution $Q$.[1]
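
A minimal sketch (illustrative, not from the article) that computes $D_\alpha(P\|Q)$ and shows orders just below 1 approaching the Kullback–Leibler divergence, as well as the nondecreasing behaviour in $\alpha$:

```python
# Sketch: Rényi divergence of order alpha; alpha -> 1 recovers the KL divergence.
import numpy as np

def renyi_divergence(p, q, alpha):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if alpha == 1:                        # Kullback-Leibler divergence (the limit)
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))
    return np.log2(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

P = [0.4, 0.4, 0.2]
Q = [0.2, 0.5, 0.3]
for a in (0.5, 0.9, 0.999, 1, 2):
    print(a, renyi_divergence(P, Q, a))   # nondecreasing in alpha; 0.999 is close to the KL value
```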

Financial interpretation

A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows[14]
$$\text{ExpectedRate} = \frac{1}{R}\, D_1(b\|m) + \frac{R-1}{R}\, D_{1/R}(b\|m),$$
where $m$ is the distribution defining the official odds (i.e. the "market") for the game, $b$ is the investor-believed distribution and $R$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion).

If the true distribution is $p$ (not necessarily coinciding with the investor's belief $b$), the long-term realized rate converges to the true expectation, which has a similar mathematical structure:[14]
$$\text{RealizedRate} = \frac{1}{R}\left(D_1(p\|m) - D_1(p\|b)\right) + \frac{R-1}{R}\, D_{1/R}(b\|m).$$
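
As a purely numerical illustration of the expected-rate formula (a sketch with hypothetical market odds $m$, beliefs $b$ and risk aversion $R$, not taken from the cited reference):

```python
# Sketch: evaluate ExpectedRate = (1/R) D_1(b||m) + ((R-1)/R) D_{1/R}(b||m)
# for hypothetical m, b and R, using natural-logarithm divergences.
import numpy as np

def renyi_div(p, q, alpha):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if alpha == 1:
        return np.sum(p * np.log(p / q))
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

m = np.array([0.5, 0.3, 0.2])   # hypothetical market-implied probabilities
b = np.array([0.4, 0.4, 0.2])   # hypothetical investor beliefs
R = 2.0                         # hypothetical Arrow-Pratt relative risk aversion

expected_rate = renyi_div(b, m, 1) / R + (R - 1) / R * renyi_div(b, m, 1 / R)
print(expected_rate)
```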

Properties specific to α = 1

The value $\alpha = 1$, which gives the Shannon entropy and the Kullback–Leibler divergence, is the only value at which the chain rule of conditional probability holds exactly:
$$H(A, X) = H(A) + \mathbb{E}_{a\sim A}\left[H(X \mid A = a)\right]$$
for the absolute entropies, and
$$D_{\mathrm{KL}}(p(x\mid a)p(a)\,\|\,m(x, a)) = D_{\mathrm{KL}}(p(a)\,\|\,m(a)) + \mathbb{E}_{p(a)}\left\{D_{\mathrm{KL}}(p(x\mid a)\,\|\,m(x\mid a))\right\}$$
for the relative entropies.

The latter in particular means that if we seek a distribution $p(x, a)$ which minimizes the divergence from some underlying prior measure $m(x, a)$, and we acquire new information which only affects the distribution of $a$, then the distribution of $p(x \mid a)$ remains $m(x \mid a)$, unchanged.

The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 coordinate transformations, and of combining additively when $A$ and $X$ are independent, so that if $p(A, X) = p(A)p(X)$, then
$$H_\alpha(A, X) = H_\alpha(A) + H_\alpha(X)$$
and
$$D_\alpha(P(A)P(X)\,\|\,Q(A)Q(X)) = D_\alpha(P(A)\|Q(A)) + D_\alpha(P(X)\|Q(X)).$$
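
The additivity over independent variables is easy to confirm numerically; the following sketch (illustrative, not from the article) checks $H_\alpha(A, X) = H_\alpha(A) + H_\alpha(X)$ for a product distribution at several orders:

```python
# Sketch: Rényi entropy is additive for independent variables at every order alpha.
import numpy as np

def renyi(p, alpha):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    if alpha == 1:
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

pA = np.array([0.7, 0.3])
pX = np.array([0.5, 0.25, 0.25])
joint = np.outer(pA, pX)                 # joint distribution of independent A and X

for a in (0.5, 1, 2, 3):
    print(a, renyi(joint, a), renyi(pA, a) + renyi(pX, a))   # equal for each order
```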

The stronger properties of the α=1 quantities allow the definition of conditional information and mutual information from communication theory.

Exponential families

The Rényi entropies and divergences for an exponential family admit simple expressions[15]
$$H_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha}\left(F(\alpha\theta) - \alpha F(\theta) + \log E_p\!\left[e^{(\alpha-1)k(x)}\right]\right)$$
and
$$D_\alpha(p : q) = \frac{J_{F,\alpha}(\theta : \theta')}{1-\alpha},$$
where
$$J_{F,\alpha}(\theta : \theta') = \alpha F(\theta) + (1-\alpha) F(\theta') - F(\alpha\theta + (1-\alpha)\theta')$$
is a Jensen difference divergence.
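
As a concrete check of the divergence formula (a sketch under the assumption of a Bernoulli family with natural parameter $\theta = \log(p/(1-p))$ and log-normalizer $F(\theta) = \log(1 + e^\theta)$; not from the cited reference), the closed form $J_{F,\alpha}(\theta:\theta')/(1-\alpha)$ agrees with the divergence computed directly from the two probability vectors:

```python
# Sketch: exponential-family Rényi divergence for the Bernoulli family,
# compared with the divergence computed directly from the definition.
import numpy as np

def F(theta):                               # Bernoulli log-normalizer (log-partition function)
    return np.log1p(np.exp(theta))

def renyi_div_direct(p, q, alpha):          # direct definition, natural logarithm
    P, Q = np.array([1 - p, p]), np.array([1 - q, q])
    return np.log(np.sum(P ** alpha * Q ** (1 - alpha))) / (alpha - 1)

p, q, alpha = 0.7, 0.4, 2.0
theta, theta_p = np.log(p / (1 - p)), np.log(q / (1 - q))

J = alpha * F(theta) + (1 - alpha) * F(theta_p) - F(alpha * theta + (1 - alpha) * theta_p)
print(J / (1 - alpha), renyi_div_direct(p, q, alpha))   # both approximately 0.318
```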

Physical meaning

The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.) It can, however, be given an operational meaning through the two-time measurements (also known as full counting statistics) of energy transfers.

The limit of the quantum mechanical Rényi entropy as $\alpha \to 1$ is the von Neumann entropy.


Notes



  1. a b c Template:Harvtxt
  2. Template:Harvtxt
  3. Script error: No such module "Citation/CS1".
  4. Template:Harvtxt
  5. Template:Harvtxt
  6. RFC 4086, page 6
  7. Template:Harvtxt
  8. Template:Harvtxt
  9. $H_1 \geq H_2$ holds because, by Jensen's inequality, $-\sum_{i} p_i \log p_i \geq -\log \sum_{i} p_i^2$.
  10. $H_\infty \leq H_2$ holds because $\sum_{i} p_i^2 \leq \max_i p_i$.
  11. $H_2 \leq 2 H_\infty$ holds because $\log\sum_{i=1}^{n} p_i^2 \geq \log\sup_i p_i^2 = 2\log\sup_i p_i$.
  12. Script error: No such module "citation/CS1".
  13. a b Script error: No such module "Citation/CS1".
  14. a b Template:Harvtxt
  15. Template:Harvtxt