<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Entropy_estimation</id>
	<title>Entropy estimation - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Entropy_estimation"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Entropy_estimation&amp;action=history"/>
	<updated>2026-05-12T17:08:06Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Entropy_estimation&amp;diff=6986591&amp;oldid=prev</id>
		<title>imported&gt;David Eppstein: remove unwanted no-break spaces</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Entropy_estimation&amp;diff=6986591&amp;oldid=prev"/>
		<updated>2025-04-28T07:41:09Z</updated>

		<summary type="html">&lt;p&gt;remove unwanted no-break spaces&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Methods of estimating differential entropy given some observations}}&lt;br /&gt;
In various science and engineering applications, such as [[independent component analysis]],&amp;lt;ref&amp;gt;Dinh-Tuan Pham (2004) Fast algorithms for mutual information based independent component analysis. In &amp;#039;&amp;#039;Signal Processing&amp;#039;&amp;#039;. Volume 52, Issue 10, 2690–2700, {{doi|10.1109/TSP.2004.834398}}&amp;lt;/ref&amp;gt; [[image analysis]],&amp;lt;ref&amp;gt;Chang, C.-I.; Du, Y.; Wang, J.; Guo, S.-M.; Thouin, P.D. (2006) Survey and comparative analysis of entropy and relative entropy thresholding techniques. In &amp;#039;&amp;#039;Vision, Image and Signal Processing&amp;#039;&amp;#039;, Volume 153, Issue 6, 837–850, {{doi|10.1049/ip-vis:20050032}}&amp;lt;/ref&amp;gt; [[genetic analysis]],&amp;lt;ref&amp;gt;Martins, D. C. &amp;#039;&amp;#039;et al.&amp;#039;&amp;#039; (2008) Intrinsically Multivariate Predictive Genes. In &amp;#039;&amp;#039;Selected Topics in Signal Processing&amp;#039;&amp;#039;. Volume 2, Issue 3, 424–439, {{doi|10.1109/JSTSP.2008.923841}}&amp;lt;/ref&amp;gt; [[speech recognition]],&amp;lt;ref&amp;gt;Gue Jun Jung; Yung-Hwan Oh (2008) Information Distance-Based Subvector Clustering for ASR Parameter Quantization. In &amp;#039;&amp;#039;Signal Processing Letters&amp;#039;&amp;#039;, Volume 15, 209–212, {{doi|10.1109/LSP.2007.913132}}&amp;lt;/ref&amp;gt; [[manifold learning]],&amp;lt;ref&amp;gt;Costa, J.A.; Hero, A.O. (2004) Geodesic entropic graphs for dimension and entropy estimation in manifold learning. In &amp;#039;&amp;#039;Signal Processing&amp;#039;&amp;#039;, Volume 52, Issue 8, 2210–2221, {{doi|10.1109/TSP.2004.831130}}&amp;lt;/ref&amp;gt; and time delay estimation,&amp;lt;ref&amp;gt;Benesty, J.; Yiteng Huang; Jingdong Chen (2007) Time Delay Estimation via Minimum Entropy. In &amp;#039;&amp;#039;Signal Processing Letters&amp;#039;&amp;#039;, Volume 14, Issue 3, March 2007, pp. 157–160, {{doi|10.1109/LSP.2006.884038}}&amp;lt;/ref&amp;gt; it is useful to estimate the [[differential entropy]] of a system or process, given some observations.&lt;br /&gt;
&lt;br /&gt;
The simplest and most common approach uses [[histogram]]-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks.&amp;lt;ref name=&amp;quot;beirlant&amp;quot;&amp;gt;J. Beirlant, E. J. Dudewicz, L. Gyorfi, and E. C. van der Meulen (1997) [http://www.its.caltech.edu/~jimbeck/summerlectures/references/Entropy%20estimation.pdf Nonparametric entropy estimation: An overview]. In &amp;#039;&amp;#039;International Journal of Mathematical and Statistical Sciences&amp;#039;&amp;#039;, Volume 6, pp. 17–39.&amp;lt;/ref&amp;gt; The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate,&amp;lt;ref name=&amp;quot;schurmann&amp;quot;&amp;gt;T. Schürmann, Bias analysis in entropy estimation. In &amp;#039;&amp;#039;J. Phys. A: Math. Gen&amp;#039;&amp;#039;, 37 (2004), pp. L295–L301. {{doi|10.1088/0305-4470/37/27/L02}}&amp;lt;/ref&amp;gt; although the nature of the (suspected) distribution of the data may also be a factor,&amp;lt;ref name=&amp;quot;beirlant&amp;quot;/&amp;gt; as well as the sample size and the size of the alphabet of the probability distribution.&amp;lt;ref name=&amp;quot;PBP24&amp;quot;&amp;gt;{{Cite journal |author=Pinchas, A.; Ben-Gal, I.; Painsky, A. |title=A Comparative Analysis of Discrete Entropy Estimators for Large-Alphabet Problems |journal=Entropy |date=2024 |volume=26 |issue=5 |page=369 |url=https://www.iradbengal.sites.tau.ac.il/_files/ugd/901879_2b241babf65440caa600fbd04bc67b46.pdf |doi=10.3390/e26050369 |doi-access=free |pmid=38785618 |pmc=11120205 |bibcode=2024Entrp..26..369P }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Histogram estimator==&lt;br /&gt;
&lt;br /&gt;
The histogram approach uses the idea that the differential entropy of a continuous random variable &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; with probability density function &amp;lt;math&amp;gt;f(x)&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;h(X) = -\int_\mathbb{X} f(x)\log f(x)\,dx&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
can be approximated by first approximating &amp;lt;math&amp;gt;f(x)&amp;lt;/math&amp;gt; with a [[histogram]] of the observations, and then finding the [[Entropy_(information_theory)|discrete entropy]] of a quantization of &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
H(X) = -\sum_{i=1}^{n} f(x_i) \log \left( \frac{f(x_i)}{w(x_i)} \right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with bin probabilities given by that histogram, where &amp;lt;math&amp;gt;w(x_i)&amp;lt;/math&amp;gt; is the width of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th bin. (The histogram is itself a [[Maximum likelihood|maximum-likelihood (ML) estimate]] of the discretized frequency distribution.{{Citation needed|date=October 2018}}) Histograms are quick to calculate and simple to use, so this approach has some appeal. However, the estimate produced is [[Bias_(statistics)|bias]]ed, and although corrections can be made to the estimate, they may not always be satisfactory.&amp;lt;ref name=&amp;quot;miller55&amp;quot;&amp;gt;G. Miller (1955) Note on the bias of information estimates. In &amp;#039;&amp;#039;Information Theory in Psychology: Problems and Methods&amp;#039;&amp;#039;, pp. 95–100.&amp;lt;/ref&amp;gt;&lt;br /&gt;
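&lt;br /&gt;
For illustration only, a minimal NumPy sketch of such a histogram plug-in estimate (the function name and the fixed choice of 30 bins are arbitrary, not taken from the sources above) might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def histogram_entropy(samples, bins=30):&lt;br /&gt;
    # Plug-in estimate of differential entropy (in nats) from a histogram.&lt;br /&gt;
    counts, edges = np.histogram(samples, bins=bins)&lt;br /&gt;
    widths = np.diff(edges)              # bin widths w(x_i)&lt;br /&gt;
    probs = counts / counts.sum()        # bin probabilities f(x_i)&lt;br /&gt;
    nonzero = probs &amp;gt; 0               # empty bins contribute nothing&lt;br /&gt;
    return -np.sum(probs[nonzero] * np.log(probs[nonzero] / widths[nonzero]))&lt;br /&gt;
&lt;br /&gt;
# Standard normal data; the true differential entropy is 0.5*log(2*pi*e), about 1.42 nats.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
print(histogram_entropy(rng.normal(size=10000)))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;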
&lt;br /&gt;
A method better suited to multidimensional [[probability density function]]s (pdfs) is to first make a [[Density estimation|pdf estimate]] with some method, and then compute the entropy from that estimate. One useful pdf estimation method is Gaussian [[mixture model]]ing (GMM), in which the [[expectation maximization]] (EM) algorithm is used to find an ML estimate of a [[weighted sum]] of Gaussian pdfs approximating the data pdf.&lt;br /&gt;
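&lt;br /&gt;
A Gaussian mixture has no closed-form entropy, so the entropy of the fitted model is typically approximated, for example by Monte Carlo. The following sketch (using scikit-learn, purely as an illustration; the component count and sample sizes are arbitrary) fits a GMM by EM and averages the negative log-density over samples drawn from the fitted model:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from sklearn.mixture import GaussianMixture&lt;br /&gt;
&lt;br /&gt;
def gmm_entropy(data, n_components=5, n_mc=100000, seed=0):&lt;br /&gt;
    # Fit a Gaussian mixture by EM, then estimate its differential entropy (in nats)&lt;br /&gt;
    # as the Monte Carlo average of the negative log-density over model samples.&lt;br /&gt;
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(data)&lt;br /&gt;
    samples, _ = gmm.sample(n_mc)&lt;br /&gt;
    return -gmm.score_samples(samples).mean()&lt;br /&gt;
&lt;br /&gt;
# Two-dimensional standard normal data; the true entropy is log(2*pi*e), about 2.84 nats.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
print(gmm_entropy(rng.normal(size=(5000, 2))))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;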
&lt;br /&gt;
==Estimates based on sample-spacings==&lt;br /&gt;
If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives a rough idea of (the [[Multiplicative inverse|reciprocal]] of) the probability density in that region: the closer together the values are, the higher the probability density. This is a very rough estimate with high [[variance]], but it can be improved, for example by considering the spacing between a given value and the one &amp;#039;&amp;#039;m&amp;#039;&amp;#039; places away from it, where &amp;#039;&amp;#039;m&amp;#039;&amp;#039; is some fixed number.&amp;lt;ref name=&amp;quot;beirlant&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The probability density estimated in this way can then be used to calculate the entropy estimate, in a similar way to that given above for the histogram, but with some slight tweaks.&lt;br /&gt;
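&lt;br /&gt;
A classical estimator of this family is the Vasicek &amp;#039;&amp;#039;m&amp;#039;&amp;#039;-spacing estimator, which averages the logarithms of the (suitably scaled) &amp;#039;&amp;#039;m&amp;#039;&amp;#039;-spacings of the order statistics. A minimal NumPy sketch (the value of &amp;#039;&amp;#039;m&amp;#039;&amp;#039; is chosen arbitrarily, and no bias correction is applied):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def vasicek_entropy(samples, m=3):&lt;br /&gt;
    # m-spacing estimate of differential entropy (in nats) for one-dimensional data.&lt;br /&gt;
    x = np.sort(np.asarray(samples, dtype=float))&lt;br /&gt;
    n = x.size&lt;br /&gt;
    # Order statistics outside the range 1..n are clamped to the sample minimum/maximum.&lt;br /&gt;
    upper = x[np.minimum(np.arange(n) + m, n - 1)]&lt;br /&gt;
    lower = x[np.maximum(np.arange(n) - m, 0)]&lt;br /&gt;
    return np.mean(np.log(n * (upper - lower) / (2.0 * m)))&lt;br /&gt;
&lt;br /&gt;
# Standard normal data; the true differential entropy is about 1.42 nats.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
print(vasicek_entropy(rng.normal(size=10000)))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;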
&lt;br /&gt;
One of the main drawbacks of this approach is that it does not extend naturally beyond one dimension: the idea of lining the data points up in order falls apart in higher dimensions. However, using analogous methods, some multidimensional entropy estimators have been developed.&amp;lt;ref name=&amp;quot;lm2003&amp;quot;&amp;gt;E. G. Learned-Miller (2003) A new class of entropy estimators for multi-dimensional densities, in &amp;#039;&amp;#039;Proceedings of the [[International Conference on Acoustics, Speech, and Signal Processing]] (ICASSP’03)&amp;#039;&amp;#039;, vol. 3, April 2003, pp. 297–300.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;il2010&amp;quot;&amp;gt;I. Lee (2010) Sample-spacings based density and entropy estimators for spherically invariant multidimensional data, In &amp;#039;&amp;#039;Neural Computation&amp;#039;&amp;#039;, vol. 22, issue 8, April 2010, pp. 2208–2227.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Estimates based on nearest-neighbours==&lt;br /&gt;
For each point in our dataset, we can find the distance to its [[Nearest neighbour_distribution|nearest neighbour]]. We can in fact estimate the entropy from the distribution of the nearest-neighbour distances of the data points.&amp;lt;ref name=&amp;quot;beirlant&amp;quot;/&amp;gt; (In a uniform distribution these distances all tend to be fairly similar, whereas in a strongly nonuniform distribution they may vary a lot more.)&lt;br /&gt;
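&lt;br /&gt;
A well-known estimator of this kind is the Kozachenko–Leonenko nearest-neighbour estimator. The sketch below (NumPy/SciPy, illustrative only; it assumes continuous data with no duplicate points) uses the &amp;#039;&amp;#039;k&amp;#039;&amp;#039;-nearest-neighbour form with Euclidean distances:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.spatial import cKDTree&lt;br /&gt;
from scipy.special import digamma, gammaln&lt;br /&gt;
&lt;br /&gt;
def knn_entropy(data, k=1):&lt;br /&gt;
    # Kozachenko-Leonenko k-nearest-neighbour estimate of differential entropy (in nats).&lt;br /&gt;
    x = np.asarray(data, dtype=float)&lt;br /&gt;
    n, d = x.shape&lt;br /&gt;
    # Query k+1 neighbours because the closest match is the point itself (distance 0).&lt;br /&gt;
    distances, _ = cKDTree(x).query(x, k=k + 1)&lt;br /&gt;
    radii = distances[:, k]&lt;br /&gt;
    log_unit_ball = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)  # log volume of the unit d-ball&lt;br /&gt;
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(radii))&lt;br /&gt;
&lt;br /&gt;
# Two-dimensional standard normal data; the true entropy is about 2.84 nats.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
print(knn_entropy(rng.normal(size=(5000, 2))))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;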
&lt;br /&gt;
== Bayesian estimator ==&lt;br /&gt;
In the under-sampled regime, having a prior on the distribution can help the estimation. One such [[Bayesian estimator]], known as the NSB ([[Ilya Nemenman|Nemenman]]–Shafee–[[William Bialek|Bialek]]) estimator, was proposed in the neuroscience context.&amp;lt;ref name=Nemenman2003&amp;gt;Ilya Nemenman, Fariel Shafee, William Bialek (2003) Entropy and Inference, Revisited. Advances in Neural Information Processing Systems&amp;lt;/ref&amp;gt;&amp;lt;ref name=Nemenman2004&amp;gt;Ilya Nemenman, [[William Bialek]], de Ruyter (2004) Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E&amp;lt;/ref&amp;gt; The NSB estimator uses a mixture of [[Dirichlet distribution|Dirichlet priors]], chosen such that the induced prior over the entropy is approximately uniform.&lt;br /&gt;
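&lt;br /&gt;
The full NSB estimator integrates over the concentration parameter of the Dirichlet prior. As an illustration of the basic building block only (a single symmetric Dirichlet prior with a fixed concentration β, not the NSB mixture itself), the posterior-mean entropy of observed counts has a simple closed form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.special import digamma&lt;br /&gt;
&lt;br /&gt;
def dirichlet_posterior_entropy(counts, beta=1.0):&lt;br /&gt;
    # Posterior-mean entropy (in nats) of a discrete distribution given observed counts,&lt;br /&gt;
    # under a symmetric Dirichlet prior adding a pseudo-count beta to every symbol.&lt;br /&gt;
    alpha = np.asarray(counts, dtype=float) + beta&lt;br /&gt;
    total = alpha.sum()&lt;br /&gt;
    return digamma(total + 1.0) - np.sum((alpha / total) * digamma(alpha + 1.0))&lt;br /&gt;
&lt;br /&gt;
# Heavily under-sampled example: 100 draws from a uniform distribution over 1000 symbols.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
counts = np.bincount(rng.integers(1000, size=100), minlength=1000)&lt;br /&gt;
print(dirichlet_posterior_entropy(counts, beta=0.5))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;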
&lt;br /&gt;
== Estimates based on expected entropy ==&lt;br /&gt;
A new approach to the problem of entropy evaluation is to compare the expected entropy of a sample of a random sequence with the calculated entropy of the sample. The method gives very accurate results, but it is limited to random sequences modeled as first-order [[Markov chain]]s with small values of bias and correlations. This is the first known method that takes into account the size of the sample sequence and its impact on the accuracy of the entropy calculation.&amp;lt;ref name=&amp;quot;Lesniewicz2014&amp;quot;&amp;gt;Marek Lesniewicz (2014) Expected Entropy as a Measure and Criterion of Randomness of Binary Sequences [http://pe.org.pl/abstract_pl.php?nid=8206] In &amp;#039;&amp;#039;Przeglad Elektrotechniczny&amp;#039;&amp;#039;, Volume 90, 1/2014, pp. 42–46.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Lesniewicz2016&amp;quot;&amp;gt;Marek Lesniewicz (2016) Analyses and Measurements of Hardware Generated Random Binary Sequences Modeled as Markov Chains [http://pe.org.pl/abstract_pl.php?nid=10232] In &amp;#039;&amp;#039;Przeglad Elektrotechniczny&amp;#039;&amp;#039;, Volume 92, 11/2016, pp. 268–274.&amp;lt;/ref&amp;gt;&lt;br /&gt;
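&lt;br /&gt;
The cited papers derive exact expressions for the expected entropy of Markov-modelled binary sequences. Purely as a loose illustration of the comparison idea (using a simple first-order approximation of the expected plug-in entropy of an ideal unbiased sequence, not the formulas from those papers):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def block_entropy(bits, block_len=8):&lt;br /&gt;
    # Plug-in (maximum-likelihood) entropy, in bits per block, over non-overlapping blocks.&lt;br /&gt;
    bits = np.asarray(bits)[: (len(bits) // block_len) * block_len]&lt;br /&gt;
    blocks = bits.reshape(-1, block_len) @ (2 ** np.arange(block_len))&lt;br /&gt;
    counts = np.bincount(blocks, minlength=2 ** block_len)&lt;br /&gt;
    p = counts[counts &amp;gt; 0] / counts.sum()&lt;br /&gt;
    return -np.sum(p * np.log2(p))&lt;br /&gt;
&lt;br /&gt;
def expected_ideal_entropy(n_blocks, block_len=8):&lt;br /&gt;
    # First-order approximation of the expected plug-in entropy (in bits) of n_blocks&lt;br /&gt;
    # samples drawn from a perfectly uniform distribution over 2**block_len block values.&lt;br /&gt;
    k = 2 ** block_len&lt;br /&gt;
    return block_len - (k - 1) / (2.0 * n_blocks * np.log(2.0))&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
bits = rng.integers(2, size=100000)   # stand-in for a hardware-generated binary sequence&lt;br /&gt;
print(block_entropy(bits), expected_ideal_entropy(len(bits) // 8))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;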
&lt;br /&gt;
== Deep Neural Network estimator ==&lt;br /&gt;
A deep neural network (DNN) can be used to estimate the joint entropy; such an estimator is called the Neural Joint Entropy Estimator (NJEE).&amp;lt;ref name=&amp;quot;SPB22&amp;quot;&amp;gt;{{Cite journal |author=Shalev, Y.; Painsky, A.; Ben-Gal, I. |title=Neural Joint Entropy Estimation |journal=IEEE Transactions on Neural Networks and Learning Systems |date=2024 |volume=35 |issue=4 |pages=5488–5500 |url=https://www.iradbengal.sites.tau.ac.il/_files/ugd/901879_d51bc0a620734585b5d3154488b3ae84.pdf |doi=10.1109/TNNLS.2022.3204919 |pmid=36155469 |arxiv=2012.11197 }}&amp;lt;/ref&amp;gt; In practice, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in an image classification task, the NJEE maps a vector of pixel values to probabilities over possible image classes. The probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator and to outperform other methods in the case of large alphabet sizes.&amp;lt;ref name=&amp;quot;SPB22&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;PBP24&amp;quot;/&amp;gt;&lt;br /&gt;
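&lt;br /&gt;
As a rough sketch of the underlying mechanism only (a classifier whose average cross-entropy loss estimates a conditional entropy), and not the full NJEE construction from the cited paper, a minimal PyTorch example might look like this; the network size, training schedule and toy data are all arbitrary:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
from torch import nn&lt;br /&gt;
&lt;br /&gt;
def conditional_entropy_estimate(x, y, n_classes, epochs=200):&lt;br /&gt;
    # Train a small classifier to predict Y from X; its cross-entropy loss (in nats)&lt;br /&gt;
    # upper-bounds H(Y|X) in expectation and approaches it as the model fits the data.&lt;br /&gt;
    # (For a real estimate the loss should be evaluated on held-out data to limit overfitting.)&lt;br /&gt;
    model = nn.Sequential(nn.Linear(x.shape[1], 64), nn.Tanh(), nn.Linear(64, n_classes))&lt;br /&gt;
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)&lt;br /&gt;
    loss_fn = nn.CrossEntropyLoss()   # softmax output layer plus negative log-likelihood&lt;br /&gt;
    for _ in range(epochs):&lt;br /&gt;
        optimizer.zero_grad()&lt;br /&gt;
        loss = loss_fn(model(x), y)&lt;br /&gt;
        loss.backward()&lt;br /&gt;
        optimizer.step()&lt;br /&gt;
    with torch.no_grad():&lt;br /&gt;
        return loss_fn(model(x), y).item()&lt;br /&gt;
&lt;br /&gt;
# Toy data: a binary label that is a noisy function of the first input coordinate.&lt;br /&gt;
torch.manual_seed(0)&lt;br /&gt;
x = torch.randn(2000, 5)&lt;br /&gt;
y = ((x[:, 0] + 0.5 * torch.randn(2000)) &amp;gt; 0).long()&lt;br /&gt;
print(conditional_entropy_estimate(x, y, n_classes=2))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;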
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
{{DEFAULTSORT:Entropy Estimation}}&lt;br /&gt;
[[Category:Entropy and information]]&lt;br /&gt;
[[Category:Information theory]]&lt;br /&gt;
[[Category:Statistical randomness]]&lt;br /&gt;
[[Category:Random number generation]]&lt;/div&gt;</summary>
		<author><name>imported&gt;David Eppstein</name></author>
	</entry>
</feed>