Information theory
{{Information theory}}
'''Information theory''' is the mathematical study of the [[quantification (science)|quantification]], [[Data storage|storage]], and [[telecommunications|communication]] of [[information]]. The field was established and formalized by [[Claude Shannon]] in the 1940s,<ref>{{Cite journal |last=Schneider |first=Thomas D. |date=2006 |title=Claude Shannon: Biologist |journal=IEEE Engineering in Medicine and Biology Magazine |volume=25 |issue=1 |pages=30–33 |doi=10.1109/memb.2006.1578661 |issn=0739-5175 |pmc=1538977 |pmid=16485389}}</ref> though early contributions were made in the 1920s through the works of [[Harry Nyquist]] and [[Ralph Hartley]]. It is at the intersection of [[electronic engineering]], [[mathematics]], [[statistics]], [[computer science]], [[Neuroscience|neurobiology]], [[physics]], and [[electrical engineering]].<ref name=":2">{{Cite journal |last1=Cruces |first1=Sergio |last2=Martín-Clemente |first2=Rubén |last3=Samek |first3=Wojciech |date=2019-07-03 |title=Information Theory Applications in Signal Processing |journal=Entropy |volume=21 |issue=7 |page=653 |doi=10.3390/e21070653 |doi-access=free |issn=1099-4300 |pmc=7515149 |pmid=33267367|bibcode=2019Entrp..21..653C }}</ref><ref name=":0">{{Cite book |url=https://books.google.com/books?id=TNpVEAAAQBAJ&pg=PA23 |title=Fractional Order Systems and Applications in Engineering |publisher=Academic Press |year=2023 |isbn=978-0-323-90953-2 |editor-last=Baleanu |editor-first=D. |series=Advanced Studies in Complex Systems |location=London, United Kingdom |page=23 |language=en |oclc=on1314337815 |editor-last2=Balas |editor-first2=Valentina Emilia |editor-last3=Agarwal |editor-first3=Praveen}}</ref>
A key measure in information theory is [[information entropy|entropy]]. Entropy quantifies the amount of uncertainty involved in the value of a [[random variable]] or the outcome of a [[random process]]. For example, identifying the outcome of a [[Fair coin|fair]] [[coin flip]] (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a [[dice|die]] (which has six equally likely outcomes). Some other important measures in information theory are [[mutual information]], [[channel capacity]], [[error exponent]]s, and [[relative entropy]]. Important sub-fields of information theory include [[source coding]], [[algorithmic complexity theory]], [[algorithmic information theory]] and [[information-theoretic security]].
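This comparison can be reproduced with a short calculation. The following Python sketch (an illustrative example only; the helper name <code>entropy</code> and the two toy distributions are not drawn from the cited sources) evaluates both entropies in bits:

<syntaxhighlight lang="python">
from math import log2

def entropy(probabilities):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

fair_coin = [1/2, 1/2]      # two equally likely outcomes
fair_die = [1/6] * 6        # six equally likely outcomes

print(entropy(fair_coin))   # 1.0 bit
print(entropy(fair_die))    # about 2.585 bits
</syntaxhighlight>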
Applications of fundamental topics of information theory include source coding/[[data compression]] (e.g. for [[ZIP (file format)|ZIP files]]), and channel coding/[[error detection and correction]] (e.g. for [[digital subscriber line|DSL]]). Its impact has been crucial to the success of the [[Voyager program|Voyager]] missions to deep space,<ref name=":22">{{Cite web |last=Horgan |first=John |author-link=John Horgan (journalist) |date=2016-04-27 |title=Claude Shannon: Tinkerer, Prankster, and Father of Information Theory |url=https://spectrum.ieee.org/claude-shannon-tinkerer-prankster-and-father-of-information-theory |access-date=2024-11-08 |website=[[IEEE]] |language=en |archive-date=2024-11-10 |archive-url=https://web.archive.org/web/20241110233848/https://spectrum.ieee.org/claude-shannon-tinkerer-prankster-and-father-of-information-theory |url-status=live }}</ref> the invention of the [[compact disc]], the feasibility of mobile phones and the development of the [[Internet]] and [[artificial intelligence]].<ref>{{Cite book |last=Shi |first=Zhongzhi |url=https://books.google.com/books?id=xMTFCgAAQBAJ |title=Advanced Artificial Intelligence |date=2011 |publisher=[[World Scientific Publishing]] |isbn=978-981-4291-34-7 |page=2 |language=en |doi=10.1142/7547 |archive-date=2024-11-10 |access-date=2024-11-09 |archive-url=https://web.archive.org/web/20241110202013/https://books.google.com/books?id=xMTFCgAAQBAJ |url-status=live }}</ref><ref>{{Cite book |last1=Sinha |first1=Sudhi |url=https://books.google.com/books?id=2pb-DwAAQBAJ |title=Reimagining Businesses with AI |last2=Al Huraimel |first2=Khaled |date=2020-10-20 |publisher=Wiley |isbn=978-1-119-70915-2 |edition=1 |page=4 |language=en |doi=10.1002/9781119709183}}</ref><ref name=":0" /> The theory has also found applications in other areas, including [[statistical inference]],<ref>{{cite book|last1=Burnham|first1=K. P.|last2=Anderson|first2=D. R.|year=2002|title=Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach|edition=Second|language=en|publisher=Springer Science|location=New York|isbn=978-0-387-95364-9}}</ref> [[cryptography]], [[neurobiology]],<ref name="Spikes">{{cite book|title=Spikes: Exploring the Neural Code|author1=F. Rieke|author2=D. Warland|author3=R Ruyter van Steveninck|author4=W Bialek|publisher=The MIT press|year=1997|isbn=978-0-262-68108-7}}</ref> [[perception]],<ref>{{Cite journal|last1=Delgado-Bonal|first1=Alfonso|last2=Martín-Torres|first2=Javier|date=2016-11-03|title=Human vision is determined based on information theory|journal=Scientific Reports|language=En|volume=6|issue=1|article-number=36038|bibcode=2016NatSR...636038D|doi=10.1038/srep36038|issn=2045-2322|pmc=5093619|pmid=27808236}}</ref> [[signal processing]],<ref name=":2" /> [[linguistics]], the evolution<ref>{{cite journal|last1=cf|last2=Huelsenbeck|first2=J. P.|last3=Ronquist|first3=F.|last4=Nielsen|first4=R.|last5=Bollback|first5=J. P.|year=2001|title=Bayesian inference of phylogeny and its impact on evolutionary biology|url=https://archive.org/details/sim_science_2001-12-14_294_5550/page/2310|journal=Science|volume=294|issue=5550|pages=2310–2314|bibcode=2001Sci...294.2310H|doi=10.1126/science.1065889|pmid=11743192|s2cid=2138288}}</ref> and function<ref>{{cite journal|last1=Allikmets|first1=Rando|last2=Wasserman|first2=Wyeth W.|last3=Hutchinson|first3=Amy|last4=Smallwood|first4=Philip|last5=Nathans|first5=Jeremy|last6=Rogan|first6=Peter K.|year=1998|title=Organization of the ABCR gene: analysis of promoter and splice junction sequences|url=http://alum.mit.edu/www/toms/|journal=Gene|volume=215|issue=1|pages=111–122|doi=10.1016/s0378-1119(98)00269-8|pmid=9666097|doi-access=free|archive-date=2008-08-21|access-date=2010-10-18|archive-url=https://web.archive.org/web/20080821124201/http://alum.mit.edu/www/toms|url-status=live}}</ref> of molecular codes ([[bioinformatics]]), [[thermal physics]],<ref>{{cite journal|last1=Jaynes|first1=E. T.|year=1957|title=Information Theory and Statistical Mechanics|url=http://bayes.wustl.edu/|journal=Phys. Rev.|volume=106|issue=4|page=620|bibcode=1957PhRv..106..620J|doi=10.1103/physrev.106.620|s2cid=17870175|archive-date=2011-08-30|access-date=2008-03-13|archive-url=https://web.archive.org/web/20110830215943/http://bayes.wustl.edu/|url-status=live}}</ref> [[molecular dynamics]],<ref>{{Cite journal|last1=Talaat|first1=Khaled|last2=Cowen|first2=Benjamin|last3=Anderoglu|first3=Osman|date=2020-10-05|title=Method of information entropy for convergence assessment of molecular dynamics simulations|journal=Journal of Applied Physics|language=En|volume=128|issue=13|page=135102|doi=10.1063/5.0019078|bibcode=2020JAP...128m5102T|osti=1691442|s2cid=225010720|doi-access=free}}</ref> [[black hole]]s, [[quantum computing]], [[information retrieval]], [[Intelligence (Information Gathering)|intelligence gathering]], [[plagiarism detection]],<ref>{{cite journal|last1=Bennett|first1=Charles H.|last2=Li|first2=Ming|last3=Ma|first3=Bin|year=2003|title=Chain Letters and Evolutionary Histories|url=http://sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=08B64096-0772-4904-9D48227D5C9FAC75|journal=[[Scientific American]]|volume=288|issue=6|pages=76–81|bibcode=2003SciAm.288f..76B|doi=10.1038/scientificamerican0603-76|pmid=12764940|access-date=2008-03-11|archive-url=https://web.archive.org/web/20071007041539/http://www.sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=08B64096-0772-4904-9D48227D5C9FAC75|archive-date=2007-10-07}}</ref> [[pattern recognition]], [[anomaly detection]],<ref>{{Cite web|url=http://aicanderson2.home.comcast.net/~aicanderson2/home.pdf|title=Some background on why people in the empirical sciences may want to better understand the information-theoretic methods|author=David R. Anderson|date=November 1, 2003|archive-url=https://web.archive.org/web/20110723045720/http://aicanderson2.home.comcast.net/~aicanderson2/home.pdf|archive-date=July 23, 2011|access-date=2010-06-23}}
</ref> the analysis of [[music]],<ref>{{Citation |last=Loy |first=D. Gareth |title=Music, Expectation, and Information Theory |date=2017 |work=The Musical-Mathematical Mind: Patterns and Transformations |series=Computational Music Science |pages=161–169 |editor-last=Pareyon |editor-first=Gabriel |url=https://link.springer.com/chapter/10.1007/978-3-319-47337-6_17 |access-date=2024-09-19 |place=Cham |publisher=Springer International Publishing |language=en |doi=10.1007/978-3-319-47337-6_17 |isbn=978-3-319-47337-6 |editor2-last=Pina-Romero |editor2-first=Silvia |editor3-last=Agustín-Aquino |editor3-first=Octavio A. |editor4-last=Lluis-Puebla |editor4-first=Emilio|url-access=subscription }}</ref><ref>{{Cite journal |last1=Rocamora |first1=Martín |last2=Cancela |first2=Pablo |last3=Biscainho |first3=Luiz |date=2019-04-05 |title=Information Theory Concepts Applied to the Analysis of Rhythm in Recorded Music with Recurrent Rhythmic Patterns |url=http://www.aes.org/e-lib/browse.cfm?elib=20449 |journal=Journal of the Audio Engineering Society |volume=67 |issue=4 |pages=160–173 |doi=10.17743/jaes.2019.0003|url-access=subscription }}</ref> [[Art|art creation]],<ref>{{Cite journal |last=Marsden |first=Alan |date=2020 |title=New Prospects for Information Theory in Arts Research |url=https://direct.mit.edu/leon/article/53/3/274-280/96875 |journal=Leonardo |language=en |volume=53 |issue=3 |pages=274–280 |doi=10.1162/leon_a_01860 |issn=0024-094X}}</ref> [[imaging system]] design,<ref>{{Cite arXiv|title=Universal evaluation and design of imaging systems using information estimation|eprint=2405.20559 |last1=Pinkard |first1=Henry |last2=Kabuli |first2=Leyla |last3=Markley |first3=Eric |last4=Chien |first4=Tiffany |last5=Jiao |first5=Jiantao |last6=Waller |first6=Laura |date=2024 |class=physics.optics }}</ref> study of [[outer space]],<ref>{{Cite journal |last1=Wing |first1=Simon |last2=Johnson |first2=Jay R. |date=2019-02-01 |title=Applications of Information Theory in Solar and Space Physics |journal=Entropy |language=en |volume=21 |issue=2 |page=140 |doi=10.3390/e21020140 |issn=1099-4300 |pmc=7514618 |pmid=33266856 |doi-access=free|bibcode=2019Entrp..21..140W }}</ref> the dimensionality of [[space]],<ref>{{Cite journal |last=Kak |first=Subhash |date=2020-11-26 |title=Information theory and dimensionality of space |journal=Scientific Reports |language=en |volume=10 |issue=1 |article-number=20733 |doi=10.1038/s41598-020-77855-9 |pmid=33244156 |pmc=7693271 |issn=2045-2322}}</ref> and [[epistemology]].<ref>{{Cite journal |last=Harms |first=William F. |date=1998 |title=The Use of Information Theory in Epistemology |url=https://www.jstor.org/stable/188281 |journal=Philosophy of Science |volume=65 |issue=3 |pages=472–501 |doi=10.1086/392657 |jstor=188281 |issn=0031-8248}}</ref>
==Overview==
Information theory studies the transmission, processing, extraction, and utilization of [[information]]. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled ''[[A Mathematical Theory of Communication]]'', in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the [[noisy-channel coding theorem]], showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.<ref name="Spikes" />
Coding theory is concerned with finding explicit methods, called ''codes'', for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and [[error-correction]] (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.<ref>{{Cite book |last1=Berrou |first1=C. |last2=Glavieux |first2=A. |last3=Thitimajshima |first3=P. |chapter=Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1 |date=May 1993 |title=Proceedings of ICC '93 - IEEE International Conference on Communications |volume=2 |pages=1064–1070 vol.2 |doi=10.1109/ICC.1993.397441|isbn=0-7803-0950-2 }}</ref><ref>{{Cite journal |last=MacKay |first=D.J.C. |date=March 1999 |title=Good error-correcting codes based on very sparse matrices |journal=IEEE Transactions on Information Theory |volume=45 |issue=2 |pages=399–431 |doi=10.1109/18.748992 |bibcode=1999ITIT...45..399M |issn=1557-9654}}</ref>
A third class of information theory codes is [[cryptographic algorithm]]s (both [[code (cryptography)|code]]s and [[cipher]]s). Concepts, methods and results from coding theory and information theory, such as the [[Ban (unit)|unit ban]], are widely used in cryptography and [[cryptanalysis]].<ref>{{Cite book |last1=Menezes |first1=Alfred J. |url=https://www.taylorfrancis.com/books/9780429881329 |title=Handbook of Applied Cryptography |last2=van Oorschot |first2=Paul C. |last3=Vanstone |first3=Scott A. |date=2018-12-07 |publisher=CRC Press |isbn=978-0-429-46633-5 |edition=1 |language=en |doi=10.1201/9780429466335}}</ref>
{{Main|History of information theory}}
The landmark event ''establishing'' the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude Shannon's classic paper "A Mathematical Theory of Communication" in the ''[[Bell System Technical Journal]]'' in July and October 1948. Historian [[James Gleick]] rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the [[transistor]].{{sfn|Gleick|2011|pp=3–4}} Shannon came to be known as the "father of information theory".<ref>{{Cite web |last=Horgan |first=John |date=2016-04-27 |title=Claude Shannon: Tinkerer, Prankster, and Father of Information Theory |url=https://spectrum.ieee.org/claude-shannon-tinkerer-prankster-and-father-of-information-theory |access-date=2023-09-30 |website=[[IEEE]] |language=en |archive-date=2023-09-28 |archive-url=https://web.archive.org/web/20230928150930/https://spectrum.ieee.org/claude-shannon-tinkerer-prankster-and-father-of-information-theory |url-status=live }}</ref><ref>{{Cite magazine |last=Roberts |first=Siobhan |date=2016-04-30 |title=The Forgotten Father of the Information Age |language=en-US |magazine=The New Yorker |url=https://www.newyorker.com/tech/annals-of-technology/claude-shannon-the-father-of-the-information-age-turns-1100100 |access-date=2023-09-30 |issn=0028-792X |archive-date=2023-09-28 |archive-url=https://web.archive.org/web/20230928150930/https://www.newyorker.com/tech/annals-of-technology/claude-shannon-the-father-of-the-information-age-turns-1100100 |url-status=live }}</ref><ref name=":1">{{Cite web |last=Tse |first=David |date=2020-12-22 |title=How Claude Shannon Invented the Future |url=https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/ |access-date=2023-09-30 |website=Quanta Magazine |archive-date=2023-12-15 |archive-url=https://web.archive.org/web/20231215165308/https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/ |url-status=live }}</ref> Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to [[Vannevar Bush]].<ref name=":1" />
Prior to this paper, limited information-theoretic ideas had been developed at [[Bell Labs]], all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, ''Certain Factors Affecting Telegraph Speed'', contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation {{math|1=''W'' = ''K'' log ''m''}} (recalling the [[Boltzmann constant]]), where ''W'' is the speed of transmission of intelligence, ''m'' is the number of different voltage levels to choose from at each time step, and ''K'' is a constant. Ralph Hartley's 1928 paper, ''Transmission of Information'', uses the word ''information'' as a measurable quantity, reflecting the receiver's ability to distinguish one [[sequence of symbols]] from any other, thus quantifying information as {{math|1=''H'' = log ''S''<sup>''n''</sup> = ''n'' log ''S''}}, where ''S'' was the number of possible symbols, and ''n'' the number of symbols in a transmission. The unit of information was therefore the [[decimal digit]], which has since sometimes been called the [[Hartley (unit)|hartley]] in his honor as a unit of information. [[Alan Turing]] in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War [[Cryptanalysis of the Enigma|Enigma]] ciphers.{{citation needed|date=April 2024}}
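Hartley's measure can be illustrated with a small worked example (the numbers here are illustrative, not taken from the 1928 paper): a message of {{math|1=''n'' = 3}} decimal digits ({{math|1=''S'' = 10}}) carries {{math|1=''H'' = 3 log 10 = 3}} hartleys of information. A minimal Python sketch:

<syntaxhighlight lang="python">
from math import log10

def hartley_information(n_symbols, alphabet_size):
    """Hartley's measure H = n log S, expressed in decimal digits (hartleys)."""
    return n_symbols * log10(alphabet_size)

print(hartley_information(3, 10))   # 3.0 hartleys for a three-digit decimal message
print(hartley_information(8, 2))    # about 2.41 hartleys for eight binary symbols
</syntaxhighlight>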
Much of the mathematics behind information theory for events of different probabilities was developed for the field of [[thermodynamics]] by [[Ludwig Boltzmann]] and [[J. Willard Gibbs]]. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by [[Rolf Landauer]] in the 1960s, are explored in ''[[Entropy in thermodynamics and information theory]]''.{{citation needed|date=April 2024}}
In Shannon's paper, the work for which had been substantially completed at Bell Labs by the end of 1944, he introduced for the first time the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion:<ref>{{Cite journal |last=Shannon |first=Claude E. |date=July 1948 |title=A Mathematical Theory of Communication |url=http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf |journal=[[Bell System Technical Journal]] |volume=27 |issue=3 & 4 |pages=379–423, 623–656 |doi=10.1002/j.1538-7305.1948.tb01338.x |bibcode=1948BSTJ...27..379S |archive-date=2019-02-15 |access-date=2019-08-16 |archive-url=https://web.archive.org/web/20190215102750/http://www.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf }}</ref>
:"''The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point.''" | :"''The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point.''" | ||
With it came the ideas of:
* The [[information entropy]] and [[redundancy (information theory)|redundancy]] of a source, and its relevance through the [[source coding theorem]];
* The [[mutual information]], and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
* The practical result of the [[Shannon–Hartley law]] for the channel capacity of a [[Gaussian channel]]; as well as
* The [[bit]]—a new way of seeing the most fundamental unit of information.{{citation needed|date=April 2024}}
==Quantities of information==
{{Unreferenced section|date=April 2024}}{{Main|Quantities of information}}
Information theory is based on [[probability theory]] and statistics, where [[Quantities of information|quantified information]] is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called [[Entropy (information theory)|entropy]], which forms the building block of many other measures. Entropy allows quantification of the amount of information in a single random variable.<ref>{{Cite web |last=Braverman |first=Mark |date=September 19, 2011 |title=Information Theory in Computer Science |url=https://www.cs.princeton.edu/courses/archive/fall11/cos597D/L01.pdf |access-date=May 25, 2024 |archive-date=May 20, 2024 |archive-url=https://web.archive.org/web/20240520124543/https://www.cs.princeton.edu/courses/archive/fall11/cos597D/L01.pdf |url-status=live }}</ref>
Another useful concept is mutual information, defined on two random variables, which describes the amount of information the two variables have in common and can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy [[Communication channel|channel]] in the limit of long block lengths, when the channel statistics are determined by the joint distribution.
The choice of logarithmic base in the following formulae determines the [[units of measurement|unit]] of information entropy that is used. A common unit of information is the bit or [[shannon (unit)|shannon]], based on the [[binary logarithm]]. Other units include the [[nat (unit)|nat]], which is based on the [[natural logarithm]], and the [[deciban|decimal digit]], which is based on the [[common logarithm]].
In what follows, an expression of the form {{math|''p'' log ''p''}} is considered by convention to be equal to zero whenever {{math|1=''p'' = 0}}. This is justified because <math>\lim_{p \rightarrow 0^{+}} p \log p = 0</math> for any logarithmic base.
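The convention can be checked numerically; the following sketch (illustrative only) shows the product {{math|''p'' log ''p''}} shrinking toward zero as {{math|''p''}} does:

<syntaxhighlight lang="python">
from math import log2

for p in [0.1, 0.01, 1e-6, 1e-12]:
    print(p, p * log2(p))   # the product approaches 0 as p approaches 0 from above
</syntaxhighlight>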
===Entropy of an information source===
Based on the [[probability mass function]] of a source, the Shannon entropy ''H'', in units of bits per symbol, is defined as the [[expected value]] of the information content of the symbols.<ref name="CoverThomas_Ch2">{{Cite book |last1=Cover |first1=Thomas M. |last2=Thomas |first2=Joy A. |title=Elements of Information Theory |edition=2nd |date=2006 |publisher=Wiley-Interscience |isbn=978-0-471-24195-9 |page=14}}</ref><ref name="Arndt_InfoTheory">{{Cite book |last=Arndt |first=C. |title=Information Measures: Information and its Description in Science and Engineering |date=2004 |publisher=Springer |isbn=978-3-540-40855-0 |page=5}}</ref>
The amount of information conveyed by an individual source symbol <math>x_{i}</math> with probability <math>p_{i}</math> is known as its '''[[Information content|self-information]]''' or '''surprisal''', <math>I(p_{i})</math>. This quantity is defined as:<ref name="MacKay_Ch2">{{Cite book |last=MacKay |first=David J.C. |title=Information Theory, Inference and Learning Algorithms |date=2003 |publisher=Cambridge University Press |isbn=978-0-521-64298-9 |page=29}}</ref><ref name="Carter_Entropy">{{Cite web |url=http://www.mdpi.com/1099-4300/11/3/388/pdf |title="Entropy" at the Clausius-Shannon Interface |last=Carter |first=Tom |date=2009 |website=Entropy |volume=11 |issue=3 |pages=394–438 |doi=10.3390/e11030388 |doi-broken-date=6 July 2025 |doi-access=free |access-date=2025-07-06}}</ref> | |||
:<math>I(p_i) = -\log_2(p_i)</math> | |||
A less probable symbol has a larger surprisal, meaning its occurrence provides more information.<ref name="MacKay_Ch2"/> The entropy <math>H</math> is the weighted average of the surprisal of all possible symbols from the source's probability distribution:<ref name="Reza_Ch3">{{Cite book |last=Reza |first=Fazlollah M. |title=An Introduction to Information Theory |date=1994 |publisher=Dover Publications |isbn=978-0-486-68210-5 |page=66}}</ref><ref name="Stone_InfoTheory">{{Cite book |last=Stone |first=James V. |title=Information Theory: A Tutorial Introduction |date=2015 |publisher=Sebtel Press |isbn=978-0-9563728-5-7 |page=7}}</ref>
:<math>H(X) \ = \ \mathbb{E}_{X}[I(x)] \ = \ \sum_{i} p_i I(p_i) \ = \ -\sum_{i} p_i \log_2(p_i)</math> | |||
Intuitively, the entropy <math>H(X)</math> of a [[discrete random variable]] {{math|''X''}} is a measure of the amount of ''uncertainty'' associated with the value of <math>X</math> when only its distribution is known.<ref name="CoverThomas_Ch2"/> A high entropy indicates the outcomes are more evenly distributed, making the result harder to predict.<ref name="Schneider_InfoTheoryPrimer">{{Cite web |url=http://www.lecb.ncifcrf.gov/~toms/paper/primer/ |title=Information Theory Primer |last=Schneider |first=Tom D. |date=2007-03-22 |access-date=2025-07-06 |archive-date=2012-08-29 |archive-url=https://web.archive.org/web/20120829010323/http://www.lecb.ncifcrf.gov/~toms/paper/primer/ }}</ref> | |||
For example, if one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, no information is transmitted. If, however, each bit is independently and equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted.<ref name="CoverThomas_Ch1">{{Cite book |last1=Cover |first1=Thomas M. |last2=Thomas |first2=Joy A. |title=Elements of Information Theory |edition=2nd |date=2006 |publisher=Wiley-Interscience |isbn=978-0-471-24195-9 |page=1}}</ref> | |||
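A minimal Python sketch of this example (illustrative only; it applies the entropy of a single bit to 1000 independent transmissions):

<syntaxhighlight lang="python">
from math import log2

def bit_entropy(p_one):
    """Entropy, in shannons, of a single bit that equals 1 with probability p_one."""
    if p_one in (0.0, 1.0):
        return 0.0                      # a bit known with certainty carries no information
    return -(p_one * log2(p_one) + (1 - p_one) * log2(1 - p_one))

n_bits = 1000
print(n_bits * bit_entropy(1.0))   # 0.0    -- every bit known in advance
print(n_bits * bit_entropy(0.5))   # 1000.0 -- independent, equally likely bits
</syntaxhighlight>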
[[File:Binary entropy plot.svg|thumbnail|right|200px|The entropy of a [[Bernoulli trial]] as a function of success probability, often called the {{em|[[binary entropy function]]}}, {{math|''H''<sub>b</sub>(''p'')}}. The entropy is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.]]
==== Properties ==== | |||
A key property of entropy is that it is maximized when all the messages in the message space are [[equiprobable]]. For a source with {{mvar|n}} possible symbols, where <math display="inline">p_{i} = \frac{1}{n}</math> for all <math>i</math>, the entropy is given by:<ref name="CoverThomas_Ch2_Theorem2.6.2">{{Cite book |last1=Cover |first1=Thomas M. |last2=Thomas |first2=Joy A. |title=Elements of Information Theory |edition=2nd |date=2006 |publisher=Wiley-Interscience |isbn=978-0-471-24195-9 |page=27 |quote=Theorem 2.6.2: ''H''(''X'') ≤ log{{!}}''X''{{!}}, with equality if and only if ''X'' has a uniform distribution over ''X''.}}</ref> | |||
:<math>H(X) = \log_2(n)</math> | |||
This maximum value represents the most unpredictable state.<ref name="Reza_Ch3"/> | |||
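A short numerical check (illustrative only; the two four-symbol distributions are invented) compares a uniform source with a skewed one:

<syntaxhighlight lang="python">
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]

print(entropy(uniform), log2(4))   # 2.0 2.0 -- the maximum log2(n) is attained
print(entropy(skewed))             # about 1.357, strictly below the maximum
</syntaxhighlight>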
For a source that emits a sequence of <math>N</math> symbols that are [[independent and identically distributed]] (i.i.d.), the total entropy of the message is <math>N \cdot H</math> bits. If the source data symbols are identically distributed but not independent, the entropy of a message of length <math>N</math> will be less than <math>N \cdot H</math>.<ref name="CoverThomas_Ch2_Asymptotic">{{Cite book |last1=Cover |first1=Thomas M. |last2=Thomas |first2=Joy A. |title=Elements of Information Theory |edition=2nd |date=2006 |publisher=Wiley-Interscience |isbn=978-0-471-24195-9 |page=63}}</ref><ref name="MacKay_Ch4">{{Cite book |last=MacKay |first=David J.C. |title=Information Theory, Inference and Learning Algorithms |date=2003 |publisher=Cambridge University Press |isbn=978-0-521-64298-9 |page=75}}</ref> | |||
==== Units ==== | |||
The choice of the logarithmic base in the entropy formula determines the unit of entropy used:<ref name="MacKay_Ch2"/><ref name="Reza_Ch3"/> | |||
* A '''[[base-2 logarithm]]''' (as shown in the main formula) measures entropy in '''[[Bit|bits]]''' per symbol. This unit is also sometimes called the '''[[Shannon (unit)|shannon]]''' in honor of Claude Shannon.<ref name="CoverThomas_Ch2" /> | |||
* A '''[[natural logarithm]]''' (base ''[[e (mathematical constant)|e]]'') measures entropy in '''[[Nat (unit)|nats]]''' per symbol. This is often used in theoretical analysis as it avoids the need for scaling constants (like ln 2) in derivations.<ref name="Leung_Nats">{{Cite web |url=https://www.stat.purdue.edu/~kyle/InformationTheory.pdf |title=Information Theory and the Central Limit Theorem |last=Leung |first=K. |date=2011 |website=Purdue University |access-date=2025-07-06 |quote=The unit of information is determined by the base of the logarithm. If the base is 2, the unit is bits. If the base is e, the unit is nats.}}</ref>
* Other bases are also possible. A '''base-10 logarithm''' measures entropy in decimal digits, or '''[[Hartley (unit)|hartleys]]''', per symbol.<ref name="Arndt_InfoTheory" /> A '''base-256 logarithm''' measures entropy in '''[[Byte|bytes]]''' per symbol, since {{nowrap|1=2<sup>8</sup> = 256}}.<ref name="MacKay_Ch2_Bytes">{{Cite book |last=MacKay |first=David J.C. |title=Information Theory, Inference and Learning Algorithms |date=2003 |publisher=Cambridge University Press |isbn=978-0-521-64298-9 |page=30 |quote=If we use logarithms to the base ''b'', we are measuring information in units of log<sub>''b''</sub> 2 bits. We can call a quantity of information of log<sub>2</sub> 256 = 8 bits one byte.}}</ref> | |||
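These units differ only by constant factors, since {{math|1=log<sub>''b''</sub> ''x'' = ln ''x'' / ln ''b''}}. A minimal conversion sketch (illustrative only; the three-symbol source is invented):

<syntaxhighlight lang="python">
from math import log, log2, log10

p = [0.5, 0.25, 0.25]                        # an example source distribution

h_bits = -sum(q * log2(q) for q in p)        # shannons (bits)
h_nats = -sum(q * log(q) for q in p)         # nats (natural logarithm)
h_hartleys = -sum(q * log10(q) for q in p)   # hartleys (decimal digits)

print(h_bits)                          # 1.5
print(h_nats, h_bits * log(2))         # both about 1.0397
print(h_hartleys, h_bits * log10(2))   # both about 0.4515
</syntaxhighlight>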
==== Binary entropy function ====
The special case of information entropy for a random variable with two outcomes (a [[Bernoulli trial]]) is the '''binary entropy function'''. This is typically calculated using a base-2 logarithm, and its unit is the [[Shannon (unit)|shannon]].<ref name="CoverThomas_Binary">{{Cite book |last1=Cover |first1=Thomas M. |title=Elements of Information Theory |last2=Thomas |first2=Joy A. |date=2006 |publisher=Wiley-Interscience |isbn=978-0-471-24195-9 |edition=2nd |page=15}}</ref> If one outcome has probability {{mvar|p}}, the other has probability {{math|1 − ''p''}}. The entropy is given by:<ref name="MacKay_Binary">{{Cite book |last=MacKay |first=David J.C. |title=Information Theory, Inference and Learning Algorithms |date=2003 |publisher=Cambridge University Press |isbn=978-0-521-64298-9 |page=145}}</ref>
:<math>H_{\mathrm{b}}(p) = -p \log_2 p - (1-p)\log_2 (1-p)</math> | |||
This function is depicted in the plot shown above, reaching its maximum of 1 bit when {{math|1=''p'' = 0.5}}, corresponding to the highest uncertainty. | |||
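A short Python sketch of the binary entropy function (illustrative only), showing the maximum of one shannon at {{math|1=''p'' = 0.5}}:

<syntaxhighlight lang="python">
from math import log2

def binary_entropy(p):
    """H_b(p) in shannons, with the convention 0 log 0 = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in [0.0, 0.1, 0.25, 0.5, 0.75, 1.0]:
    print(p, round(binary_entropy(p), 4))   # peaks at 1.0 when p = 0.5
</syntaxhighlight>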
===Joint entropy===
where {{math|SI}} (''S''pecific mutual Information) is the [[pointwise mutual information]].
A basic property of the mutual information is that:
: <math>I(X;Y) = H(X) - H(X|Y).\,</math> | : <math>I(X;Y) = H(X) - H(X|Y).\,</math> | ||
That is, knowing <math display="inline">Y</math>, we can save an average of {{math|''I''(''X''; ''Y'')}} bits in encoding ''<math>X</math>'' compared to not knowing <math>Y</math>.
Mutual information is [[symmetric function|symmetric]]:
: <math>I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y).\,</math> | : <math>I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y).\,</math> | ||
Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the [[posterior probability|posterior probability distribution]] of ''<math>X</math>'' given the value of <math display="inline">Y</math> and the [[prior probability|prior distribution]] on <math>X</math>:
: <math>I(X;Y) = \mathbb E_{p(y)} [D_{\mathrm{KL}}( p(X|Y=y) \| p(X) )].</math> | : <math>I(X;Y) = \mathbb E_{p(y)} [D_{\mathrm{KL}}( p(X|Y=y) \| p(X) )].</math> | ||
In other words, this is a measure of how much, on the average, the probability distribution on <math>X</math> will change if we are given the value of ''<math display="inline">Y</math>''. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:
: <math>I(X; Y) = D_{\mathrm{KL}}(p(X,Y) \| p(X)p(Y)).</math> | : <math>I(X; Y) = D_{\mathrm{KL}}(p(X,Y) \| p(X)p(Y)).</math> | ||
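These identities can be verified numerically. The following sketch (an illustrative toy example; the joint probability table is invented for the demonstration) computes {{math|''I''(''X''; ''Y'')}} both as the divergence from the product of the marginals and as {{math|''H''(''X'') − ''H''(''X''{{pipe}}''Y'')}}:

<syntaxhighlight lang="python">
from math import log2

# A toy joint distribution p(x, y) over two binary variables (invented values).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

p_x = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}

# I(X;Y) as the KL divergence between the joint and the product of the marginals.
mi = sum(p * log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items())

# The same quantity as H(X) - H(X|Y).
h_x = -sum(p * log2(p) for p in p_x.values())
h_x_given_y = -sum(p * log2(p / p_y[y]) for (_, y), p in joint.items())

print(mi, h_x - h_x_given_y)   # both about 0.278 bits
</syntaxhighlight>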
Although it is sometimes used as a 'distance metric', KL divergence is not a true [[Metric (mathematics)|metric]] since it is not symmetric and does not satisfy the [[triangle inequality]] (making it a semi-quasimetric).
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number ''<math>X</math>'' is about to be drawn randomly from a discrete set with probability distribution {{tmath|p(x)}}. If Alice knows the true distribution {{tmath|p(x)}}, while Bob believes (has a [[prior probability|prior]]) that the distribution is {{tmath|q(x)}}, then Bob will be more [[Information content|surprised]] than Alice, on average, upon seeing the value of <math>X</math>. The KL divergence is the (objective) expected value of Bob's (subjective) [[Information content|surprisal]] minus Alice's surprisal, measured in bits if the ''log'' is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
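A minimal sketch of this situation (the distributions standing in for Alice's {{tmath|p(x)}} and Bob's prior {{tmath|q(x)}} are invented for illustration), showing that the divergence is non-negative and not symmetric:

<syntaxhighlight lang="python">
from math import log2

def kl_divergence(p, q):
    """D_KL(p || q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # the true distribution known to Alice
q = [0.8, 0.1, 0.1]     # Bob's prior belief

print(kl_divergence(p, q))   # about 0.32 bits of "unnecessary surprise"
print(kl_divergence(q, p))   # about 0.28 bits -- the divergence is not symmetric
</syntaxhighlight>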
===Directed information===
''[[Directed information]]'', <math>I(X^n\to Y^n) </math>, is an information theory measure that quantifies the [[information]] flow from the random process <math>X^n = \{X_1,X_2,\dots,X_n\}</math> to the random process <math>Y^n = \{Y_1,Y_2,\dots,Y_n\}</math>. The term ''directed information'' was coined by [[James Massey]] and is defined as:
:<math>I(X^n\to Y^n) \ \triangleq \ \sum_{i=1}^n I(X^i;Y_i|Y^{i-1})</math>,
where <math>I(X^{i};Y_i|Y^{i-1})</math> is the [[conditional mutual information]] <math>I(X_1,X_2,...,X_{i};Y_i|Y_1,Y_2,...,Y_{i-1})</math>.
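The definition can be evaluated by brute force for a short toy example. In the sketch below (illustrative only; the joint distribution describes a noiseless channel with <math>Y_i = X_i</math> and <math>n = 2</math>), each conditional mutual information term is computed directly from the joint probability table:

<syntaxhighlight lang="python">
from collections import defaultdict
from itertools import product
from math import log2

# Toy joint pmf over (x1, x2, y1, y2): X1, X2 are independent fair bits and the
# channel simply copies its input, so Y1 = X1 and Y2 = X2.
joint = {(x1, x2, x1, x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def marginal(pmf, idx):
    """Marginal pmf over the coordinates listed in idx."""
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in idx)] += p
    return out

def cond_mi(pmf, a_idx, b_idx, c_idx):
    """I(A; B | C) in bits, by summation over the joint pmf."""
    p_abc = marginal(pmf, a_idx + b_idx + c_idx)
    p_ac = marginal(pmf, a_idx + c_idx)
    p_bc = marginal(pmf, b_idx + c_idx)
    p_c = marginal(pmf, c_idx)
    total = 0.0
    for outcome, p in pmf.items():
        a = tuple(outcome[i] for i in a_idx)
        b = tuple(outcome[i] for i in b_idx)
        c = tuple(outcome[i] for i in c_idx)
        total += p * log2(p_abc[a + b + c] * p_c[c] / (p_ac[a + c] * p_bc[b + c]))
    return total

# I(X^2 -> Y^2) = I(X^1; Y_1) + I(X^2; Y_2 | Y^1); coordinates 0,1 are X, 2,3 are Y.
directed = cond_mi(joint, (0,), (2,), ()) + cond_mi(joint, (0, 1), (3,), (2,))
print(directed)   # 2.0 bits for this noiseless identity channel
</syntaxhighlight>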
In contrast to ''mutual'' information, ''directed'' information is not symmetric. <math>I(X^n\to Y^n)</math> measures the information bits that are transmitted causally{{clarify|date=November 2024|reason=definition of causal transmission?}} from <math>X^n</math> to <math>Y^n</math>. Directed information has many applications in problems where [[causality]] plays an important role, such as [[channel capacity|capacity of channel]] with feedback,<ref name=massey>{{citation |last1=Massey|first1=James|contribution=Causality, Feedback And Directed Information|date=1990|title=Proc. 1990 Intl. Symp. on Info. Th. and its Applications|citeseerx=10.1.1.36.5688}}</ref><ref>{{cite journal|last1=Permuter|first1=Haim Henry|last2=Weissman|first2=Tsachy|last3=Goldsmith|first3=Andrea J.|title=Finite State Channels With Time-Invariant Deterministic Feedback|journal=IEEE Transactions on Information Theory|date=February 2009|volume=55|issue=2|pages=644–662|doi=10.1109/TIT.2008.2009849|arxiv=cs/0608070|bibcode=2009ITIT...55..644P |s2cid=13178}}</ref> capacity of discrete [[memoryless]] networks with feedback,<ref>{{cite journal|last1=Kramer|first1=G.|title=Capacity results for the discrete memoryless network|journal=IEEE Transactions on Information Theory|date=January 2003|volume=49|issue=1|pages=4–21|doi=10.1109/TIT.2002.806135 |bibcode=2003ITIT...49....4K }}</ref> [[sports gambling|gambling]] with causal side information,<ref>{{cite journal|last1=Permuter|first1=Haim H.|last2=Kim|first2=Young-Han|last3=Weissman|first3=Tsachy|title=Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing|journal=IEEE Transactions on Information Theory|date=June 2011|volume=57|issue=6|pages=3248–3259|doi=10.1109/TIT.2011.2136270|arxiv=0912.4872|bibcode=2011ITIT...57.3248P |s2cid=11722596}}</ref> [[Data compression|compression]] with causal side information,<ref>{{cite journal|last1=Simeone|first1=Osvaldo|last2=Permuter|first2=Haim Henri|title=Source Coding When the Side Information May Be Delayed|journal=IEEE Transactions on Information Theory|date=June 2013|volume=59|issue=6|pages=3607–3618|doi=10.1109/TIT.2013.2248192|arxiv=1109.1293|bibcode=2013ITIT...59.3607S |s2cid=3211485}}</ref>
[[real-time control]] communication settings,<ref>{{cite journal|last1=Charalambous|first1=Charalambos D.|last2=Stavrou|first2=Photios A.|title=Directed Information on Abstract Spaces: Properties and Variational Equalities|journal=IEEE Transactions on Information Theory|date=August 2016|volume=62|issue=11|pages=6019–6052|doi=10.1109/TIT.2016.2604846|arxiv=1302.3971|bibcode=2016ITIT...62.6019C |s2cid=8107565}}</ref><ref>{{cite journal |last1=Tanaka |first1=Takashi |last2=Esfahani |first2=Peyman Mohajerin |last3=Mitter |first3=Sanjoy K. |title=LQG Control With Minimum Directed Information: Semidefinite Programming Approach |journal=IEEE Transactions on Automatic Control |date=January 2018 |volume=63 |issue=1 |pages=37–52 |doi=10.1109/TAC.2017.2709618|s2cid=1401958 |url=http://resolver.tudelft.nl/uuid:d9db1c11-fbfd-4c0c-b66f-f341b49fa61a |arxiv=1510.04214 |bibcode=2018ITAC...63...37T |via=TU Delft Repositories |url-status=live |archive-url=https://web.archive.org/web/20240412014101/https://repository.tudelft.nl/islandora/object/uuid:d9db1c11-fbfd-4c0c-b66f-f341b49fa61a/datastream/OBJ/download |archive-date= Apr 12, 2024 }}</ref> and in statistical physics.<ref>{{cite journal |last1=Vinkler |first1=Dror A |last2=Permuter |first2=Haim H |last3=Merhav |first3=Neri |title=Analogy between gambling and measurement-based work extraction |journal=Journal of Statistical Mechanics: Theory and Experiment |date=20 April 2016 |volume=2016 |issue=4 |article-number=043403 |doi=10.1088/1742-5468/2016/04/043403|arxiv=1404.6788 |bibcode=2016JSMTE..04.3403V |s2cid=124719237 }}</ref>
===Other quantities===
* [[Data compression]] (source coding): There are two formulations for the compression problem:
** [[Lossless data compression]]: the data must be reconstructed exactly (a worked Huffman-coding sketch follows this list);
** [[Lossy data compression]]: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called ''[[rate–distortion theory]]''.
* [[Error-correcting code]]s (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
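As an illustration of the lossless side (referenced in the list above), the sketch below builds a [[Huffman coding|Huffman code]] for an invented four-symbol source and compares its average codeword length with the source entropy; for this dyadic distribution the two coincide:

<syntaxhighlight lang="python">
import heapq
from math import log2

def huffman_code(probs):
    """Build a binary prefix code (symbol -> bitstring) from symbol probabilities."""
    heap = [[p, i, [sym, ""]] for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)                      # tie-breaker so equal weights never compare lists
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # prepend a bit to every symbol in the subtree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], counter] + lo[2:] + hi[2:])
        counter += 1
    return {sym: code for sym, code in heap[0][2:]}

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
avg_length = sum(p * len(code[s]) for s, p in probs.items())
entropy = -sum(p * log2(p) for p in probs.values())
print(code)                   # e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
print(avg_length, entropy)    # 1.75 1.75 -- average length meets the entropy bound here
</syntaxhighlight>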
\begin{array}{ |c| }\hline \text{Encoder} \\ f_n \\ \hline\end{array} \xrightarrow[\mathrm{Encoded \atop sequence}]{X^n} \begin{array}{ |c| }\hline \text{Channel} \\ p(y|x) \\ \hline\end{array} \xrightarrow[\mathrm{Received \atop sequence}]{Y^n} \begin{array}{ |c| }\hline \text{Decoder} \\ g_n \\ \hline\end{array} \xrightarrow[\mathrm{Estimated \atop message}]{\hat W}</math> | \begin{array}{ |c| }\hline \text{Encoder} \\ f_n \\ \hline\end{array} \xrightarrow[\mathrm{Encoded \atop sequence}]{X^n} \begin{array}{ |c| }\hline \text{Channel} \\ p(y|x) \\ \hline\end{array} \xrightarrow[\mathrm{Received \atop sequence}]{Y^n} \begin{array}{ |c| }\hline \text{Decoder} \\ g_n \\ \hline\end{array} \xrightarrow[\mathrm{Estimated \atop message}]{\hat W}</math> | ||
Here <math>X</math> represents the space of messages transmitted, and <math display="inline">Y</math> the space of messages received during a unit time over our channel. Let {{math|''p''(''y''{{pipe}}''x'')}} be the [[conditional probability]] distribution function of ''<math display="inline">Y</math>'' given <math>X</math>. We will consider {{math|''p''(''y''{{pipe}}''x'')}} to be an inherent fixed property of our communications channel (representing the nature of the ''[[Signal noise|noise]]'' of our channel). Then the joint distribution of ''<math>X</math>'' and ''<math display="inline">Y</math>'' is completely determined by our channel and by our choice of {{math|''f''(''x'')}}, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the ''[[Signal (electrical engineering)|signal]]'', we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the {{em|channel capacity}} and is given by:
:<math> C = \max_{f} I(X;Y).\! </math> | :<math> C = \max_{f} I(X;Y).\! </math> | ||
This capacity has the following property related to communicating at information rate ''R'' (where ''R'' is usually bits per symbol). For any information rate ''R'' < ''C'' and coding error ''ε'' > 0, for large enough ''N'', there exists a code of length ''N'' and rate ≥ ''R'' and a decoding algorithm, such that the maximal probability of block error is ≤ ''ε''; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate ''R'' > ''C'', it is impossible to transmit with arbitrarily small block error.
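For a concrete case, the capacity of a [[binary symmetric channel]] with crossover probability <math>\varepsilon</math> can be approximated by maximizing <math>I(X;Y)</math> over the input distribution. The sketch below (illustrative only; it uses a simple grid search rather than the [[Blahut–Arimoto algorithm]]) agrees with the well-known closed form <math>C = 1 - H_\text{b}(\varepsilon)</math>:

<syntaxhighlight lang="python">
from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_information_bsc(q, eps):
    """I(X;Y) when P(X=1) = q and the channel flips each bit with probability eps."""
    p_y1 = q * (1 - eps) + (1 - q) * eps                # output distribution
    return binary_entropy(p_y1) - binary_entropy(eps)   # H(Y) - H(Y|X)

eps = 0.1
capacity = max(mutual_information_bsc(q / 1000, eps) for q in range(1001))
print(capacity)                  # about 0.531, attained at the uniform input
print(1 - binary_entropy(eps))   # the closed form, also about 0.531
</syntaxhighlight>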
| Line 187: | Line 205: | ||
In practice many channels have memory. Namely, at time <math> i </math> the channel is given by the conditional probability<math> P(y_i|x_i,x_{i-1},x_{i-2},...,x_1,y_{i-1},y_{i-2},...,y_1) </math>. | In practice many channels have memory. Namely, at time <math> i </math> the channel is given by the conditional probability<math> P(y_i|x_i,x_{i-1},x_{i-2},...,x_1,y_{i-1},y_{i-2},...,y_1) </math>. | ||
It is often more convenient to use the notation <math> x^i=(x_i,x_{i-1},x_{i-2},...,x_1) </math>, and the channel becomes <math> P(y_i|x^i,y^{i-1}) </math>. | It is often more convenient to use the notation <math> x^i=(x_i,x_{i-1},x_{i-2},...,x_1) </math>, and the channel becomes <math> P(y_i|x^i,y^{i-1}) </math>. | ||
In such a case the capacity is given by the [[mutual information]] rate when there is no feedback available and the [[Directed information]] rate in the case that either there is feedback or not<ref name=massey/><ref>{{cite journal |last1=Permuter |first1=Haim Henry |last2=Weissman |first2=Tsachy |last3=Goldsmith |first3=Andrea J. |title=Finite State Channels With Time-Invariant Deterministic Feedback |journal=IEEE Transactions on Information Theory |date=February 2009 |volume=55 |issue=2 |pages=644–662 |doi=10.1109/TIT.2008.2009849|arxiv=cs/0608070 |s2cid=13178 }}</ref> (if there is no feedback the directed information equals the mutual information). | In such a case the capacity is given by the [[mutual information]] rate when there is no feedback available and the [[Directed information]] rate in the case that either there is feedback or not<ref name=massey/><ref>{{cite journal |last1=Permuter |first1=Haim Henry |last2=Weissman |first2=Tsachy |last3=Goldsmith |first3=Andrea J. |title=Finite State Channels With Time-Invariant Deterministic Feedback |journal=IEEE Transactions on Information Theory |date=February 2009 |volume=55 |issue=2 |pages=644–662 |doi=10.1109/TIT.2008.2009849|arxiv=cs/0608070 |bibcode=2009ITIT...55..644P |s2cid=13178 }}</ref> (if there is no feedback the directed information equals the mutual information). | ||
===Fungible information=== | ===Fungible information=== | ||
| Line 210: | Line 228: | ||
===Semiotics=== | ===Semiotics=== | ||
[[Semiotics|Semioticians]] {{ill|Doede Nauta|nl}} and [[Winfried Nöth]] both considered [[Charles Sanders Peirce]] as having created a theory of information in his works on semiotics.<ref name="Nauta 1972">{{cite book |last1=Nauta |first1=Doede |title=The Meaning of Information |date=1972 |publisher=Mouton |location=The Hague |isbn= | [[Semiotics|Semioticians]] {{ill|Doede Nauta|nl}} and [[Winfried Nöth]] both considered [[Charles Sanders Peirce]] as having created a theory of information in his works on semiotics.<ref name="Nauta 1972">{{cite book |last1=Nauta |first1=Doede |title=The Meaning of Information |date=1972 |publisher=Mouton |location=The Hague |isbn=978-90-279-1996-0}}</ref>{{rp|171}}<ref name="Nöth 2012">{{cite journal |last1=Nöth |first1=Winfried |title=Charles S. Peirce's theory of information: a theory of the growth of symbols and of knowledge |journal=Cybernetics and Human Knowing |date=January 2012 |volume=19 |issue=1–2 |pages=137–161 |url=https://edisciplinas.usp.br/mod/resource/view.php?id=2311849}}</ref>{{rp|137}} Nauta defined semiotic information theory as the study of "''the internal processes of coding, filtering, and information processing.''"<ref name="Nauta 1972"/>{{rp|91}} | ||
Concepts from information theory such as redundancy and code control have been used by semioticians such as [[Umberto Eco]] and {{ill|Ferruccio Rossi-Landi|it}} to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.<ref>Nöth, Winfried (1981). "[https://kobra.uni-kassel.de/bitstream/handle/123456789/2014122246977/semi_2004_002.pdf?sequence=1&isAllowed=y Semiotics of ideology]". ''Semiotica'', Issue 148.</ref> | Concepts from information theory such as redundancy and code control have been used by semioticians such as [[Umberto Eco]] and {{ill|Ferruccio Rossi-Landi|it}} to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.<ref>Nöth, Winfried (1981). "[https://kobra.uni-kassel.de/bitstream/handle/123456789/2014122246977/semi_2004_002.pdf?sequence=1&isAllowed=y Semiotics of ideology]". ''Semiotica'', Issue 148.</ref> | ||
===Integrated process organization of neural information=== | ===Integrated process organization of neural information=== | ||
Quantitative information theoretic methods have been applied in [[cognitive science]] to analyze the integrated process organization of neural information in the context of the [[binding problem]] in [[cognitive neuroscience]].<ref>{{cite book|last=Maurer|first=H.|year=2021|title=Cognitive Science: Integrative Synchronization Mechanisms in Cognitive Neuroarchitectures of the Modern Connectionism|language=en|publisher=CRC Press|location=Boca Raton/FL|chapter=Chapter 10: Systematic Class of Information Based Architecture Types|isbn=978-1-351-04352-6|doi=10.1201/9781351043526}}</ref> In this context, either an information-theoretical measure, such as {{em|functional clusters}} ([[Gerald Edelman]] and [[Giulio Tononi]]'s functional clustering model and dynamic core hypothesis (DCH)<ref>{{cite book|last1=Edelman|first1=G.M.|first2=G.|last2=Tononi|year=2000|title=A Universe of Consciousness: How Matter Becomes Imagination|language=en|publisher=Basic Books|location=New York|isbn=978- | Quantitative information theoretic methods have been applied in [[cognitive science]] to analyze the integrated process organization of neural information in the context of the [[binding problem]] in [[cognitive neuroscience]].<ref>{{cite book|last=Maurer|first=H.|year=2021|title=Cognitive Science: Integrative Synchronization Mechanisms in Cognitive Neuroarchitectures of the Modern Connectionism|language=en|publisher=CRC Press|location=Boca Raton/FL|chapter=Chapter 10: Systematic Class of Information Based Architecture Types|isbn=978-1-351-04352-6|doi=10.1201/9781351043526}}</ref> In this context, either an information-theoretical measure, such as {{em|functional clusters}} ([[Gerald Edelman]] and [[Giulio Tononi]]'s functional clustering model and dynamic core hypothesis (DCH)<ref>{{cite book|last1=Edelman|first1=G.M.|first2=G.|last2=Tononi|year=2000|title=A Universe of Consciousness: How Matter Becomes Imagination|language=en|publisher=Basic Books|location=New York|isbn=978-0-465-01377-7}}</ref>) or {{em|effective information}} (Tononi's [[integrated information theory]] (IIT) of consciousness<ref>{{cite journal|last1=Tononi|first1=G.|first2=O.|last2=Sporns|year=2003|title=Measuring information integration|journal=BMC Neuroscience|language=en|volume=4|pages=1–20|article-number=31 |doi=10.1186/1471-2202-4-31|doi-access=free |pmid=14641936 |pmc=331407 }}</ref><ref>{{cite journal|last=Tononi|first=G.|year=2004a|title=An information integration theory of consciousness|journal=BMC Neuroscience|language=en|volume=5|pages=1–22|article-number=42 |doi=10.1186/1471-2202-5-42|doi-access=free |pmid=15522121 |pmc=543470 }}</ref><ref>{{cite book|last=Tononi|first=G.|year=2004b|chapter=Consciousness and the brain: theoretical aspects|editor1-first=G.|editor1-last=Adelman|editor2-first=B.|editor2-last=Smith|title=Encyclopedia of Neuroscience|language=en|edition=3rd|publisher=Elsevier|location=Amsterdam, Oxford|chapter-url=https://www.researchgate.net/publication/265238140|archive-url=https://web.archive.org/web/20231202031406/https://www.jsmf.org/meetings/2003/nov/consciousness_encyclopedia_2003.pdf|archive-date=2023-12-02|isbn=0-444-51432-5}}</ref>), is defined (on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations), or the measure of the minimization of free energy on the basis of statistical methods ([[Karl J. 
Friston]]'s [[free energy principle]] (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the [[Bayesian brain]] hypothesis<ref>{{cite journal|last1=Friston|first1=K.|first2=K.E.|last2=Stephan|year=2007|title=Free-energy and the brain|journal=Synthese|language=en|volume=159|issue=3 |pages=417–458|doi=10.1007/s11229-007-9237-y|pmid=19325932 |pmc=2660582 }}</ref><ref>{{cite journal|last=Friston|first=K.|year=2010|title=The free-energy principle: a unified brain theory|journal=Nature Reviews Neuroscience|language=en|volume=11|issue=2 |pages=127–138|doi=10.1038/nrn2787|pmid=20068583 }}</ref><ref>{{cite journal|last1=Friston|first1=K.|first2=M.|last2=Breakstear|first3=G.|last3=Deco|year=2012|title=Perception and self-organized instability|journal=Frontiers in Computational Neuroscience|language=en|volume=6|pages=1–19|doi=10.3389/fncom.2012.00044|doi-access=free |pmid=22783185 |pmc=3390798 }}</ref><ref>{{cite journal|last=Friston|first=K.|year=2013|title=Life as we know it|journal=Journal of the Royal Society Interface|language=en|volume=10|issue=86 |article-number=20130475|doi=10.1098/rsif.2013.0475|pmid=23825119 |pmc=3730701 }}</ref><ref>{{cite journal|last1=Kirchhoff|first1=M.|first2=T.|last2=Parr|first3=E.|last3=Palacios|first4=K.|last4=Friston|first5=J.|last5=Kiverstein|year=2018|title=The Markov blankets of life: autonomy, active inference and the free energy principle|journal=Journal of the Royal Society Interface|language=en|volume=15|issue=138 |article-number=20170792|doi=10.1098/rsif.2017.0792|pmid=29343629 |pmc=5805980 }}</ref>). | ||
===Miscellaneous applications=== | ===Miscellaneous applications=== | ||
Information theory also has applications in the [[search for extraterrestrial intelligence]],<ref>{{Cite journal |last1=Doyle |first1=Laurance R. |author-link=Laurance Doyle |last2=McCowan |first2=Brenda |author-link2=Brenda McCowan |last3=Johnston |first3=Simon |last4=Hanser |first4=Sean F. |date=February 2011 |title=Information theory, animal communication, and the search for extraterrestrial intelligence |journal=[[Acta Astronautica]] |language=en |volume=68 |issue=3–4 |pages=406–417 |doi=10.1016/j.actaastro.2009.11.018|bibcode=2011AcAau..68..406D }}</ref> [[black hole information paradox|black holes]],<ref>{{Cite journal |last=Bekenstein |first=Jacob D |date=2004 |title=Black holes and information theory |url=https://www.tandfonline.com/doi/abs/10.1080/00107510310001632523 |journal=Contemporary Physics |volume=45 |issue=1 |pages=31–43 |doi=10.1080/00107510310001632523 |arxiv=quant-ph/0311049 |bibcode=2004ConPh..45...31B |issn=0010-7514}}</ref> [[bioinformatics]],<ref>{{Cite journal |last=Vinga |first=Susana |date=2014-05-01 |title=Information theory applications for biological sequence analysis |url=https://academic.oup.com/bib/article/15/3/376/183705 |journal=Briefings in Bioinformatics |volume=15 |issue=3 |pages=376–389 |doi=10.1093/bib/bbt068 |issn=1467-5463 |pmc=7109941 |pmid=24058049}}</ref> and [[Gambling and information theory|gambling]].<ref>{{Citation |last=Thorp |first=Edward O. |title=The kelly criterion in blackjack sports betting, and the stock market* |date=2008-01-01 |work=Handbook of Asset and Liability Management |pages=385–428 |editor-last=Zenios |editor-first=S. A. |url=https://linkinghub.elsevier.com/retrieve/pii/B9780444532480500150 |access-date=2025-01-20 |place=San Diego |publisher=North-Holland |doi=10.1016/b978-044453248-0.50015-0 |isbn=978-0-444-53248-0 |editor2-last=Ziemba |editor2-first=W. T.}}</ref><ref>{{Cite journal |last=Haigh |first=John |date=2000 |title=The Kelly Criterion and Bet Comparisons in Spread Betting |url=https://rss.onlinelibrary.wiley.com/doi/10.1111/1467-9884.00251 |journal=Journal of the Royal Statistical Society, Series D (The Statistician) |language=en |volume=49 |issue=4 |pages=531–539 |doi=10.1111/1467-9884.00251 |issn=1467-9884}}</ref> | Information theory also has applications in the [[search for extraterrestrial intelligence]],<ref>{{Cite journal |last1=Doyle |first1=Laurance R. |author-link=Laurance Doyle |last2=McCowan |first2=Brenda |author-link2=Brenda McCowan |last3=Johnston |first3=Simon |last4=Hanser |first4=Sean F. 
|date=February 2011 |title=Information theory, animal communication, and the search for extraterrestrial intelligence |journal=[[Acta Astronautica]] |language=en |volume=68 |issue=3–4 |pages=406–417 |doi=10.1016/j.actaastro.2009.11.018|bibcode=2011AcAau..68..406D }}</ref> [[black hole information paradox|black holes]],<ref>{{Cite journal |last=Bekenstein |first=Jacob D |date=2004 |title=Black holes and information theory |url=https://www.tandfonline.com/doi/abs/10.1080/00107510310001632523 |journal=Contemporary Physics |volume=45 |issue=1 |pages=31–43 |doi=10.1080/00107510310001632523 |arxiv=quant-ph/0311049 |bibcode=2004ConPh..45...31B |issn=0010-7514}}</ref> [[bioinformatics]],<ref>{{Cite journal |last=Vinga |first=Susana |date=2014-05-01 |title=Information theory applications for biological sequence analysis |url=https://academic.oup.com/bib/article/15/3/376/183705 |journal=Briefings in Bioinformatics |volume=15 |issue=3 |pages=376–389 |doi=10.1093/bib/bbt068 |issn=1467-5463 |pmc=7109941 |pmid=24058049 |archive-date=2022-02-12 |access-date=2025-01-20 |archive-url=https://web.archive.org/web/20220212065704/https://academic.oup.com/bib/article/15/3/376/183705 |url-status=live }}</ref> and [[Gambling and information theory|gambling]].<ref>{{Citation |last=Thorp |first=Edward O. |title=The kelly criterion in blackjack sports betting, and the stock market* |date=2008-01-01 |work=Handbook of Asset and Liability Management |pages=385–428 |editor-last=Zenios |editor-first=S. A. |url=https://linkinghub.elsevier.com/retrieve/pii/B9780444532480500150 |access-date=2025-01-20 |place=San Diego |publisher=North-Holland |doi=10.1016/b978-044453248-0.50015-0 |isbn=978-0-444-53248-0 |editor2-last=Ziemba |editor2-first=W. T. |archive-date=2025-01-25 |archive-url=https://web.archive.org/web/20250125115443/https://linkinghub.elsevier.com/retrieve/pii/B9780444532480500150 |url-status=live |url-access=subscription }}</ref><ref>{{Cite journal |last=Haigh |first=John |date=2000 |title=The Kelly Criterion and Bet Comparisons in Spread Betting |url=https://rss.onlinelibrary.wiley.com/doi/10.1111/1467-9884.00251 |journal=Journal of the Royal Statistical Society, Series D (The Statistician) |language=en |volume=49 |issue=4 |pages=531–539 |doi=10.1111/1467-9884.00251 |issn=1467-9884|url-access=subscription }}</ref> | ||
==See also== | ==See also== | ||
| Line 311: | Line 329: | ||
===The classic work=== | ===The classic work=== | ||
{{refbegin}} | {{refbegin}} | ||
* [[Claude Elwood Shannon|Shannon, C.E.]] (1948), "[[A Mathematical Theory of Communication]]", ''Bell System Technical Journal'', 27, pp. 379–423 & 623–656, July & October, 1948. [http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf PDF.] <br />[https://web.archive.org/web/20150409204946/http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html Notes and other formats.] | * [[Claude Elwood Shannon|Shannon, C.E.]] (1948), "[[A Mathematical Theory of Communication]]", ''Bell System Technical Journal'', 27, pp. 379–423 & 623–656, July & October, 1948. [http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf PDF.] {{Webarchive|url=https://web.archive.org/web/20190215102750/http://www.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf |date=2019-02-15 }} <br />[https://web.archive.org/web/20150409204946/http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html Notes and other formats.] | ||
* R.V.L. Hartley, [http://www.dotrose.com/etext/90_Miscellaneous/transmission_of_information_1928b.pdf "Transmission of Information"], ''Bell System Technical Journal'', July 1928 | * R.V.L. Hartley, [http://www.dotrose.com/etext/90_Miscellaneous/transmission_of_information_1928b.pdf "Transmission of Information"] {{Webarchive|url=https://web.archive.org/web/20111004200852/http://www.dotrose.com/etext/90_Miscellaneous/transmission_of_information_1928b.pdf |date=2011-10-04 }}, ''Bell System Technical Journal'', July 1928 | ||
* [[Andrey Kolmogorov]] (1968), "[https://www.tandfonline.com/doi/pdf/10.1080/00207166808803030 Three approaches to the quantitative definition of information]" in ''[[International Journal of Computer Mathematics]]'', 2, pp. 157–168. | * [[Andrey Kolmogorov]] (1968), "[https://www.tandfonline.com/doi/pdf/10.1080/00207166808803030 Three approaches to the quantitative definition of information]" in ''[[International Journal of Computer Mathematics]]'', 2, pp. 157–168. | ||
{{refend}} | {{refend}} | ||
| Line 318: | Line 336: | ||
===Other journal articles=== | ===Other journal articles=== | ||
{{refbegin|}} | {{refbegin|}} | ||
* J. L. Kelly Jr., [http://www.princeton.edu/~wbialek/rome/refs/kelly_56.pdf Princeton], "A New Interpretation of Information Rate" ''Bell System Technical Journal'', Vol. 35, July 1956, pp. 917–26. | * J. L. Kelly Jr., [http://www.princeton.edu/~wbialek/rome/refs/kelly_56.pdf Princeton] {{Webarchive|url=https://web.archive.org/web/20200801180142/http://www.princeton.edu/~wbialek/rome/refs/kelly_56.pdf |date=2020-08-01 }}, "A New Interpretation of Information Rate" ''Bell System Technical Journal'', Vol. 35, July 1956, pp. 917–26. | ||
* R. Landauer, [https://archive.today/20071016185253/http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=615478 IEEE.org], "Information is Physical" ''Proc. Workshop on Physics and Computation PhysComp'92'' (IEEE Comp. Sci.Press, Los Alamitos, 1993) pp. 1–4. | * R. Landauer, [https://archive.today/20071016185253/http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=615478 IEEE.org], "Information is Physical" ''Proc. Workshop on Physics and Computation PhysComp'92'' (IEEE Comp. Sci.Press, Los Alamitos, 1993) pp. 1–4. | ||
* {{cite journal|last1=Landauer|first1=R.|year=1961|title=Irreversibility and Heat Generation in the Computing Process|url=http://www.research.ibm.com/journal/rd/441/landauerii.pdf|journal=IBM J. Res. Dev.|volume=5|issue=3|pages=183–191|doi=10.1147/rd.53.0183}} | * {{cite journal|last1=Landauer|first1=R.|year=1961|title=Irreversibility and Heat Generation in the Computing Process|url=http://www.research.ibm.com/journal/rd/441/landauerii.pdf|journal=IBM J. Res. Dev.|volume=5|issue=3|pages=183–191|doi=10.1147/rd.53.0183|archive-date=2009-03-27|access-date=2005-12-01|archive-url=https://web.archive.org/web/20090327151835/http://www.research.ibm.com/journal/rd/441/landauerii.pdf|url-status=live}} | ||
* {{cite arXiv|last1=Timme|first1=Nicholas|last2=Alford|first2=Wesley|last3=Flecker|first3=Benjamin|last4=Beggs|first4=John M.|date=2012|title=Multivariate information measures: an experimentalist's perspective|eprint=1111.6857|class=cs.IT}} | * {{cite arXiv|last1=Timme|first1=Nicholas|last2=Alford|first2=Wesley|last3=Flecker|first3=Benjamin|last4=Beggs|first4=John M.|date=2012|title=Multivariate information measures: an experimentalist's perspective|eprint=1111.6857|class=cs.IT}} | ||
{{refend}} | {{refend}} | ||
| Line 328: | Line 346: | ||
* Alajaji, F. and Chen, P.N. An Introduction to Single-User Information Theory. Singapore: Springer, 2018. {{isbn|978-981-10-8000-5}} | * Alajaji, F. and Chen, P.N. An Introduction to Single-User Information Theory. Singapore: Springer, 2018. {{isbn|978-981-10-8000-5}} | ||
* Arndt, C. ''Information Measures, Information and its Description in Science and Engineering'' (Springer Series: Signals and Communication Technology), 2004, {{isbn|978-3-540-40855-0}} | * Arndt, C. ''Information Measures, Information and its Description in Science and Engineering'' (Springer Series: Signals and Communication Technology), 2004, {{isbn|978-3-540-40855-0}} | ||
* {{cite book | title = Information Theory | first = Robert B. | last = Ash | location = New York | publisher = Dover Publications, Inc. | orig- | * {{cite book | title = Information Theory | first = Robert B. | last = Ash | location = New York | publisher = Dover Publications, Inc. | orig-date = 1965 | year = 1990 | isbn = 0-486-66521-6 | url = https://books.google.com/books?id=ngZhvUfF0UIC&q=intitle:information+intitle:theory+inauthor:ash+conditional+uncertainty&pg=PA16 }} | ||
* [[Gallager, R]]. ''Information Theory and Reliable Communication.'' New York: John Wiley and Sons, 1968. {{isbn|0-471-29048-3}} | * [[Gallager, R]]. ''Information Theory and Reliable Communication.'' New York: John Wiley and Sons, 1968. {{isbn|0-471-29048-3}} | ||
* Goldman, S. ''Information Theory''. New York: Prentice Hall, 1953. New York: Dover 1968 {{isbn|0-486-62209-6}}, 2005 {{isbn|0-486-44271-3}} | * Goldman, S. ''Information Theory''. New York: Prentice Hall, 1953. New York: Dover 1968 {{isbn|0-486-62209-6}}, 2005 {{isbn|0-486-44271-3}} | ||
* {{cite book |last1=Cover |first1=Thomas |author-link1=Thomas M. Cover |last2=Thomas |first2=Joy A. |title=Elements of information theory |edition=2nd |location=New York |publisher=[[Wiley-Interscience]] |date=2006 |isbn=0-471-24195-4}} | * {{cite book |last1=Cover |first1=Thomas |author-link1=Thomas M. Cover |last2=Thomas |first2=Joy A. |title=Elements of information theory |edition=2nd |location=New York |publisher=[[Wiley-Interscience]] |date=2006 |isbn=0-471-24195-4}} | ||
* [[Csiszar, I]], Korner, J. ''Information Theory: Coding Theorems for Discrete Memoryless Systems'' Akademiai Kiado: 2nd edition, 1997. {{isbn|963-05-7440-3}} | * [[Csiszar, I]], Korner, J. ''Information Theory: Coding Theorems for Discrete Memoryless Systems'' Akademiai Kiado: 2nd edition, 1997. {{isbn|963-05-7440-3}} | ||
* [[David J. C. MacKay|MacKay, David J. C.]] ''[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html Information Theory, Inference, and Learning Algorithms]'' Cambridge: Cambridge University Press, 2003. {{isbn|0-521-64298-1}} | * [[David J. C. MacKay|MacKay, David J. C.]] ''[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html Information Theory, Inference, and Learning Algorithms] {{Webarchive|url=https://web.archive.org/web/20160217105359/http://www.inference.phy.cam.ac.uk/mackay/itila/book.html |date=2016-02-17 }}'' Cambridge: Cambridge University Press, 2003. {{isbn|0-521-64298-1}} | ||
* Mansuripur, M. ''Introduction to Information Theory''. New York: Prentice Hall, 1987. {{isbn|0-13-484668-0}} | * Mansuripur, M. ''Introduction to Information Theory''. New York: Prentice Hall, 1987. {{isbn|0-13-484668-0}} | ||
* [[Robert McEliece|McEliece, R]]. ''The Theory of Information and Coding''. Cambridge, 2002. {{isbn|978-0521831857}} | * [[Robert McEliece|McEliece, R]]. ''The Theory of Information and Coding''. Cambridge, 2002. {{isbn|978-0521831857}} | ||
* [[John R. Pierce|Pierce, JR]]. "An introduction to information theory: symbols, signals and noise". Dover (2nd Edition). 1961 (reprinted by Dover 1980). | * [[John R. Pierce|Pierce, JR]]. "An introduction to information theory: symbols, signals and noise". Dover (2nd Edition). 1961 (reprinted by Dover 1980). | ||
* {{cite book | title = An Introduction to Information Theory | first = Fazlollah M. | last = Reza | author-link=Fazlollah Reza| publisher = Dover Publications, Inc. | location = New York | orig- | * {{cite book | title = An Introduction to Information Theory | first = Fazlollah M. | last = Reza | author-link = Fazlollah Reza | publisher = Dover Publications, Inc. | location = New York | orig-date = 1961 | year = 1994 | isbn = 0-486-68210-2 | url = https://books.google.com/books?id=RtzpRAiX6OgC&q=intitle:%22An+Introduction+to+Information+Theory%22++%22entropy+of+a+simple+source%22&pg=PA8 }} | ||
* {{cite book |last1=Shannon |first1=Claude |author-link1=Claude Shannon |last2=Weaver |first2=Warren |author-link2=Warren Weaver |date=1949 |title=The Mathematical Theory of Communication |url=http://monoskop.org/images/b/be/Shannon_Claude_E_Weaver_Warren_The_Mathematical_Theory_of_Communication_1963.pdf |location=[[Urbana, Illinois]] |publisher=[[University of Illinois Press]] |lccn=49-11922 |isbn=0-252-72548-4}} | * {{cite book |last1=Shannon |first1=Claude |author-link1=Claude Shannon |last2=Weaver |first2=Warren |author-link2=Warren Weaver |date=1949 |title=The Mathematical Theory of Communication |url=http://monoskop.org/images/b/be/Shannon_Claude_E_Weaver_Warren_The_Mathematical_Theory_of_Communication_1963.pdf |location=[[Urbana, Illinois]] |publisher=[[University of Illinois Press]] |lccn=49-11922 |isbn=0-252-72548-4 }} | ||
* Stone, JV. Chapter 1 of book [https://jamesstone.sites.sheffield.ac.uk/books/information-theory-2nd-edition "Information Theory: A Tutorial Introduction"], University of Sheffield, England, 2014. {{isbn|978-0956372857}}. | * Stone, JV. Chapter 1 of book [https://jamesstone.sites.sheffield.ac.uk/books/information-theory-2nd-edition "Information Theory: A Tutorial Introduction"], University of Sheffield, England, 2014. {{isbn|978-0956372857}}. | ||
* Yeung, RW. ''[http://iest2.ie.cuhk.edu.hk/~whyeung/book/ A First Course in Information Theory]'' Kluwer Academic/Plenum Publishers, 2002. {{isbn|0-306-46791-7}}. | * Yeung, RW. ''[http://iest2.ie.cuhk.edu.hk/~whyeung/book/ A First Course in Information Theory] {{Webarchive|url=https://web.archive.org/web/20060615074602/http://iest2.ie.cuhk.edu.hk/~whyeung/book/ |date=2006-06-15 }}'' Kluwer Academic/Plenum Publishers, 2002. {{isbn|0-306-46791-7}}. | ||
* Yeung, RW. ''[http://iest2.ie.cuhk.edu.hk/~whyeung/book2/ Information Theory and Network Coding]'' Springer 2008, 2002. {{isbn|978-0-387-79233-0}} | * Yeung, RW. ''[http://iest2.ie.cuhk.edu.hk/~whyeung/book2/ Information Theory and Network Coding]'' Springer 2008, 2002. {{isbn|978-0-387-79233-0}} | ||
{{refend}} | {{refend}} | ||
| Line 364: | Line 382: | ||
* {{SpringerEOM |title=Information |id=p/i051040}} | * {{SpringerEOM |title=Information |id=p/i051040}} | ||
* Lambert F. L. (1999), "[http://jchemed.chem.wisc.edu/Journal/Issues/1999/Oct/abs1385.html Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!]", ''Journal of Chemical Education'' | * Lambert F. L. (1999), "[http://jchemed.chem.wisc.edu/Journal/Issues/1999/Oct/abs1385.html Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!]", ''Journal of Chemical Education'' | ||
* [http://www.itsoc.org/ IEEE Information Theory Society] and [https://www.itsoc.org/resources/surveys ITSOC Monographs, Surveys, and Reviews] {{Webarchive|url=https://web.archive.org/web/20180612144016/https://www.itsoc.org/resources/surveys |date=2018-06-12 }} | * [http://www.itsoc.org/ IEEE Information Theory Society] {{Webarchive|url=https://web.archive.org/web/20190801085524/http://www.itsoc.org/ |date=2019-08-01 }} and [https://www.itsoc.org/resources/surveys ITSOC Monographs, Surveys, and Reviews] {{Webarchive|url=https://web.archive.org/web/20180612144016/https://www.itsoc.org/resources/surveys |date=2018-06-12 }} | ||
{{Cybernetics}} | {{Cybernetics}} | ||
Latest revision as of 15:49, 13 November 2025
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s,[1] though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering.[2][3]
A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.
Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space,[4] the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence.[5][6][3] The theory has also found applications in other areas, including statistical inference,[7] cryptography, neurobiology,[8] perception,[9] signal processing,[2] linguistics, the evolution[10] and function[11] of molecular codes (bioinformatics), thermal physics,[12] molecular dynamics,[13] black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection,[14] pattern recognition, anomaly detection,[15] the analysis of music,[16][17] art creation,[18] imaging system design,[19] study of outer space,[20] the dimensionality of space,[21] and epistemology.[22]
Overview
Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.[8]
Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.[23][24]
A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis,[25] such as the unit ban.
Historical background
The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor. Shannon came to be known as the "father of information theory".[26][27][28] Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush.[28]
Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.
Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.
In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion:[29]
- "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the ideas of:
- The information entropy and redundancy of a source, and its relevance through the source coding theorem;
- The mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
- The practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as
- The bit, a new way of seeing the most fundamental unit of information.
Quantities of information
Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy quantifies the amount of information contained in a single random variable.[30]
Another useful concept is mutual information defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.
The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.
In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base.
Entropy of an information source
Based on the probability mass function of a source, the Shannon entropy H, in units of bits per symbol, is defined as the expected value of the information content of the symbols.[31][32]
The amount of information conveyed by an individual source symbol x_i with probability p(x_i) is known as its self-information or surprisal, I(x_i). This quantity is defined as:[33][34]
I(x_i) = −log2 p(x_i)
A less probable symbol has a larger surprisal, meaning its occurrence provides more information.[33] The entropy is the weighted average of the surprisal of all possible symbols from the source's probability distribution:[35][36]
H = −Σ_i p(x_i) log2 p(x_i)
Intuitively, the entropy of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known.[31] A high entropy indicates the outcomes are more evenly distributed, making the result harder to predict.[37]
For example, if one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, no information is transmitted. If, however, each bit is independently and equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted.[38]
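To make the definitions above concrete, the following short Python sketch (an editorial illustration, not part of the original article; the function names are ad hoc) computes the surprisal of an outcome and the entropy of a fair coin and a fair die, and shows how the choice of logarithm base changes only the unit.

```python
import math

def surprisal(p, base=2):
    """Self-information -log(p) of an outcome with probability p."""
    return -math.log(p, base)

def entropy(probs, base=2):
    """Shannon entropy: the expected surprisal, with p*log(p) taken as 0 when p = 0."""
    return sum(p * surprisal(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
fair_die = [1 / 6] * 6

print(entropy(fair_coin))              # 1.0 bit
print(entropy(fair_die))               # ~2.585 bits (log2 of 6)
print(entropy(fair_die, base=math.e))  # ~1.792 nats
print(entropy(fair_die, base=10))      # ~0.778 hartleys
```

The die has higher entropy than the coin, matching the intuition that its outcome is harder to predict.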
Properties
A key property of entropy is that it is maximized when all the messages in the message space are equiprobable. For a source with n possible symbols, where p(x_i) = 1/n for all i, the entropy is given by:[39]
H = log2 n
This maximum value represents the most unpredictable state.[35]
For a source that emits a sequence of N symbols that are independent and identically distributed (i.i.d.), the total entropy of the message is N·H bits. If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N·H.[40][41]
Units
The choice of the logarithmic base in the entropy formula determines the unit of entropy used:[33][35]
- A base-2 logarithm (as shown in the main formula) measures entropy in bits per symbol. This unit is also sometimes called the shannon in honor of Claude Shannon.[31]
- A natural logarithm (base e) measures entropy in nats per symbol. This is often used in theoretical analysis as it avoids the need for scaling constants (like ln 2) in derivations.[42]
- Other bases are also possible. A base-10 logarithm measures entropy in decimal digits, or hartleys, per symbol.[32] A base-256 logarithm measures entropy in bytes per symbol, since 2^8 = 256.[43]
Binary entropy function
The special case of information entropy for a random variable with two outcomes (a Bernoulli trial) is the binary entropy function. This is typically calculated using a base-2 logarithm, and its unit is the shannon.[44] If one outcome has probability p, the other has probability 1 − p. The entropy is given by:[45]
H_b(p) = −p log2 p − (1 − p) log2(1 − p)
This function reaches its maximum of 1 bit when p = 1/2, corresponding to the highest uncertainty.
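A hedged Python sketch of the binary entropy function (editorial, with arbitrary sample probabilities) makes the shape of the curve easy to check numerically:

```python
import math

def binary_entropy(p):
    """H_b(p) = -p*log2(p) - (1-p)*log2(1-p), with H_b(0) = H_b(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.01, 0.1, 0.25, 0.5, 0.75, 0.99):
    print(f"H_b({p}) = {binary_entropy(p):.4f} bits")
# The values rise toward the maximum of 1 bit at p = 0.5 and fall symmetrically after it.
```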
Joint entropy
The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies.
For example, if (X, Y) represents the position of a chess piece, with X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.
Despite similar notation, joint entropy should not be confused with cross-entropy.
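The chess example can be checked directly. The sketch below (an editorial illustration; the uniform distribution over the 64 squares is an assumption of the example, not of the article) computes the joint entropy of the position and the entropies of the row and column marginals:

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform distribution over the 64 squares of a chessboard.
joint = {(r, c): 1 / 64 for r in range(8) for c in range(8)}

H_joint = entropy(joint.values())
H_row = entropy([sum(p for (r, _), p in joint.items() if r == row) for row in range(8)])
H_col = entropy([sum(p for (_, c), p in joint.items() if c == col) for col in range(8)])

print(H_joint)          # 6 bits for the position
print(H_row + H_col)    # 3 + 3 bits; equal to H_joint because row and column are independent here
```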
Conditional entropy (equivocation)
The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:
H(X|Y) = −Σ_{x,y} p(x, y) log2 p(x|y)
Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:
H(X|Y) = H(X, Y) − H(Y).
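The identity H(X|Y) = H(X, Y) − H(Y) can be verified on a small joint distribution. This is an editorial sketch; the numbers in p_xy are made up for illustration:

```python
import math

def H(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A made-up joint distribution p(x, y) over X in {0, 1} and Y in {0, 1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

# Direct computation: H(X|Y) = -sum over (x, y) of p(x, y) * log2 p(x|y)
H_X_given_Y = -sum(p * math.log2(p / p_y[y]) for (_, y), p in p_xy.items())

# Chain-rule identity: H(X|Y) = H(X, Y) - H(Y)
print(H_X_given_Y)
print(H(p_xy.values()) - H(p_y.values()))   # same value, ~0.88 bits
```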
Mutual information (transinformation)
Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by:
I(X; Y) = Σ_{x,y} p(x, y) SI(x, y) = Σ_{x,y} p(x, y) log2 [ p(x, y) / (p(x) p(y)) ],
where SI(x, y) (specific mutual information) is the pointwise mutual information.
A basic property of the mutual information is that:
I(X; Y) = H(X) − H(X|Y).
That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y.
Mutual information is symmetric:
I(X; Y) = I(Y; X) = H(X) + H(Y) − H(X, Y).
Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X:
I(X; Y) = E_{p(y)} [ D_KL( p(X|Y = y) || p(X) ) ].
In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:
I(X; Y) = D_KL( p(X, Y) || p(X) p(Y) ).
Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ² test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
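The equivalent expressions for mutual information can be checked numerically. The following sketch is editorial; the joint distribution is arbitrary:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An arbitrary joint distribution p(x, y).
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

# Definition: I(X;Y) = sum over (x, y) of p(x, y) * log2[ p(x, y) / (p(x) p(y)) ]
I = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items())

print(I)                                                              # direct definition
print(H(p_x.values()) + H(p_y.values()) - H(p_xy.values()))           # H(X) + H(Y) - H(X,Y)
print(H(p_x.values()) - (H(p_xy.values()) - H(p_y.values())))         # H(X) - H(X|Y)
```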
Kullback–Leibler divergence (information gain)
The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X) is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined:
D_KL( p(X) || q(X) ) = Σ_x p(x) log2 [ p(x) / q(x) ].
Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
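A minimal sketch of the Kullback–Leibler divergence (editorial; the distributions p and q are invented) shows both its interpretation as extra coding cost and its asymmetry:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) in bits; assumes q(x) > 0 wherever p(x) > 0."""
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

# A "true" distribution p and a mismatched model q over the same alphabet.
p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
q = {'a': 1 / 3, 'b': 1 / 3, 'c': 1 / 3}

print(kl_divergence(p, q))  # average extra bits per symbol paid for coding with q
print(kl_divergence(q, p))  # a different value: KL divergence is not symmetric
```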
Directed Information
Directed information, I(X^n → Y^n), is an information theory measure that quantifies the information flow from the random process X^n = {X_1, X_2, ..., X_n} to the random process Y^n = {Y_1, Y_2, ..., Y_n}. The term directed information was coined by James Massey and is defined as:
I(X^n → Y^n) = Σ_{i=1}^{n} I(X^i; Y_i | Y^{i−1}),
where I(X^i; Y_i | Y^{i−1}) denotes the conditional mutual information I(X_1, X_2, ..., X_i; Y_i | Y_1, Y_2, ..., Y_{i−1}).
In contrast to mutual information, directed information is not symmetric. I(X^n → Y^n) measures the information bits that are transmitted causally from X^n to Y^n. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback,[46][47] the capacity of discrete memoryless networks with feedback,[48] gambling with causal side information,[49] compression with causal side information,[50] real-time control communication settings,[51][52] and statistical physics.[53]
Other quantities
Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision.
Coding theory
Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.
- Data compression (source coding): There are two formulations for the compression problem:
- Lossless data compression: the data must be reconstructed exactly;
- Lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory.
- Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.
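As a hedged illustration of the source-coding side (an editorial sketch, not a description of any particular standard), a binary Huffman code built for a skewed memoryless source has an expected codeword length that is close to, and never below, the source entropy:

```python
import heapq
import math

def huffman_lengths(probs):
    """Return a dict mapping each symbol to its codeword length in a binary Huffman code."""
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        counter += 1
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, counter, merged))
    return heap[0][2]

source = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
lengths = huffman_lengths(source)

entropy = -sum(p * math.log2(p) for p in source.values())
avg_length = sum(source[s] * lengths[s] for s in source)

print(lengths)                 # {'a': 1, 'b': 2, 'c': 3, 'd': 3}
print(entropy, avg_length)     # both 1.75 bits/symbol here, since the probabilities are powers of 1/2
```

For sources whose probabilities are not powers of 1/2, the average length exceeds the entropy by less than one bit per symbol.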
Source theory
Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.
Rate
Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is:
r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, X_{n−3}, ...);
that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is:
r = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n);
that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.[54]
The information rate is defined as:
It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.
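The effect of memory on the rate can be seen in a small example. The sketch below (editorial; the transition probabilities are arbitrary) computes the entropy rate of a two-state Markov source and compares it with the entropy of an i.i.d. source having the same symbol frequencies:

```python
import math

# Transition matrix of a two-state Markov source (rows sum to 1).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Stationary distribution pi solves pi = pi P; for two states it has a closed form.
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1 - pi0]

# Entropy rate: the expected conditional entropy of the next symbol given the current state.
rate = sum(pi[i] * sum(-p * math.log2(p) for p in P[i] if p > 0) for i in range(2))

# Entropy of the stationary marginal, i.e. of a memoryless source with the same frequencies.
marginal = -sum(p * math.log2(p) for p in pi)

print(rate, marginal)   # the rate (~0.56 bits/symbol) is lower than the marginal entropy (~0.65)
```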
Channel capacity
Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.
Consider the communications process over a discrete channel. A simple model of the process is:
Message W → Encoder f_n → (encoded sequence X^n) → Channel p(y|x) → (received sequence Y^n) → Decoder g_n → Estimated message Ŵ
Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:
C = max_f I(X; Y).
This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.
Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.
Capacity of particular channel models
- A continuous-time analog communications channel subject to Gaussian noise—see Shannon–Hartley theorem.
- A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − H_b(p) bits per channel use, where H_b(p) = −p log2 p − (1 − p) log2(1 − p) is the binary entropy function to the base-2 logarithm; a numerical check of this value appears in the sketch after this list.
- A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use.
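The closed-form capacities above can be sanity-checked against the definition C = max_f I(X; Y). The following editorial sketch (with an arbitrary crossover probability of 0.11) does this for the binary symmetric channel by brute-force search over input distributions, and prints the binary erasure channel capacity as well:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(q, p):
    """I(X;Y) for a BSC with crossover probability p and input distribution P(X=1) = q."""
    y1 = q * (1 - p) + (1 - q) * p   # probability that the output bit is 1
    return h2(y1) - h2(p)            # H(Y) - H(Y|X); the noise term h2(p) does not depend on q

p = 0.11
C_formula = 1 - h2(p)                                                  # closed form: 1 - H_b(p)
C_search = max(bsc_mutual_information(q / 1000, p) for q in range(1001))
print(C_formula, C_search)           # agree; the maximum is attained at the uniform input q = 0.5

print(1 - p)                         # capacity of a binary erasure channel with erasure probability p
```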
Channels with memory and directed information
In practice many channels have memory. Namely, at time i the channel is given by the conditional probability P(y_i | x_i, x_{i−1}, x_{i−2}, ..., x_1, y_{i−1}, y_{i−2}, ..., y_1). It is often more convenient to use the notation x^i = (x_i, x_{i−1}, x_{i−2}, ..., x_1), and the channel becomes P(y_i | x^i, y^{i−1}). In such a case the capacity is given by the mutual information rate when there is no feedback available and the directed information rate whether or not there is feedback[46][55] (if there is no feedback the directed information equals the mutual information).
Fungible information
Fungible information is the information for which the means of encoding is not important.[56] Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information.[57]
Applications to other fields
Intelligence uses and secrecy applications
Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.
Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time.
Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
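The mechanics behind this claim can be demonstrated with a toy one-time pad (an editorial sketch; the messages are invented). With the key, decryption is exact; without it, any equally long plaintext is consistent with the ciphertext, which is why the ciphertext alone carries no information about the message:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))   # uniformly random, used only once
ciphertext = xor_bytes(plaintext, key)

# With the key, the receiver recovers the message exactly.
assert xor_bytes(ciphertext, key) == plaintext

# Without the key, an eavesdropper cannot rule anything out: for any candidate
# message of the same length there is a key that "explains" the ciphertext.
other = b"RETREAT AT TEN"
fake_key = xor_bytes(ciphertext, other)
assert xor_bytes(ciphertext, fake_key) == other

print(ciphertext.hex())
```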
Pseudorandom number generation
Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptographic use.
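The gap between Shannon entropy and min-entropy is easy to exhibit. In this editorial sketch (the skewed distribution is invented), a 256-outcome variable looks reasonably random by Shannon entropy yet offers an attacker a 50% chance of guessing it in one try:

```python
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """Renyi entropy of order infinity: -log2 of the probability of the most likely outcome."""
    return -math.log2(max(probs))

# One heavy outcome with probability 0.5; the remaining 255 outcomes share the rest uniformly.
probs = [0.5] + [0.5 / 255] * 255

print(shannon_entropy(probs))   # ~5.0 bits
print(min_entropy(probs))       # 1.0 bit: the best single guess succeeds half the time
```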
Seismic exploration
One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.[58]
Semiotics
Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics.[59][60] Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing."[59]
Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.[61]
Integrated process organization of neural information
Quantitative information theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience.[62] In this context, either an information-theoretical measure, such as functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)[63]) or effective information (Tononi's integrated information theory (IIT) of consciousness[64][65][66]), is defined (on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations), or the measure of the minimization of free energy on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis[67][68][69][70][71]).
Miscellaneous applications
Information theory also has applications in the search for extraterrestrial intelligence,[72] black holes,[73] bioinformatics,[74] and gambling.[75][76]
See also
- Algorithmic probability
- Bayesian inference
- Communication theory
- Constructor theory – a generalization of information theory that includes quantum information
- Formal science
- Inductive probability
- Info-metrics
- Minimum message length
- Minimum description length
- Philosophy of information
Applications
- Active networking
- Cryptanalysis
- Cryptography
- Cybernetics
- Entropy in thermodynamics and information theory
- Gambling
- Intelligence (information gathering)
- Seismic exploration
History
- Hartley, R.V.L.
- History of information theory
- Shannon, C.E.
- Timeline of information theory
- Yockey, H.P.
- Andrey Kolmogorov
Theory
- Coding theory
- Detection theory
- Estimation theory
- Fisher information
- Information algebra
- Information asymmetry
- Information field theory
- Information geometry
- Information theory and measure theory
- Kolmogorov complexity
- List of unsolved problems in information theory
- Logic of information
- Network coding
- Philosophy of information
- Quantum information science
- Source coding
Concepts
- Ban (unit)
- Channel capacity
- Communication channel
- Communication source
- Conditional entropy
- Covert channel
- Data compression
- Decoder
- Differential entropy
- Fungible information
- Information fluctuation complexity
- Information entropy
- Joint entropy
- Kullback–Leibler divergence
- Mutual information
- Pointwise mutual information (PMI)
- Receiver (information theory)
- Redundancy
- Rényi entropy
- Self-information
- Unicity distance
- Variety
- Hamming distance
- Perplexity
References
Further reading
The classic work
- Shannon, C.E. (1948), "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379–423 & 623–656, July & October, 1948. PDF. Template:Webarchive
- R.V.L. Hartley, "Transmission of Information", Bell System Technical Journal, July 1928
- Andrey Kolmogorov (1968), "Three approaches to the quantitative definition of information" in International Journal of Computer Mathematics, 2, pp. 157–168.
Other journal articles
- J. L. Kelly Jr., "A New Interpretation of Information Rate", Bell System Technical Journal, Vol. 35, July 1956, pp. 917–26.
- R. Landauer, "Information is Physical", Proc. Workshop on Physics and Computation PhysComp'92 (IEEE Comp. Sci. Press, Los Alamitos, 1993) pp. 1–4.
- Script error: No such module "Citation/CS1".
- Script error: No such module "citation/CS1".
Textbooks on information theory
- Alajaji, F. and Chen, P.N. An Introduction to Single-User Information Theory. Singapore: Springer, 2018.
- Arndt, C. Information Measures, Information and its Description in Science and Engineering (Springer Series: Signals and Communication Technology), 2004.
- Script error: No such module "citation/CS1".
- Gallager, R. Information Theory and Reliable Communication. New York: John Wiley and Sons, 1968.
- Goldman, S. Information Theory. New York: Prentice Hall, 1953. New York: Dover, 1968; 2005.
- Script error: No such module "citation/CS1".
- Csiszár, I., and Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems. Akadémiai Kiadó, 2nd edition, 1997.
- MacKay, David J. C. Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003.
- Mansuripur, M. Introduction to Information Theory. New York: Prentice Hall, 1987.
- McEliece, R. The Theory of Information and Coding. Cambridge, 2002.
- Pierce, J.R. An Introduction to Information Theory: Symbols, Signals and Noise. Dover (2nd edition), 1961 (reprinted by Dover 1980).
- Script error: No such module "citation/CS1".
- Script error: No such module "citation/CS1".
- Stone, JV. Chapter 1 of Information Theory: A Tutorial Introduction, University of Sheffield, England, 2014.
- Yeung, RW. A First Course in Information Theory. Kluwer Academic/Plenum Publishers, 2002.
- Yeung, RW. Information Theory and Network Coding. Springer, 2008.
Other books
- Leon Brillouin, Science and Information Theory, Mineola, N.Y.: Dover, [1956, 1962] 2004.
- Script error: No such module "citation/CS1".
- A. I. Khinchin, Mathematical Foundations of Information Theory, New York: Dover, 1957.
- H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, New Jersey (1990).
- Robert K. Logan. What is Information? - Propagating Organization in the Biosphere, the Symbolosphere, the Technosphere and the Econosphere, Toronto: DEMO Publishing.
- Tom Siegfried, The Bit and the Pendulum, Wiley, 2000.
- Charles Seife, Decoding the Universe, Viking, 2006.
- Jeremy Campbell, Grammatical Man, Touchstone/Simon & Schuster, 1982.
- Henri Theil, Economics and Information Theory, Rand McNally & Company - Chicago, 1967.
- Escolano, Suau, Bonev, Information Theory in Computer Vision and Pattern Recognition, Springer, 2009.
- Vlatko Vedral, Decoding Reality: The Universe as Quantum Information, Oxford University Press, 2010.
External links
- Lambert F. L. (1999), "Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education
- IEEE Information Theory Society and ITSOC Monographs, Surveys, and Reviews
Script error: No such module "Navbox". Template:Informatics Template:Compression methods Script error: No such module "Navbox". Template:Computer science
- ↑ Nöth, Winfried (1981). "Semiotics of ideology". Semiotica, Issue 148.