Bit
The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit.[1] The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also widely used.
The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device.
A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined.[2] Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually a nibble.
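To make the grouping concrete, here is a minimal C sketch (assuming the now-universal 8-bit byte; the values and names are illustrative only) that splits one byte into its two nibbles with shifts and masks:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t byte = 0xA7;                /* one byte: bit pattern 1010 0111 */
    uint8_t high = (byte >> 4) & 0x0F;  /* upper nibble: 1010 = 0xA */
    uint8_t low  = byte & 0x0F;         /* lower nibble: 0111 = 0x7 */
    printf("high nibble: 0x%X, low nibble: 0x%X\n",
           (unsigned)high, (unsigned)low);
    return 0;
}
```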
In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability,[3] or the information that is gained when the value of such a variable becomes known.[4][5] As a unit of information, the bit is also known as a shannon,[6] named after Claude E. Shannon. As a measure of the length of a digital string that is encoded as symbols over a binary alphabet (i.e. Σ = {0, 1}), the bit has been called a binit,[7] but this usage is now rare.[8]
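Concretely, a binary variable that takes the value 0 with probability p has entropy H(p) = −p log₂ p − (1 − p) log₂ (1 − p) bits. At p = 1/2 this evaluates to H(1/2) = 1/2 + 1/2 = 1 bit, the maximum for a binary variable, which is why the equiprobable case defines the unit; a biased variable carries less, e.g. H(0.9) ≈ 0.469 bits.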
In data compression, the goal is to find a shorter representation for a string, so that it requires fewer bits when stored or transmitted; the string would be compressed into the shorter representation before doing so, and then decompressed into its original form when read from storage or received. The field of algorithmic information theory is devoted to the study of the irreducible information content of a string (i.e., its shortest-possible representation length, in bits), under the assumption that the receiver has minimal a priori knowledge of the method used to compress the string. In error detection and correction, the goal is to add redundant data to a string, to enable the detection or correction of errors during storage or transmission; the redundant data would be computed before doing so, and stored or transmitted, and then checked or corrected when the data is read or received.
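As a minimal sketch of the error-detection idea, the simplest redundancy scheme is a single even-parity bit appended to the data (illustrative code, not any particular code from the literature):

```c
#include <stdio.h>
#include <stdint.h>

/* Even parity over the low 7 bits: returns 1 if the count of 1-bits is
   odd, so that appending the result makes the total count even. */
static uint8_t parity7(uint8_t data) {
    uint8_t p = 0;
    for (int i = 0; i < 7; i++)
        p ^= (data >> i) & 1u;
    return p;
}

int main(void) {
    uint8_t data = 0x45;                         /* 7-bit payload 100 0101 */
    uint8_t sent = (uint8_t)((data << 1) | parity7(data));
    uint8_t received = sent ^ 0x10;              /* simulate a 1-bit error */
    /* Receiver recomputes parity over payload and parity bit together:
       a nonzero result means a single-bit error occurred. */
    uint8_t check = parity7(received >> 1) ^ (received & 1u);
    printf("error detected: %s\n", check ? "yes" : "no");
    return 0;
}
```

A single parity bit detects any odd number of flipped bits but cannot locate them; correcting errors requires more redundancy, as in Hamming codes.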
The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte.
History
Ralph Hartley suggested the use of a logarithmic measure of information in 1928.[9] Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication".[10][11][12] He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit".[10]
Physical representation
A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc.
Perhaps the earliest example of a binary storage device was the punched card invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes (1870).
The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". These relays functioned as mechanical switches, physically toggling between states to represent binary data, forming the fundamental building blocks of early computing and control systems. When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques.
In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards.
In modern semiconductor memory, such as dynamic random-access memory or a solid-state drive, the two values of a bit are represented by two levels of electric charge stored in a capacitor or a floating-gate MOSFET. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes and two-dimensional QR codes, bits are encoded as lines or squares which may be either black or white.
In modern digital computing, bits are transformed by Boolean logic gates.
Transmission and processing
Bits are transmitted one at a time in serial transmission, whereas multiple bits are transmitted simultaneously in parallel transmission. A serial computer processes information in either a bit-serial or a byte-serial fashion. From the standpoint of data communications, a byte-serial transmission is an 8-way parallel transmission with binary signalling.
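A toy model of bit-serial transmission in C (illustrative only; the MSB-first order and the transmit_serial helper are assumptions, as real links differ in framing and bit order):

```c
#include <stdio.h>
#include <stdint.h>

/* Toy serial line: emit one byte MSB-first as eight consecutive bits,
   one per time slot, into the out[] array. */
static void transmit_serial(uint8_t byte, uint8_t out[8]) {
    for (int i = 0; i < 8; i++)
        out[i] = (byte >> (7 - i)) & 1u;
}

int main(void) {
    uint8_t line[8];
    transmit_serial('A', line);            /* 'A' = 0x41 = 0100 0001 */
    for (int i = 0; i < 8; i++)
        printf("%u", (unsigned)line[i]);   /* prints 01000001 */
    printf("\n");
    return 0;
}
```

A parallel bus would instead present all eight bits at once on eight separate wires.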
In programming languages such as C, a bitwise operation operates on binary strings as though they are vectors of bits, rather than interpreting them as binary numbers.
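For example (operand values chosen arbitrarily), each C bitwise operator combines its operands position by position rather than as whole numbers:

```c
#include <stdio.h>

int main(void) {
    unsigned a = 0x0C, b = 0x0A;        /* bit patterns 1100 and 1010 */
    printf("a & b  = 0x%X\n", a & b);   /* AND:   1000 -> 0x8 */
    printf("a | b  = 0x%X\n", a | b);   /* OR:    1110 -> 0xE */
    printf("a ^ b  = 0x%X\n", a ^ b);   /* XOR:   0110 -> 0x6 */
    printf("a << 1 = 0x%X\n", a << 1);  /* shift: 11000 -> 0x18 */
    return 0;
}
```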
Data transfer rates are usually measured in decimal SI multiples. For example, a channel capacity may be specified as 8 kbit/s = 1 kB/s.
Storage
File sizes are often measured in (binary) IEC multiples of bytes, for example 1 KiB = 1024 bytes = 8192 bits. Confusion may arise in cases where (for historic reasons) file sizes are specified with binary multipliers using the ambiguous prefixes K, M, and G rather than the IEC standard prefixes Ki, Mi, and Gi.[13]
Mass storage devices are usually measured in decimal SI multiples, for example 1 TB = 10^12 bytes.
Confusingly, the storage capacity of a directly addressable memory device, such as a DRAM chip, or an assemblage of such chips on a memory module, is specified as a binary multiple, using the ambiguous prefix G rather than the IEC recommended Gi prefix. For example, a DRAM chip that is specified (and advertised) as having "1 GB" of capacity has 2^30 (1,073,741,824) bytes of capacity. As of 2022, the difference between the popular understanding of a memory system with "8 GB" of capacity, and the SI-correct meaning of "8 GB" was still causing difficulty to software designers.[14]
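The size of that gap is easy to compute; this short C sketch (illustrative figures only) contrasts the binary and decimal readings of "8 GB":

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t binary_8gb  = 8ull << 30;    /* 8 * 2^30 = 8,589,934,592 bytes */
    uint64_t decimal_8gb = 8000000000ull; /* 8 * 10^9 bytes, the SI reading */
    printf("\"8 GB\" binary : %llu bytes\n", (unsigned long long)binary_8gb);
    printf("\"8 GB\" decimal: %llu bytes\n", (unsigned long long)decimal_8gb);
    printf("difference    : %.1f%%\n",
           100.0 * (double)(binary_8gb - decimal_8gb) / (double)decimal_8gb);
    return 0;
}
```

The binary reading is about 7.4% larger, which is exactly the kind of mismatch described above.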
Unit and symbol
The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit.[15] However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter 'B' is the standard and customary symbol for byte.
Multiple bits
"MBit" and "Tbit" redirect here. For the technical high school, see MBIT. For the international terminal in Los Angeles International Airport (LAX), see TBIT.
Multiple-bit units

Decimal (metric prefixes)

| Value | Symbol | Name |
|---|---|---|
| 1000 | kbit | kilobit |
| 1000^2 | Mbit | megabit |
| 1000^3 | Gbit | gigabit |
| 1000^4 | Tbit | terabit |
| 1000^5 | Pbit | petabit |
| 1000^6 | Ebit | exabit |
| 1000^7 | Zbit | zettabit |
| 1000^8 | Ybit | yottabit |
| 1000^9 | Rbit | ronnabit |
| 1000^10 | Qbit | quettabit |

Binary (IEC prefixes, alongside the historical "memory" convention in which Kbit, Mbit, etc. denote binary multiples of directly addressable memory)

| Value | IEC symbol | IEC name | Memory symbol | Memory name |
|---|---|---|---|---|
| 1024 | Kibit | kibibit | Kbit (Kb) | kilobit |
| 1024^2 | Mibit | mebibit | Mbit (Mb) | megabit |
| 1024^3 | Gibit | gibibit | Gbit (Gb) | gigabit |
| 1024^4 | Tibit | tebibit | — | — |
| 1024^5 | Pibit | pebibit | — | — |
| 1024^6 | Eibit | exbibit | — | — |
| 1024^7 | Zibit | zebibit | — | — |
| 1024^8 | Yibit | yobibit | — | — |
| 1024^9 | Ribit | robibit | — | — |
| 1024^10 | Qibit | quebibit | — | — |

See also: Orders of magnitude of data
Multiple bits may be expressed and represented in several ways. For convenience of representing commonly recurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer[2][16][17][18][19] and for this reason it was used as the basic addressable element in many computer architectures. By 1993, the trend in hardware design had converged on the 8-bit byte.[20] However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits.
Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the early 21st century, retail personal or server computers have a word size of 32 or 64 bits.
The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10^3) through quetta (10^30) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the quettabit (Qbit).
See also
- Qubit – quantum bit, the quantum-mechanical analogue of the bit
- Trit – ternary digit
References
- ↑ Mackenzie, Charles E. (1980). Coded Character Sets, History and Development. The Systems Programming Series (1st ed.). Addison-Wesley. p. x. ISBN 978-0-201-14460-4. LCCN 77-90165.
- ↑ a b Bemer, Robert William (2000-08-08). "Why is a byte 8 bits? Or is it?". Computer History Vignettes.
- ↑ Anderson, John B.; Johnnesson, Rolf (2006). Understanding Information Transmission.
- ↑ Haykin, Simon (2006). Digital Communications.
- ↑ IEEE Std 260.1-2004
- ↑ Rowlett, Russ. How Many? A Dictionary of Units of Measurement. University of North Carolina at Chapel Hill.
- ↑ Breipohl, Arthur M. (1963-08-18). Adaptive Communication Systems. University of New Mexico. p. 7. Retrieved 7 January 2025.
- ↑ "binit". The Free Dictionary. Retrieved 7 January 2025.
- ↑ Hartley, R. V. L. (July 1928). "Transmission of Information". Bell System Technical Journal. 7 (3): 535–563.
- ↑ a b Shannon, Claude E. (July 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x.
- ↑ Shannon, Claude E. (October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (4): 623–666.
- ↑ Shannon, Claude E.; Weaver, Warren (1949). The Mathematical Theory of Communication. Urbana, Illinois: University of Illinois Press.
- ↑ (full citation not preserved)
- ↑ (full citation not preserved)
- ↑ National Institute of Standards and Technology (2008). Guide for the Use of the International System of Units. Online version.
- ↑ Buchholz, Werner (1956-06-11). "7. The Shift Matrix". The Link System. IBM. Stretch Memo No. 39G.
- ↑ Buchholz, Werner (February 1977). "The Word 'Byte' Comes of Age...". Byte Magazine. 2 (2): 144.
- ↑ Blaauw, Gerrit Anne; Brooks, Frederick Phillips Jr.; Buchholz, Werner (1962). "Chapter 4: Natural Data Units". In Buchholz, Werner (ed.). Planning a Computer System – Project Stretch. McGraw-Hill. pp. 39–40. LCCN 61-10466.
- ↑ Bemer, Robert William (1959). "A proposal for a generalized card code of 256 characters". Communications of the ACM. 2 (9): 19–23. doi:10.1145/368424.368435.
- ↑ "ISO/IEC 2382-1:1993(en) Information technology — Vocabulary — Part 1: Fundamental terms". International Organization for Standardization. 01.02.09. Retrieved 8 January 2025.
External links
- Bit Calculator – a tool providing conversions between bit, byte, kilobit, kilobyte, megabit, megabyte, gigabit, gigabyte
- BitXByteConverter – a tool for computing file sizes, storage capacity, and digital information in various units