<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Decoding_methods</id>
	<title>Decoding methods - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Decoding_methods"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Decoding_methods&amp;action=history"/>
	<updated>2026-04-30T22:00:15Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Decoding_methods&amp;diff=1408328&amp;oldid=prev</id>
		<title>imported&gt;WikiEditor50: Lowercase &quot;theorem&quot;</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Decoding_methods&amp;diff=1408328&amp;oldid=prev"/>
		<updated>2025-03-12T06:19:22Z</updated>

		<summary type="html">&lt;p&gt;Lowercase &amp;quot;theorem&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Algorithms to decode messages}}&lt;br /&gt;
{{use dmy dates|date=December 2020|cs1-dates=y}}&lt;br /&gt;
In [[coding theory]], &amp;#039;&amp;#039;&amp;#039;decoding&amp;#039;&amp;#039;&amp;#039; is the process of translating received messages into [[Code word (communication)|codewords]] of a given [[code]]. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a [[noisy channel]], such as a [[binary symmetric channel]].&lt;br /&gt;
&lt;br /&gt;
==Notation==&lt;br /&gt;
&amp;lt;math&amp;gt;C \subset \mathbb{F}_2^n&amp;lt;/math&amp;gt; is a [[binary code]] of length &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;x,y&amp;lt;/math&amp;gt; are elements of &amp;lt;math&amp;gt;\mathbb{F}_2^n&amp;lt;/math&amp;gt;; and &amp;lt;math&amp;gt;d(x,y)&amp;lt;/math&amp;gt; is the [[Hamming distance]] between those elements.&lt;br /&gt;
&lt;br /&gt;
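The distance used throughout is the Hamming distance, the number of positions in which two words differ. A minimal sketch in Python (the function name is illustrative, not from the article):

```python
def hamming_distance(x, y):
    """Number of positions in which two words of equal length differ."""
    assert len(x) == len(y)
    return sum(xi != yi for xi, yi in zip(x, y))

hamming_distance([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])  # → 2 (positions 1 and 3 differ)
```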
==Ideal observer decoding==&lt;br /&gt;
Given the received message &amp;lt;math&amp;gt;x \in \mathbb{F}_2^n&amp;lt;/math&amp;gt;, &amp;#039;&amp;#039;&amp;#039;ideal observer decoding&amp;#039;&amp;#039;&amp;#039; picks the codeword &amp;lt;math&amp;gt;y \in C&amp;lt;/math&amp;gt; that maximises&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbb{P}(y \mbox{ sent} \mid x \mbox{ received})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, the codeword &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; is chosen as the codeword most likely to have been sent, given that the message &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; was received.&lt;br /&gt;
&lt;br /&gt;
===Decoding conventions===&lt;br /&gt;
The most likely codeword need not be unique: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include:&lt;br /&gt;
&lt;br /&gt;
:# Request that the codeword be resent{{snd}} [[automatic repeat-request]].&lt;br /&gt;
:# Choose any codeword at random from the set of most likely codewords.&lt;br /&gt;
:# If [[Concatenated error correction code|another code follows]], mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them.&lt;br /&gt;
:# Report a decoding failure to the system.&lt;br /&gt;
&lt;br /&gt;
==Maximum likelihood decoding==&lt;br /&gt;
{{Further|Maximum likelihood}}&lt;br /&gt;
&lt;br /&gt;
Given a received vector &amp;lt;math&amp;gt;x \in \mathbb{F}_2^n&amp;lt;/math&amp;gt;, &amp;#039;&amp;#039;&amp;#039;[[maximum likelihood]] decoding&amp;#039;&amp;#039;&amp;#039; picks a codeword &amp;lt;math&amp;gt;y \in C&amp;lt;/math&amp;gt; that [[Optimization (mathematics)|maximises]]&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent})&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
that is, the codeword &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; that maximizes the probability that &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; was received, [[conditional probability|given that]] &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding.&lt;br /&gt;
In fact, by [[Bayes&amp;#039; theorem]],&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) &amp;amp; {} = \frac{ \mathbb{P}(x \mbox{ received} , y \mbox{ sent}) }{\mathbb{P}(y \mbox{ sent} )} \\&lt;br /&gt;
&amp;amp; {} = \mathbb{P}(y \mbox{ sent} \mid x \mbox{ received}) \cdot \frac{\mathbb{P}(x \mbox{ received})}{\mathbb{P}(y \mbox{ sent})}.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Upon fixing the received word &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;, the factor &amp;lt;math&amp;gt;\mathbb{P}(x \mbox{ received})&amp;lt;/math&amp;gt; is constant, and&lt;br /&gt;
&amp;lt;math&amp;gt;\mathbb{P}(y \mbox{ sent})&amp;lt;/math&amp;gt; is constant as all codewords are equally likely to be sent.&lt;br /&gt;
Therefore, &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
is maximised as a function of the variable &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; precisely when&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{P}(y \mbox{ sent}\mid x \mbox{ received} ) &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
is maximised, and the claim follows.&lt;br /&gt;
&lt;br /&gt;
As with ideal observer decoding, a convention must be agreed to for non-unique decoding.&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood decoding problem can also be modeled as an [[integer programming]] problem.&amp;lt;ref name=&amp;quot;Feldman_2005&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum likelihood decoding algorithm is an instance of the &amp;quot;marginalize a product function&amp;quot; problem which is solved by applying the [[generalized distributive law]].&amp;lt;ref name=&amp;quot;Aji-McEliece_2000&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
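For a small code over a binary symmetric channel with crossover probability &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, maximum likelihood decoding can be done by brute force, since &amp;lt;math&amp;gt;\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) = (1-p)^{n-d} p^d&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;d = d(x,y)&amp;lt;/math&amp;gt;. A sketch, assuming the code is given as an explicit list of codewords (names are illustrative):

```python
def ml_decode(x, code, p):
    """Brute-force ML decoding on a BSC(p): maximise P(x received | y sent)."""
    n = len(x)

    def likelihood(y):
        d = sum(xi != yi for xi, yi in zip(x, y))  # Hamming distance d(x, y)
        return (1 - p) ** (n - d) * p ** d

    return max(code, key=likelihood)

# Length-3 repetition code; a single flipped bit is corrected.
code = [(0, 0, 0), (1, 1, 1)]
ml_decode((1, 0, 1), code, p=0.1)  # → (1, 1, 1)
```

Enumerating all of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is only feasible for tiny codes; the syndrome and information set methods below exist precisely to avoid this exhaustive search.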
==Minimum distance decoding==&lt;br /&gt;
Given a received vector &amp;lt;math&amp;gt;x \in \mathbb{F}_2^n&amp;lt;/math&amp;gt;, &amp;#039;&amp;#039;&amp;#039;minimum distance decoding&amp;#039;&amp;#039;&amp;#039; picks a codeword &amp;lt;math&amp;gt;y \in C&amp;lt;/math&amp;gt; to minimise the [[Hamming distance]]:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;d(x,y) = \# \{i : x_i \not = y_i \}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i.e. choose the codeword &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; that is as close as possible to &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that if the probability of error &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; on a [[discrete memoryless channel]] is strictly less than one half, then &amp;#039;&amp;#039;minimum distance decoding&amp;#039;&amp;#039; is equivalent to &amp;#039;&amp;#039;maximum likelihood decoding&amp;#039;&amp;#039;, since if&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;d(x,y) = d,\,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) &amp;amp; {} = (1-p)^{n-d} \cdot p^d \\&lt;br /&gt;
&amp;amp; {} = (1-p)^n \cdot \left( \frac{p}{1-p}\right)^d \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which (since &amp;#039;&amp;#039;p&amp;#039;&amp;#039; is less than one half) is maximised by minimising &amp;#039;&amp;#039;d&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
Minimum distance decoding is also known as &amp;#039;&amp;#039;nearest neighbour decoding&amp;#039;&amp;#039;. It can be assisted or automated by using a [[standard array]]. Minimum distance decoding is a reasonable decoding method when the following conditions are met:&lt;br /&gt;
&lt;br /&gt;
:#The probability &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; that an error occurs is independent of the position of the symbol.&lt;br /&gt;
:#Errors are independent events{{snd}} an error at one position in the message does not affect other positions.&lt;br /&gt;
&lt;br /&gt;
These assumptions may be reasonable for transmissions over a [[binary symmetric channel]]. They may be unreasonable for other media, such as a DVD, where a single scratch on the disk can cause an error in many neighbouring symbols or codewords.&lt;br /&gt;
&lt;br /&gt;
As with other decoding methods, a convention must be agreed to for non-unique decoding.&lt;br /&gt;
&lt;br /&gt;
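The equivalence above can be checked numerically on a toy code: for any &amp;lt;math&amp;gt;p &amp;lt; 1/2&amp;lt;/math&amp;gt;, the codeword nearest in Hamming distance is also the likelihood maximiser. A sketch (the code and helper names are illustrative):

```python
def nearest(x, code):
    """Minimum distance decoding: the codeword closest to x in Hamming distance."""
    return min(code, key=lambda y: sum(xi != yi for xi, yi in zip(x, y)))

def likelihood(x, y, p):
    """P(x received | y sent) on a BSC(p), i.e. (1-p)^(n-d) * p^d."""
    d = sum(xi != yi for xi, yi in zip(x, y))
    return (1 - p) ** (len(x) - d) * p ** d

code = [(0, 0, 0, 0), (1, 1, 1, 0), (0, 1, 1, 1)]
x = (1, 1, 0, 0)
# For p < 1/2 the nearest codeword also maximises the likelihood.
assert nearest(x, code) == max(code, key=lambda y: likelihood(x, y, 0.2))
```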
==Syndrome decoding==&lt;br /&gt;
&amp;lt;!-- [[Syndrome decoding]] redirects here --&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Syndrome decoding&amp;#039;&amp;#039;&amp;#039; is a highly efficient method of decoding a [[linear code]] over a &amp;#039;&amp;#039;noisy channel&amp;#039;&amp;#039;, i.e. one on which errors are made. In essence, syndrome decoding is &amp;#039;&amp;#039;minimum distance decoding&amp;#039;&amp;#039; using a reduced lookup table. This is allowed by the linearity of the code.&amp;lt;ref name=&amp;quot;Beutelspacher-Rosenbaum_1998&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;C\subset \mathbb{F}_2^n&amp;lt;/math&amp;gt; is a linear code of length &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, dimension &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, and minimum distance &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; with [[parity-check matrix]] &amp;lt;math&amp;gt;H&amp;lt;/math&amp;gt;. Then clearly &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is capable of correcting up to&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;t = \left\lfloor\frac{d-1}{2}\right\rfloor&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
errors made by the channel (since if no more than &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; errors are made then minimum distance decoding will still correctly decode the incorrectly transmitted codeword).&lt;br /&gt;
&lt;br /&gt;
Now suppose that a codeword &amp;lt;math&amp;gt;x \in \mathbb{F}_2^n&amp;lt;/math&amp;gt; is sent over the channel and the error pattern &amp;lt;math&amp;gt;e \in \mathbb{F}_2^n&amp;lt;/math&amp;gt; occurs. Then &amp;lt;math&amp;gt;z=x+e&amp;lt;/math&amp;gt; is received. Ordinary minimum distance decoding would look up the vector &amp;lt;math&amp;gt;z&amp;lt;/math&amp;gt; in a table of size &amp;lt;math&amp;gt;|C|&amp;lt;/math&amp;gt; for the nearest match, i.e. an element (not necessarily unique) &amp;lt;math&amp;gt;c \in C&amp;lt;/math&amp;gt; with&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;d(c,z) \leq d(y,z)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for all &amp;lt;math&amp;gt;y \in C&amp;lt;/math&amp;gt;. Syndrome decoding takes advantage of the property of the parity matrix that:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;Hx = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for all &amp;lt;math&amp;gt;x \in C&amp;lt;/math&amp;gt;. The &amp;#039;&amp;#039;syndrome&amp;#039;&amp;#039; of the received &amp;lt;math&amp;gt;z=x+e&amp;lt;/math&amp;gt; is defined to be:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;Hz = H(x+e) =Hx + He = 0 + He = He&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To perform [[#Maximum_likelihood_decoding|ML decoding]] in a [[binary symmetric channel]], one has to look up a precomputed table of size &amp;lt;math&amp;gt;2^{n-k}&amp;lt;/math&amp;gt;, mapping &amp;lt;math&amp;gt;He&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;e&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that this is already of significantly less complexity than that of a [[standard array|standard array decoding]].&lt;br /&gt;
&lt;br /&gt;
However, under the assumption that no more than &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; errors were made during transmission, the receiver can look up the value &amp;lt;math&amp;gt;He&amp;lt;/math&amp;gt; in a further reduced table of size&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\sum_{i=0}^t \binom{n}{i}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
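The whole procedure can be sketched for the [7,4] Hamming code (&amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;d=3&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;t=1&amp;lt;/math&amp;gt;): precompute the table from syndromes &amp;lt;math&amp;gt;He&amp;lt;/math&amp;gt; to error patterns &amp;lt;math&amp;gt;e&amp;lt;/math&amp;gt; of weight at most &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, then decode by a single lookup. Function names are illustrative:

```python
from itertools import combinations

# Parity-check matrix of the [7,4] Hamming code (d = 3, so t = 1).
# Column j (1-indexed) is the binary representation of j.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in (2, 1, 0)]

def syndrome(H, z):
    """Compute Hz over GF(2)."""
    return tuple(sum(h * zi for h, zi in zip(row, z)) % 2 for row in H)

# Reduced lookup table: syndrome He -> error pattern e of weight at most t.
n, t = 7, 1
table = {}
for w in range(t + 1):
    for pos in combinations(range(n), w):
        e = tuple(1 if i in pos else 0 for i in range(n))
        table[syndrome(H, e)] = e

def decode(z):
    """Correct up to t errors: subtract (= add, over GF(2)) the stored pattern."""
    e = table[syndrome(H, z)]
    return tuple((zi + ei) % 2 for zi, ei in zip(z, e))

# A single bit flip of the all-zero codeword is corrected:
decode((0, 0, 1, 0, 0, 0, 0))  # → (0, 0, 0, 0, 0, 0, 0)
```

Here the table has &amp;lt;math&amp;gt;\binom{7}{0} + \binom{7}{1} = 8&amp;lt;/math&amp;gt; entries, versus &amp;lt;math&amp;gt;|C| = 16&amp;lt;/math&amp;gt; codewords for the naive nearest-match search (and the Hamming code is perfect, so this also equals &amp;lt;math&amp;gt;2^{n-k}&amp;lt;/math&amp;gt;).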
== List decoding ==&lt;br /&gt;
{{Main|List decoding}}&lt;br /&gt;
&lt;br /&gt;
== Information set decoding ==&lt;br /&gt;
&lt;br /&gt;
This is a family of [[Las Vegas algorithm|Las Vegas]]-probabilistic methods, all based on the observation that it is easier to guess enough error-free positions than it is to guess all the error positions.&lt;br /&gt;
&lt;br /&gt;
The simplest form is due to Prange: Let &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; be the &amp;lt;math&amp;gt;k \times n&amp;lt;/math&amp;gt; generator matrix of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; used for encoding. Select &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; columns of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; at random, and denote by &amp;lt;math&amp;gt;G&amp;#039;&amp;lt;/math&amp;gt; the corresponding submatrix of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;. With reasonable probability &amp;lt;math&amp;gt;G&amp;#039;&amp;lt;/math&amp;gt; will have full rank, which means that if we let &amp;lt;math&amp;gt;c&amp;#039;&amp;lt;/math&amp;gt; be the sub-vector for the corresponding positions of any codeword &amp;lt;math&amp;gt;c = mG&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; for a message &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;, we can recover &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;m = c&amp;#039; G&amp;#039;^{-1}&amp;lt;/math&amp;gt;. Hence, if we are lucky and these &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; positions of the received word &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; contain no errors, so that they equal the corresponding positions of the sent codeword, then we can decode.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; errors occurred, the probability of such a fortunate selection of columns is given by &amp;lt;math&amp;gt;\textstyle\binom{n-t}{k}/\binom{n}{k}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
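That success probability is easy to evaluate, and its reciprocal gives the expected number of Las Vegas iterations. A small sketch with illustrative parameters (not from the article):

```python
from math import comb

def prange_success_prob(n, k, t):
    """Probability that k positions chosen uniformly at random avoid all t errors."""
    return comb(n - t, k) / comb(n, k)

# Illustrative parameters: a length-100, dimension-50 code with t = 10 errors.
p_success = prange_success_prob(100, 50, 10)
expected_iterations = 1 / p_success  # Las Vegas: repeat until a lucky selection
```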
This method has been improved in various ways, e.g. by Stern&amp;lt;ref name=&amp;quot;Stern_1989&amp;quot;/&amp;gt; and [[Anne Canteaut|Canteaut]] and Sendrier.&amp;lt;ref name=&amp;quot;Ohta_1998&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Partial response maximum likelihood==&lt;br /&gt;
{{Main|PRML}}&lt;br /&gt;
&lt;br /&gt;
Partial response maximum likelihood ([[PRML]]) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal.&lt;br /&gt;
&lt;br /&gt;
==Viterbi decoder==&lt;br /&gt;
{{Main|Viterbi decoder}}&lt;br /&gt;
&lt;br /&gt;
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using [[forward error correction]] based on a convolutional code.&lt;br /&gt;
The [[Hamming distance]] is used as a metric for hard decision Viterbi decoders. The &amp;#039;&amp;#039;squared&amp;#039;&amp;#039; [[Euclidean distance]] is used as a metric for soft decision decoders.&lt;br /&gt;
&lt;br /&gt;
== Optimal decision decoding algorithm (ODDA) ==&lt;br /&gt;
The optimal decision decoding algorithm (ODDA) is a decoding method for an asymmetric two-way relay channel (TWRC) system.{{Clarify|date=January 2023}}&amp;lt;ref&amp;gt;{{Citation |title=Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system |author1=Siamack Ghadimi |publisher=Universal Journal of Electrical and Electronic Engineering |date=2020}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Don&amp;#039;t care alarm]]&lt;br /&gt;
* [[Error detection and correction]]&lt;br /&gt;
* [[Forbidden input]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist|refs=&lt;br /&gt;
&amp;lt;ref name=&amp;quot;Feldman_2005&amp;quot;&amp;gt;{{cite journal |title=Using Linear Programming to Decode Binary Linear Codes |first1=Jon |last1=Feldman |first2=Martin J. |last2=Wainwright |first3=David R. |last3=Karger |journal=[[IEEE Transactions on Information Theory]] |volume=51 |issue=3 |pages=954–972 |date=March 2005 |doi=10.1109/TIT.2004.842696|s2cid=3120399 |citeseerx=10.1.1.111.6585 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;Beutelspacher-Rosenbaum_1998&amp;quot;&amp;gt;{{cite book |author-first1=Albrecht |author-last1=Beutelspacher |author-link1=Albrecht Beutelspacher |author-first2=Ute |author-last2=Rosenbaum |date=1998 |title=Projective Geometry |page=190 |publisher=[[Cambridge University Press]] |isbn=0-521-48277-1}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;Aji-McEliece_2000&amp;quot;&amp;gt;{{cite journal |last1=Aji |first1=Srinivas M. |last2=McEliece |first2=Robert J. |title=The Generalized Distributive Law |journal=[[IEEE Transactions on Information Theory]] |date=March 2000 |volume=46 |issue=2 |pages=325–343 |doi=10.1109/18.825794 |url=https://authors.library.caltech.edu/1541/1/AJIieeetit00.pdf}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;Stern_1989&amp;quot;&amp;gt;{{cite book |author-first=Jacques |author-last=Stern |date=1989 |chapter=A method for finding codewords of small weight |title=Coding Theory and Applications |series=Lecture Notes in Computer Science |publisher=[[Springer-Verlag]] |volume=388 |pages=106–113 |doi=10.1007/BFb0019850 |isbn=978-3-540-51643-9}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;Ohta_1998&amp;quot;&amp;gt;{{cite book |date=1998 |editor-last1=Ohta |editor-first1=Kazuo |editor-first2=Dingyi |editor-last2=Pei |series=Lecture Notes in Computer Science |volume=1514 |pages=187–199 |doi=10.1007/3-540-49649-1 |isbn=978-3-540-65109-3 |s2cid=37257901 |title=Advances in Cryptology — ASIACRYPT&amp;#039;98 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Further reading==&lt;br /&gt;
* {{cite book |author-last=Hill |author-first=Raymond |title=A first course in coding theory |publisher=[[Oxford University Press]] |series=Oxford Applied Mathematics and Computing Science Series |date=1986 |isbn=978-0-19-853803-5 |url-access=registration |url=https://archive.org/details/firstcourseincod0000hill}}&lt;br /&gt;
* {{cite book |author-last=Pless |author-first=Vera |author-link=Vera Pless |title=Introduction to the theory of error-correcting codes |title-link=Introduction to the Theory of Error-Correcting Codes |publisher=[[John Wiley &amp;amp; Sons]] |series=Wiley-Interscience Series in Discrete Mathematics |date=1982 |isbn=978-0-471-08684-0}}&lt;br /&gt;
* {{cite book |author-first=Jacobus H. |author-last=van Lint |title=Introduction to Coding Theory |edition=2 |publisher=[[Springer-Verlag]] |series=[[Graduate Texts in Mathematics]] (GTM) |volume=86 |date=1992 |isbn=978-3-540-54894-2 |url-access=registration |url=https://archive.org/details/introductiontoco0000lint}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Coding theory]]&lt;/div&gt;</summary>
		<author><name>imported&gt;WikiEditor50</name></author>
	</entry>
</feed>