<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=176.61.2.76</id>
	<title>wiki143 - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=176.61.2.76"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Special:Contributions/176.61.2.76"/>
	<updated>2026-05-14T19:34:00Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Quadratic_Gauss_sum&amp;diff=1709858</id>
		<title>Quadratic Gauss sum</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Quadratic_Gauss_sum&amp;diff=1709858"/>
		<updated>2025-06-25T06:42:48Z</updated>

		<summary type="html">&lt;p&gt;176.61.2.76: /* Definition */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In [[number theory]], &#039;&#039;&#039;quadratic Gauss sums&#039;&#039;&#039; are certain finite sums of roots of unity. A quadratic Gauss sum can be interpreted as a linear combination of the values of the complex [[exponential function]] with coefficients given by a quadratic character; for a general character, one obtains a more general [[Gauss sum]]. These objects are named after [[Carl Friedrich Gauss]], who studied them extensively and applied them to [[quadratic reciprocity|quadratic]], [[cubic reciprocity|cubic]], and [[biquadratic reciprocity|biquadratic]] reciprocity laws.&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
For an odd [[prime number]] {{mvar|p}} and an integer {{mvar|a}}, the &#039;&#039;&#039;quadratic Gauss sum&#039;&#039;&#039; {{math|&#039;&#039;g&#039;&#039;(&#039;&#039;a&#039;&#039;; &#039;&#039;p&#039;&#039;)}} is defined as&lt;br /&gt;
: &amp;lt;math&amp;gt; g(a;p) = \sum_{n=0}^{p-1}\zeta_p^{an^2},&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\zeta_p&amp;lt;/math&amp;gt; is a [[Primitive nth root of unity|primitive {{mvar|p}}th root of unity]], for example &amp;lt;math&amp;gt;\zeta_p=\exp(2\pi i/p)&amp;lt;/math&amp;gt;.&lt;br /&gt;
Equivalently, we can write this using the [[Legendre symbol]] as&lt;br /&gt;
: &amp;lt;math&amp;gt;g(a;p) = \sum_{n=0}^{p-1}\big(1+\left(\tfrac{n}{p}\right)\big)\,\zeta_p^{an}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For {{mvar|a}} divisible by {{mvar|p}}, we have &amp;lt;math&amp;gt;\zeta_p^{an^2}=1&amp;lt;/math&amp;gt; and thus&lt;br /&gt;
: &amp;lt;math&amp;gt; g(a;p) = p.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For {{mvar|a}} not divisible by {{mvar|p}}, we have &amp;lt;math&amp;gt;\sum_{n=0}^{p-1} \zeta_p^{an} = 0&amp;lt;/math&amp;gt;, implying that&lt;br /&gt;
: &amp;lt;math&amp;gt;g(a;p) = \sum_{n=0}^{p-1}\left(\tfrac{n}{p}\right)\,\zeta_p^{an} = G(a,\left(\tfrac{\cdot}{p}\right)),&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
: &amp;lt;math&amp;gt;G(a,\chi)=\sum_{n=0}^{p-1}\chi(n)\,\zeta_p^{an}&amp;lt;/math&amp;gt;&lt;br /&gt;
is the [[Gauss sum]] defined for any character {{mvar|χ}} modulo {{mvar|p}}.&lt;br /&gt;
&lt;br /&gt;
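The definition and its Legendre-symbol form can be checked numerically. The following sketch is an illustration, not part of the article: the helper names g and g_legendre are chosen here, and the Legendre symbol is computed via Euler's criterion.

```python
import cmath

def g(a, p):
    # Quadratic Gauss sum g(a; p): sum of zeta_p^(a*n^2) over n = 0, ..., p-1
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(zeta ** (a * n * n % p) for n in range(p))

def g_legendre(a, p):
    # Equivalent form: sum of (1 + (n|p)) * zeta_p^(a*n), with (n|p) the Legendre symbol
    zeta = cmath.exp(2j * cmath.pi / p)
    def legendre(n):
        # Euler's criterion: n^((p-1)/2) mod p is 1 for residues, p-1 for non-residues
        if n % p == 0:
            return 0
        r = pow(n, (p - 1) // 2, p)
        return 1 if r == 1 else -1
    return sum((1 + legendre(n)) * zeta ** (a * n % p) for n in range(p))

# The two expressions agree for small odd primes p and arbitrary a
for p in (3, 5, 7, 11):
    for a in (1, 2, 3):
        assert cmath.isclose(g(a, p), g_legendre(a, p), abs_tol=1e-9)
```

Note that for a divisible by p both expressions reduce to p, as in the text above.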
== Properties ==&lt;br /&gt;
&lt;br /&gt;
* The value of the Gauss sum is an [[algebraic integer]] in the {{mvar|p}}th [[cyclotomic field]] &amp;lt;math&amp;gt;\mathbb{Q}(\zeta_p)&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The evaluation of the Gauss sum for an integer {{mvar|a}} not divisible by a prime {{math|&#039;&#039;p&#039;&#039; &amp;gt; 2}} can be reduced to the case {{math|&#039;&#039;a&#039;&#039; {{=}} 1}}:&lt;br /&gt;
:: &amp;lt;math&amp;gt; g(a;p)=\left(\tfrac{a}{p}\right)g(1;p). &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The exact value of the Gauss sum for {{math|&#039;&#039;a&#039;&#039; {{=}} 1}} is given by the following formula, which in fact holds for any positive integer {{mvar|p}} (for an odd prime {{mvar|p}} only the second and fourth cases can occur):&amp;lt;ref&amp;gt;M. Murty, S. Pathak, &#039;&#039;The Mathematics Student&#039;&#039;, Vol. 86, Nos. 1-2, January-June (2017), xx-yy, ISSN 0025-5742, https://mast.queensu.ca/~murty/quadratic2.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
:: &amp;lt;math&amp;gt; g(1;p) =\sum_{n=0}^{p-1}e^\frac{2\pi in^2}{p}=&lt;br /&gt;
\begin{cases} &lt;br /&gt;
(1+i)\sqrt{p} &amp;amp; \text{if}\ p\equiv 0 \pmod 4, \\ &lt;br /&gt;
\sqrt{p} &amp;amp; \text{if}\ p\equiv 1\pmod 4, \\ &lt;br /&gt;
0 &amp;amp; \text{if}\ p \equiv 2 \pmod 4, \\&lt;br /&gt;
i\sqrt{p} &amp;amp; \text{if}\ p\equiv 3\pmod 4. &lt;br /&gt;
\end{cases}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
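The four cases can be verified directly. In this sketch (an illustration; g1 and predicted are names chosen here) the modulus is allowed to be any positive integer, for which Gauss's evaluation also holds:

```python
import cmath
import math

def g1(p):
    # g(1; p): sum of exp(2*pi*i*n^2/p) over n = 0, ..., p-1
    return sum(cmath.exp(2j * cmath.pi * n * n / p) for n in range(p))

def predicted(p):
    # Gauss's four-case evaluation, keyed on p mod 4
    r = math.sqrt(p)
    return {0: (1 + 1j) * r, 1: r, 2: 0, 3: 1j * r}[p % 4]

for p in range(1, 40):
    assert cmath.isclose(g1(p), predicted(p), abs_tol=1e-8)
```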
; Remark&lt;br /&gt;
In fact, the identity&lt;br /&gt;
: &amp;lt;math&amp;gt;g(1;p)^2=\left(\tfrac{-1}{p}\right)p&amp;lt;/math&amp;gt;&lt;br /&gt;
was easy to prove and led to one of Gauss&#039;s [[proofs of quadratic reciprocity]]. However, the determination of the &#039;&#039;sign&#039;&#039; of the Gauss sum turned out to be considerably more difficult: Gauss could only establish it after several years&#039; work. Later, [[Peter Gustav Lejeune Dirichlet|Dirichlet]], [[Leopold Kronecker|Kronecker]], [[Issai Schur|Schur]] and other mathematicians found different proofs.&lt;br /&gt;
&lt;br /&gt;
== Generalized quadratic Gauss sums ==&lt;br /&gt;
&lt;br /&gt;
Let {{math|&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, &#039;&#039;c&#039;&#039;}} be [[natural numbers]]. The &#039;&#039;&#039;generalized quadratic Gauss sum&#039;&#039;&#039; {{math|&#039;&#039;G&#039;&#039;(&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, &#039;&#039;c&#039;&#039;)}} is defined by&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;G(a,b,c)=\sum_{n=0}^{c-1} e^{2\pi i\frac{a n^2+bn}{c}}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The classical quadratic Gauss sum is the sum {{math|&#039;&#039;g&#039;&#039;(&#039;&#039;a&#039;&#039;; &#039;&#039;p&#039;&#039;) {{=}} &#039;&#039;G&#039;&#039;(&#039;&#039;a&#039;&#039;, 0, &#039;&#039;p&#039;&#039;)}}.&lt;br /&gt;
&lt;br /&gt;
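A short sketch (illustrative, not from the article) makes the relation between the generalized and classical sums concrete; the function names are chosen here:

```python
import cmath

def G(a, b, c):
    # Generalized quadratic Gauss sum G(a, b, c)
    return sum(cmath.exp(2j * cmath.pi * (a * n * n + b * n) / c) for n in range(c))

def g(a, p):
    # Classical quadratic Gauss sum g(a; p)
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(zeta ** (a * n * n % p) for n in range(p))

# The classical sum is the special case b = 0:
for p in (3, 5, 7, 11):
    for a in (1, 2, 3):
        assert cmath.isclose(g(a, p), G(a, 0, p), abs_tol=1e-9)
```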
; Properties&lt;br /&gt;
&lt;br /&gt;
*The Gauss sum {{math|&#039;&#039;G&#039;&#039;(&#039;&#039;a&#039;&#039;,&#039;&#039;b&#039;&#039;,&#039;&#039;c&#039;&#039;)}} depends only on the [[residue class]] of {{math|&#039;&#039;a&#039;&#039;}} and {{math|&#039;&#039;b&#039;&#039;}} modulo {{math|&#039;&#039;c&#039;&#039;}}.&lt;br /&gt;
*Gauss sums are [[multiplicative function|multiplicative]], i.e. given natural numbers {{math|&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, &#039;&#039;c&#039;&#039;, &#039;&#039;d&#039;&#039;}} with {{math|[[greatest common divisor|gcd]](&#039;&#039;c&#039;&#039;, &#039;&#039;d&#039;&#039;) {{=}} 1}} one has&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;G(a,b,cd)=G(ac,b,d)G(ad,b,c).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:This is a direct consequence of the [[Chinese remainder theorem]].&lt;br /&gt;
&lt;br /&gt;
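The multiplicative property can be confirmed numerically for small coprime moduli. This is a sketch for illustration (the names are chosen here, and arbitrary b is allowed, as in the statement above):

```python
import cmath
import math

def G(a, b, c):
    # Generalized quadratic Gauss sum G(a, b, c)
    return sum(cmath.exp(2j * cmath.pi * (a * n * n + b * n) / c) for n in range(c))

# Verify G(a, b, c*d) = G(a*c, b, d) * G(a*d, b, c) for gcd(c, d) = 1
for (a, b, c, d) in [(1, 0, 3, 5), (2, 1, 4, 9), (3, 2, 5, 7)]:
    assert math.gcd(c, d) == 1
    assert cmath.isclose(G(a, b, c * d), G(a * c, b, d) * G(a * d, b, c), abs_tol=1e-8)
```

The underlying Chinese-remainder substitution n = c·n₂ + d·n₁ makes the cross term an integer multiple of cd, which is why the sum factors.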
*One has {{math|&#039;&#039;G&#039;&#039;(&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, &#039;&#039;c&#039;&#039;) {{=}} 0}} if {{math|gcd(&#039;&#039;a&#039;&#039;, &#039;&#039;c&#039;&#039;) &amp;gt; 1}}, except if {{math|gcd(&#039;&#039;a&#039;&#039;, &#039;&#039;c&#039;&#039;)}} divides {{math|&#039;&#039;b&#039;&#039;}}, in which case one has&lt;br /&gt;
::&amp;lt;math&amp;gt;G(a,b,c)= \gcd(a,c) \cdot G\left(\frac{a}{\gcd(a,c)},\frac{b}{\gcd(a,c)},\frac{c}{\gcd(a,c)}\right)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
:Thus in the evaluation of quadratic Gauss sums one may always assume {{math|gcd(&#039;&#039;a&#039;&#039;, &#039;&#039;c&#039;&#039;) {{=}} 1}}.&lt;br /&gt;
&lt;br /&gt;
*Let {{math|&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, &#039;&#039;c&#039;&#039;}} be integers with {{math|&#039;&#039;ac&#039;&#039; ≠ 0}} and {{math|&#039;&#039;ac&#039;&#039; + &#039;&#039;b&#039;&#039;}} even. One has the following analogue of the [[quadratic reciprocity]] law for (even more general) Gauss sums&amp;lt;ref&amp;gt;Theorem 1.2.2 in B. C. Berndt, R. J. Evans, K. S. Williams, &#039;&#039;Gauss and Jacobi Sums&#039;&#039;, John Wiley and Sons, (1998).&amp;lt;/ref&amp;gt;&lt;br /&gt;
::&amp;lt;math&amp;gt;\sum_{n=0}^{|c|-1} e^{\pi i \frac{a n^2+bn}{c}} = \left|\frac{c}{a}\right|^\frac12 e^{\pi i \frac{|ac|-b^2}{4ac}} \sum_{n=0}^{|a|-1} e^{-\pi i \frac{c n^2+b n}{a}}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*Define&lt;br /&gt;
::&amp;lt;math&amp;gt; \varepsilon_m = \begin{cases} 1 &amp;amp; \text{if}\ m\equiv 1\pmod 4 \\ i &amp;amp; \text{if}\ m\equiv 3\pmod 4 \end{cases}&amp;lt;/math&amp;gt;&lt;br /&gt;
:for every odd integer {{math|&#039;&#039;m&#039;&#039;}}. The values of Gauss sums with {{math|&#039;&#039;b&#039;&#039; {{=}} 0}} and {{math|gcd(&#039;&#039;a&#039;&#039;, &#039;&#039;c&#039;&#039;) {{=}} 1}} are explicitly given by&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;G(a,c) = G(a,0,c) =&lt;br /&gt;
\begin{cases}&lt;br /&gt;
0 &amp;amp; \text{if}\ c\equiv 2\pmod 4 \\ &lt;br /&gt;
\varepsilon_c \sqrt{c} \left(\dfrac{a}{c}\right) &amp;amp; \text{if}\ c\equiv 1\pmod 2 \\ &lt;br /&gt;
(1+i) \varepsilon_a^{-1} \sqrt{c} \left(\dfrac{c}{a}\right) &amp;amp; \text{if}\ c\equiv 0\pmod 4.&lt;br /&gt;
\end{cases}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Here {{math|({{sfrac|&#039;&#039;a&#039;&#039;|&#039;&#039;c&#039;&#039;}})}} is the [[Jacobi symbol]]. This is the famous formula of [[Carl Friedrich Gauss]].&lt;br /&gt;
&lt;br /&gt;
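Gauss's explicit formula can be checked against direct summation. The sketch below is illustrative: the jacobi helper (a standard Jacobi-symbol algorithm, not part of the article's text) and the other names are assumptions chosen here.

```python
import cmath
import math

def jacobi(a, n):
    # Jacobi symbol (a|n) for odd positive n, by the standard reduction algorithm
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a  # quadratic reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def G(a, c):
    # G(a, c) = G(a, 0, c), summed directly
    return sum(cmath.exp(2j * cmath.pi * a * n * n / c) for n in range(c))

def eps(m):
    # epsilon_m for odd m, as defined in the text
    return 1 if m % 4 == 1 else 1j

def predicted(a, c):
    if c % 4 == 2:
        return 0
    if c % 2 == 1:
        return eps(c) * math.sqrt(c) * jacobi(a, c)
    return (1 + 1j) * math.sqrt(c) * jacobi(c, a) / eps(a)  # c divisible by 4, a odd

for c in (3, 5, 6, 7, 9, 12, 15):
    for a in (1, 5, 7, 11):
        if math.gcd(a, c) == 1:
            assert cmath.isclose(G(a, c), predicted(a, c), abs_tol=1e-8)
```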
* For {{math|&#039;&#039;b&#039;&#039; &amp;gt; 0}} the Gauss sums can easily be computed by [[completing the square]] in most cases. However, this fails in some cases (for example, {{math|&#039;&#039;c&#039;&#039;}} even and {{math|&#039;&#039;b&#039;&#039;}} odd), which can nevertheless be evaluated relatively easily by other means. For example, if {{math|&#039;&#039;c&#039;&#039;}} is odd and {{math|gcd(&#039;&#039;a&#039;&#039;, &#039;&#039;c&#039;&#039;) {{=}} 1}} one has&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;G(a,b,c) =  \varepsilon_c \sqrt{c} \cdot \left(\frac{a}{c}\right) e^{-2\pi i \frac{\psi(a) b^2}{c}},&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:where {{math|&#039;&#039;ψ&#039;&#039;(&#039;&#039;a&#039;&#039;)}} is some number with {{math|4&#039;&#039;ψ&#039;&#039;(&#039;&#039;a&#039;&#039;)&#039;&#039;a&#039;&#039; ≡ 1 (mod &#039;&#039;c&#039;&#039;)}}. As another example, if 4 divides {{mvar|c}} and {{mvar|b}} is odd and as always {{math|gcd(&#039;&#039;a&#039;&#039;, &#039;&#039;c&#039;&#039;) {{=}} 1}}, then {{math|&#039;&#039;G&#039;&#039;(&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, &#039;&#039;c&#039;&#039;) {{=}} 0}}. This can, for example, be proved as follows: because of the multiplicative property of Gauss sums, we only have to show that {{math|&#039;&#039;G&#039;&#039;(&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, 2&amp;lt;sup&amp;gt;&#039;&#039;m&#039;&#039;&amp;lt;/sup&amp;gt;) {{=}} 0}} if {{math|&#039;&#039;m&#039;&#039; &amp;gt; 1}} and {{math|&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;}} are odd. If {{mvar|b}} is odd then {{math|&#039;&#039;an&#039;&#039;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; + &#039;&#039;bn&#039;&#039;}} is even for all {{math|0 ≤ &#039;&#039;n&#039;&#039; ≤ &#039;&#039;c&#039;&#039; − 1}}. For every {{mvar|q}}, the equation {{math|&#039;&#039;an&#039;&#039;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; + &#039;&#039;bn&#039;&#039; + &#039;&#039;q&#039;&#039; {{=}} 0}} has at most two solutions in &amp;lt;math&amp;gt;\mathbb{Z}/2^m\mathbb{Z}&amp;lt;/math&amp;gt;: indeed, if &amp;lt;math&amp;gt;n_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_2&amp;lt;/math&amp;gt; are two solutions of the same parity, then &amp;lt;math&amp;gt;(n_1 - n_2)(a(n_1 + n_2) + b) = \alpha 2^m&amp;lt;/math&amp;gt; for some integer &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt;, but &amp;lt;math&amp;gt;a(n_1 + n_2) + b&amp;lt;/math&amp;gt; is odd, hence &amp;lt;math&amp;gt;n_1 \equiv n_2 \pmod{2^m}&amp;lt;/math&amp;gt;. By a counting argument, {{math|&#039;&#039;an&#039;&#039;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; + &#039;&#039;bn&#039;&#039;}} therefore runs through every even residue class modulo {{mvar|c}} exactly two times. The [[geometric sum]] formula then shows that {{math|&#039;&#039;G&#039;&#039;(&#039;&#039;a&#039;&#039;, &#039;&#039;b&#039;&#039;, 2&amp;lt;sup&amp;gt;&#039;&#039;m&#039;&#039;&amp;lt;/sup&amp;gt;) {{=}} 0}}.&lt;br /&gt;
&lt;br /&gt;
*If {{mvar|c}} is an odd [[square-free integer]] and {{math|gcd(&#039;&#039;a&#039;&#039;, &#039;&#039;c&#039;&#039;) {{=}} 1}}, then&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;G(a,0,c) = \sum_{n=0}^{c-1} \left(\frac{n}{c}\right) e^\frac{2\pi i a n}{c}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:If {{mvar|c}} is not squarefree then the right side vanishes while the left side does not. Often the right sum is also called a quadratic Gauss sum.&lt;br /&gt;
&lt;br /&gt;
*Another useful formula&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;math&amp;gt;G\left(n,p^k\right) = p\cdot G\left(n,p^{k-2}\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:holds for {{math|&#039;&#039;k&#039;&#039; ≥ 2}} and an odd prime number {{mvar|p}}, and for {{math|&#039;&#039;k&#039;&#039; ≥ 4}} and {{math|&#039;&#039;p&#039;&#039; {{=}} 2}}.&lt;br /&gt;
&lt;br /&gt;
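The prime-power reduction formula is easy to confirm by direct summation; this sketch (illustrative names, odd primes only) checks it for small p and k:

```python
import cmath

def G(a, c):
    # G(a, c) = G(a, 0, c), summed directly
    return sum(cmath.exp(2j * cmath.pi * a * m * m / c) for m in range(c))

# Check G(n, p^k) = p * G(n, p^(k-2)) for odd primes p and k at least 2
for p in (3, 5):
    for k in (2, 3, 4):
        for n in (1, 2):
            assert cmath.isclose(G(n, p ** k), p * G(n, p ** (k - 2)), abs_tol=1e-7)
```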
==See also==&lt;br /&gt;
&lt;br /&gt;
*[[Gauss sum]]&lt;br /&gt;
*[[Gaussian period]]&lt;br /&gt;
*[[Kummer sum]]&lt;br /&gt;
*[[Landsberg–Schaar relation]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
*{{cite book | last1 = Ireland | last2 = Rosen | title = A Classical Introduction to Modern Number Theory | publisher = Springer-Verlag | year = 1990 | isbn=0-387-97329-X }}&lt;br /&gt;
*{{cite book | first1 = Bruce C. | last1 = Berndt | first2 = Ronald J. | last2 = Evans | first3 = Kenneth S. | last3 = Williams | title = Gauss and Jacobi Sums | publisher = Wiley and Sons | year = 1998 | isbn=0-471-12807-4 }}&lt;br /&gt;
*{{cite book | first1 = Henryk | last1 = Iwaniec | first2 = Emmanuel | last2 = Kowalski | title = Analytic number theory | publisher = American Mathematical Society | year = 2004 | isbn=0-8218-3633-1}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cyclotomic fields]]&lt;/div&gt;</summary>
		<author><name>176.61.2.76</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Microsoft_Write&amp;diff=2333956</id>
		<title>Microsoft Write</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Microsoft_Write&amp;diff=2333956"/>
		<updated>2025-05-31T17:34:20Z</updated>

		<summary type="html">&lt;p&gt;176.61.2.76: /* How to open Microsoft Write documents (.wri) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Short description|Basic word processor formerly included with Microsoft Windows}}&lt;br /&gt;
{{Distinguish|Windows Live Writer}}&lt;br /&gt;
{{Redirect|write.exe|the version included since Windows 95|WordPad}}&lt;br /&gt;
{{Use mdy dates|date=October 2018}}&lt;br /&gt;
{{Infobox software&lt;br /&gt;
| name = Microsoft Write&lt;br /&gt;
| logo = Windows Write icon.png&lt;br /&gt;
| logo_size = x64px&lt;br /&gt;
| screenshot = Windows Write.png&lt;br /&gt;
| screenshot_size = 300px&lt;br /&gt;
| caption = Screenshot of Windows Write&lt;br /&gt;
| developer = [[Microsoft]]&lt;br /&gt;
| released = 1985&lt;br /&gt;
| replaced_by = [[WordPad]]&lt;br /&gt;
| operating system = [[Microsoft Windows]]&lt;br /&gt;
| genre = [[Word processor]]&lt;br /&gt;
}}&lt;br /&gt;
&#039;&#039;&#039;Microsoft Write&#039;&#039;&#039; is a basic [[word processor]]&amp;lt;ref&amp;gt;{{Cite web |last=Nistor |first=Codrut |date=2024-10-08 |title=Another one bites the dust: Microsoft buries WordPad, but there is an afterlife |url=https://www.notebookcheck.net/Another-one-bites-the-dust-Microsoft-buries-WordPad-but-there-is-an-afterlife.898769.0.html |access-date=2024-12-06 |website=Notebookcheck |language=en}}&amp;lt;/ref&amp;gt; included with [[Windows 1.0]]&amp;lt;ref&amp;gt;{{Cite web |last=Gilang |first=Rafly |date=2023-11-20 |title=ON THIS DAY: Windows 1.0, Microsoft&#039;s first major release and longest-supported OS to this day, launched to the market |url=https://mspoweruser.com/windows-1-0-released-on-this-day-november-20-1985/ |access-date=2024-12-06 |website=MSPoweruser |language=en-US}}&amp;lt;/ref&amp;gt; and later, until [[Windows NT 3.51]]. Throughout its lifespan, it was minimally updated. &amp;quot;Microsoft Write&amp;quot; was also the name of a commercial retail release of [[Microsoft Word]] for the [[Apple Macintosh]] and [[Atari ST]], which is otherwise separate from this program.&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;{{Cite web |date=2024-12-05 |title=A quick look back at WordPad, the free word processor that Microsoft just killed |url=https://www.neowin.net/news/a-quick-look-back-at-wordpad-the-free-word-processor-that-microsoft-just-killed/ |access-date=2024-12-06 |website=Neowin |language=en}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Early versions of Write only work with Write Document (.wri) files, which are a subset of the [[Rich Text Format]] (RTF).&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt; After [[Windows 3.0]], Write became capable of reading and composing early [[Doc (computing)|Word Document (.doc)]] files. With Windows 3.1, Write became [[Object Linking and Embedding|OLE]] capable. In [[Windows 95]], Write was replaced with [[WordPad]];&amp;lt;ref&amp;gt;{{Cite web |last=Rashid |first=Dua |date=2024-01-06 |title=RIP Microsoft WordPad. You Will Be Missed |url=https://gizmodo.com/microsoft-wordpad-gone-windows-11-1851144100 |access-date=2024-12-06 |website=Gizmodo |language=en-US}}&amp;lt;/ref&amp;gt; attempting to open Write from the Windows folder will open WordPad instead. The executable for Microsoft Write (&amp;lt;code&amp;gt;write.exe&amp;lt;/code&amp;gt;) still remains in later versions of Windows; however, it is simply a [[Method stub|compatibility stub]] that launches WordPad.&lt;br /&gt;
&lt;br /&gt;
Being a word processor, Write offers document formatting features that are not found in [[Notepad (Windows)|Notepad]] (a simple [[text editor]]), such as a choice of font, text decorations, and paragraph indentation for different parts of the document. Unlike versions of WordPad before Windows 7, Write could [[Justification (typesetting)|justify a paragraph]]. Write is comparable to early versions of [[MacWrite]].&lt;br /&gt;
&lt;br /&gt;
== How to open Microsoft Write documents (.wri) ==&lt;br /&gt;
[[LibreOffice]] 5.1 and newer releases can open most versions of Microsoft Write documents (.wri). After opening a Write document, it can be saved in OpenDocument Format, the default file format for LibreOffice, or in another file format. LibreOffice opens most Microsoft Write documents by means of an import filter called libwps, which is included with LibreOffice by default.&amp;lt;ref&amp;gt;{{Cite web |date=2016-02-07 |title=LibreOffice 5.1: Release Notes |url=https://wiki.documentfoundation.org/ReleaseNotes/5.1 |access-date=2024-12-31 |website=The Document Foundation}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |title=Microsoft Write |url=http://justsolve.archiveteam.org/wiki/Microsoft_Write |access-date=2024-12-31 |website=justsolve}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |title=Page Redirection |url=http://libwps.sourceforge.net/ |url-status=live |archive-url=https://web.archive.org/web/20160719131221/http://libwps.sourceforge.net/ |archive-date=19 July 2016 |access-date=21 July 2016}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Current Microsoft applications cannot open Microsoft Write documents. Microsoft stopped shipping Write when Windows 95 was introduced in {{Start date|1995}}. Write was replaced with Microsoft [[WordPad]], which a few years later dropped support for the Write document (.wri) formats; this occurred when Windows XP Service Pack 2 shipped in {{Start date|2004}}.&amp;lt;ref&amp;gt;{{Cite web |date=2021-12-15 |title=Opening Windows Write (.WRI) files on modern versions of Windows with CWordpad |url=https://www.toughdev.com/content/2021/12/opening-windows-write-wri-files-on-modern-versions-of-windows/ |access-date=2024-12-31 |website=ToughDev}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[List of word processor programs]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
{{Windows Components}}&lt;br /&gt;
{{Word processors}}&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
[[Category:1985 software]]&lt;br /&gt;
[[Category:Atari ST software]]&lt;br /&gt;
[[Category:Discontinued Windows components]]&lt;br /&gt;
[[Category:Windows components|Write]]&lt;br /&gt;
[[Category:Windows word processors]]&lt;/div&gt;</summary>
		<author><name>176.61.2.76</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Gram_matrix&amp;diff=584523</id>
		<title>Gram matrix</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Gram_matrix&amp;diff=584523"/>
		<updated>2025-04-18T12:03:33Z</updated>

		<summary type="html">&lt;p&gt;176.61.2.76: /* Gram determinant */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{short description|Matrix of inner products of a set of vectors}}&lt;br /&gt;
In [[linear algebra]], the &#039;&#039;&#039;Gram matrix&#039;&#039;&#039; (or &#039;&#039;&#039;Gramian matrix&#039;&#039;&#039;, &#039;&#039;&#039;Gramian&#039;&#039;&#039;) of a set of vectors &amp;lt;math&amp;gt;v_1,\dots, v_n&amp;lt;/math&amp;gt; in an [[inner product space]] is the [[Hermitian matrix]] of [[inner product]]s, whose entries are given by the [[inner product]] &amp;lt;math&amp;gt;G_{ij} = \left\langle v_i, v_j \right\rangle&amp;lt;/math&amp;gt;.&amp;lt;ref name=&amp;quot;HJ-7.2.10&amp;quot;&amp;gt;{{harvnb|Horn|Johnson|2013|p=441}}, p.441, Theorem 7.2.10&amp;lt;/ref&amp;gt; If the vectors &amp;lt;math&amp;gt;v_1,\dots, v_n&amp;lt;/math&amp;gt; are the columns of matrix &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; then the Gram matrix is &amp;lt;math&amp;gt;X^\dagger X&amp;lt;/math&amp;gt; in the general case that the vector coordinates are complex numbers, which simplifies to &amp;lt;math&amp;gt;X^\top X&amp;lt;/math&amp;gt; for the case that the vector coordinates are real numbers.&lt;br /&gt;
&lt;br /&gt;
An important application is to compute [[linear independence]]: a set of vectors are linearly independent if and only if the [[#Gram determinant|Gram determinant]] (the [[determinant]] of the Gram matrix) is non-zero.&lt;br /&gt;
&lt;br /&gt;
It is named after [[Jørgen Pedersen Gram]].&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
For finite-dimensional real vectors in &amp;lt;math&amp;gt;\mathbb{R}^n&amp;lt;/math&amp;gt; with the usual Euclidean [[dot product]], the Gram matrix is &amp;lt;math&amp;gt;G = V^\top V&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; is a matrix whose columns are the vectors &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V^\top&amp;lt;/math&amp;gt; is its [[transpose]] whose rows are the vectors &amp;lt;math&amp;gt;v_k^\top&amp;lt;/math&amp;gt;. For [[complex number|complex]] vectors in &amp;lt;math&amp;gt;\mathbb{C}^n&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;G = V^\dagger V&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;V^\dagger&amp;lt;/math&amp;gt; is the [[conjugate transpose]] of &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
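As a concrete illustration of the finite-dimensional case (a minimal Python sketch, not from the article; the name gram is chosen here), the Gram matrix of complex vectors can be computed entry by entry, conjugating the first argument of each inner product:

```python
def gram(vectors):
    # G[i][j] is the inner product of v_i and v_j, conjugate-linear in the first slot
    return [[sum(x.conjugate() * y for x, y in zip(vi, vj)) for vj in vectors]
            for vi in vectors]

v1 = [1, 1j]
v2 = [1, -1j]
G = gram([v1, v2])
# v1 and v2 are orthogonal with squared norm 2, so G is 2 times the identity matrix
```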
Given [[square-integrable function]]s &amp;lt;math&amp;gt;\{\ell_i(\cdot),\, i = 1,\dots,n\}&amp;lt;/math&amp;gt; on the interval &amp;lt;math&amp;gt;\left[t_0, t_f\right]&amp;lt;/math&amp;gt;, the Gram matrix &amp;lt;math&amp;gt;G = \left[G_{ij}\right]&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;G_{ij} = \int_{t_0}^{t_f} \ell_i^*(\tau)\ell_j(\tau)\, d\tau. &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\ell_i^*(\tau)&amp;lt;/math&amp;gt; is the [[complex conjugate]] of &amp;lt;math&amp;gt;\ell_i(\tau)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For any [[bilinear form]] &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; on a [[finite-dimensional]] [[vector space]] over any [[Field (mathematics)|field]] we can define a Gram matrix &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; attached to a set of vectors &amp;lt;math&amp;gt;v_1, \dots, v_n&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;G_{ij} = B\left(v_i, v_j\right)&amp;lt;/math&amp;gt;. The matrix will be symmetric if the bilinear form &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is symmetric.&lt;br /&gt;
&lt;br /&gt;
===Applications===&lt;br /&gt;
* In [[Riemannian geometry]], given an embedded &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;-dimensional [[Riemannian manifold]] &amp;lt;math&amp;gt;M\subset \mathbb{R}^n&amp;lt;/math&amp;gt; and a parametrization &amp;lt;math&amp;gt;\phi: U\to M&amp;lt;/math&amp;gt; for {{nowrap|&amp;lt;math&amp;gt;(x_1, \ldots, x_k)\in U\subset\mathbb{R}^k&amp;lt;/math&amp;gt;,}} the volume form &amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\omega = \sqrt{\det G}\ dx_1 \cdots dx_k,\quad G = \left[\left\langle \frac{\partial\phi}{\partial x_i},\frac{\partial\phi}{\partial x_j}\right\rangle\right].&amp;lt;/math&amp;gt; This generalizes the classical surface integral of a parametrized surface &amp;lt;math&amp;gt;\phi:U\to S\subset \mathbb{R}^3&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;(x, y)\in U\subset\mathbb{R}^2&amp;lt;/math&amp;gt;: &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\int_S f\ dA = \iint_U f(\phi(x, y))\, \left|\frac{\partial\phi}{\partial x}\,{\times}\,\frac{\partial\phi}{\partial y}\right|\, dx\, dy.&amp;lt;/math&amp;gt;&lt;br /&gt;
* If the vectors are centered [[random variable]]s, the Gramian is approximately proportional to the &#039;&#039;&#039;[[covariance matrix]]&#039;&#039;&#039;, with the scaling determined by the number of elements in the vector.&lt;br /&gt;
* In [[quantum chemistry]], the Gram matrix of a set of [[basis vectors]] is the &#039;&#039;&#039;[[overlap matrix]]&#039;&#039;&#039;.&lt;br /&gt;
* In [[control theory]] (or more generally [[systems theory]]), the &#039;&#039;&#039;[[controllability Gramian]]&#039;&#039;&#039; and &#039;&#039;&#039;[[observability Gramian]]&#039;&#039;&#039; determine properties of a linear system.&lt;br /&gt;
* Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp.&amp;amp;nbsp;79–94).&lt;br /&gt;
* In the [[finite element method]], the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace.&lt;br /&gt;
* In [[machine learning]], [[kernel function]]s are often represented as Gram matrices.&amp;lt;ref&amp;gt;{{cite journal |last1=Lanckriet |first1=G. R. G. |first2=N. |last2=Cristianini |first3=P. |last3=Bartlett |first4=L. E. |last4=Ghaoui |first5=M. I. |last5=Jordan |title=Learning the kernel matrix with semidefinite programming |journal=Journal of Machine Learning Research |volume=5 |year=2004 |pages=27–72 [p. 29] |url=https://dl.acm.org/citation.cfm?id=894170 }}&amp;lt;/ref&amp;gt; (Also see [[kernel principal component analysis|kernel PCA]])&lt;br /&gt;
* Since the Gram matrix over the reals is a [[symmetric matrix]], it is [[diagonalizable]] and its [[eigenvalues]] are non-negative. The eigendecomposition of the Gram matrix &amp;lt;math&amp;gt;V^\top V&amp;lt;/math&amp;gt; yields the right singular vectors and squared singular values of &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;, and is thus closely related to the [[singular value decomposition]] of &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Properties==&lt;br /&gt;
===Positive-semidefiniteness===&lt;br /&gt;
The Gram matrix is [[symmetric matrix|symmetric]] when the inner product is real-valued; it is [[Hermitian matrix|Hermitian]] in the general, complex case, by the definition of an [[inner product]].&lt;br /&gt;
&lt;br /&gt;
The Gram matrix is [[Positive-semidefinite matrix|positive semidefinite]], and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:&lt;br /&gt;
: &amp;lt;math&amp;gt;&lt;br /&gt;
  x^\dagger \mathbf{G} x =&lt;br /&gt;
  \sum_{i,j}x_i^* x_j\left\langle v_i, v_j \right\rangle =&lt;br /&gt;
  \sum_{i,j}\left\langle x_i v_i, x_j v_j \right\rangle =&lt;br /&gt;
  \biggl\langle \sum_i x_i v_i, \sum_j x_j v_j \biggr\rangle =&lt;br /&gt;
  \biggl\| \sum_i x_i v_i \biggr\|^2 \geq 0 .&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first equality follows from the definition of matrix multiplication, the second and third from the sesquilinearity of the [[inner product]] (bilinearity, in the real case), and the last from the positive definiteness of the inner product.&lt;br /&gt;
Note that this also shows that the Gramian matrix is positive definite if and only if the vectors &amp;lt;math&amp;gt; v_i &amp;lt;/math&amp;gt; are linearly independent (that is, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;\sum_i x_i v_i \neq 0&amp;lt;/math&amp;gt; for all nonzero &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;).&amp;lt;ref name=&amp;quot;HJ-7.2.10&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
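The derivation above can be checked numerically. In this sketch (illustrative; the names gram, quad, comb are chosen here), the quadratic form x†Gx is compared with the squared norm of Σᵢ xᵢvᵢ:

```python
import cmath

def gram(vectors):
    # G[i][j] is the inner product of v_i and v_j, conjugating the first argument
    return [[sum(p.conjugate() * q for p, q in zip(vi, vj)) for vj in vectors]
            for vi in vectors]

vs = [[1, 2, 0], [0, 1, 1j]]          # two vectors in C^3
x = [1 + 1j, -2]                      # arbitrary complex coefficients
G = gram(vs)
# The quadratic form x^dagger G x ...
quad = sum(x[i].conjugate() * G[i][j] * x[j] for i in range(2) for j in range(2))
# ... equals the squared norm of the linear combination sum_i x_i v_i
comb = [sum(x[i] * vs[i][k] for i in range(2)) for k in range(3)]
norm_sq = sum(abs(z) ** 2 for z in comb)
assert cmath.isclose(quad, norm_sq, abs_tol=1e-12)  # hence real and nonnegative
```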
===Finding a vector realization===&lt;br /&gt;
{{See also|Positive definite matrix#Decomposition}}&lt;br /&gt;
Given any positive semidefinite matrix &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt;, one can decompose it as:&lt;br /&gt;
: &amp;lt;math&amp;gt;M = B^\dagger B&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;B^\dagger&amp;lt;/math&amp;gt; is the [[conjugate transpose]] of &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; (or &amp;lt;math&amp;gt;M = B^\textsf{T} B&amp;lt;/math&amp;gt; in the real case).&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;k \times n&amp;lt;/math&amp;gt; matrix, where &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; is the [[matrix rank|rank]] of &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt;. Various ways to obtain such a decomposition include computing the [[Cholesky decomposition]] or taking the [[square root of a matrix|non-negative square root]] of &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The columns &amp;lt;math&amp;gt;b^{(1)}, \dots, b^{(n)}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; can be seen as &#039;&#039;n&#039;&#039; vectors in &amp;lt;math&amp;gt;\mathbb{C}^k&amp;lt;/math&amp;gt; (or &#039;&#039;k&#039;&#039;-dimensional Euclidean space &amp;lt;math&amp;gt;\mathbb{R}^k&amp;lt;/math&amp;gt;, in the real case). Then&lt;br /&gt;
: &amp;lt;math&amp;gt;M_{ij} = b^{(i)} \cdot b^{(j)}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the [[dot product]] &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;a \cdot b = \sum_{\ell=1}^k a_\ell^* b_\ell&amp;lt;/math&amp;gt; is the usual inner product on &amp;lt;math&amp;gt;\mathbb{C}^k&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thus a [[Hermitian matrix]] &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; is positive semidefinite if and only if it is the Gram matrix of some vectors &amp;lt;math&amp;gt;b^{(1)}, \dots, b^{(n)}&amp;lt;/math&amp;gt;. Such vectors are called a &#039;&#039;&#039;vector realization&#039;&#039;&#039; of {{nowrap|&amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt;.}} The infinite-dimensional analog of this statement is [[Mercer&#039;s theorem]].&lt;br /&gt;
&lt;br /&gt;
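A vector realization can be computed concretely. The following sketch assumes a real positive definite M, where a plain Cholesky factorization applies (the semidefinite case would need pivoting or a matrix square root); since M = L·Lᵀ means M_ij is the dot product of rows i and j of L, the rows of L serve as the realizing vectors:

```python
import math

def cholesky(M):
    # Lower-triangular L with M = L * L^T, for symmetric positive definite M
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(M[i][i] - s)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

M = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky(M)                       # rows of L realize M
dots = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
assert dots == M                      # b^(i) . b^(j) recovers M[i][j]
```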
===Uniqueness of vector realizations===&lt;br /&gt;
If &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; is the Gram matrix of vectors &amp;lt;math&amp;gt;v_1,\dots,v_n&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;\mathbb{R}^k&amp;lt;/math&amp;gt; then applying any rotation or reflection of &amp;lt;math&amp;gt;\mathbb{R}^k&amp;lt;/math&amp;gt; (any [[orthogonal transformation]], that is, any [[Euclidean isometry]] preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any &amp;lt;math&amp;gt;k \times k&amp;lt;/math&amp;gt; [[orthogonal matrix]] &amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt;, the Gram matrix of &amp;lt;math&amp;gt;Q v_1,\dots, Q v_n&amp;lt;/math&amp;gt; is also {{nowrap|&amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt;.}}&lt;br /&gt;
&lt;br /&gt;
This is the only way in which two real vector realizations of &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; can differ: the vectors &amp;lt;math&amp;gt;v_1,\dots,v_n&amp;lt;/math&amp;gt; are unique up to [[orthogonal transformation]]s. In other words, the dot products &amp;lt;math&amp;gt;v_i \cdot v_j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_i \cdot w_j&amp;lt;/math&amp;gt; are equal if and only if some rigid transformation of &amp;lt;math&amp;gt;\mathbb{R}^k&amp;lt;/math&amp;gt; transforms the vectors &amp;lt;math&amp;gt;v_1,\dots,v_n&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;w_1, \dots, w_n&amp;lt;/math&amp;gt; and 0 to 0.&lt;br /&gt;
&lt;br /&gt;
The same holds in the complex case, with [[unitary transformation]]s in place of orthogonal ones.&lt;br /&gt;
That is, if the Gram matrix of vectors &amp;lt;math&amp;gt;v_1, \dots, v_n&amp;lt;/math&amp;gt; is equal to the Gram matrix of vectors &amp;lt;math&amp;gt;w_1, \dots, w_n&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;\mathbb{C}^k&amp;lt;/math&amp;gt; then there is a [[unitary matrix|unitary]] &amp;lt;math&amp;gt;k \times k&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; (meaning &amp;lt;math&amp;gt;U^\dagger U = I&amp;lt;/math&amp;gt;) such that &amp;lt;math&amp;gt;v_i = U w_i&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;i = 1, \dots, n&amp;lt;/math&amp;gt;.&amp;lt;ref&amp;gt;{{harvtxt|Horn|Johnson|2013}}, p. 452, Theorem 7.3.11&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other properties===&lt;br /&gt;
* Because &amp;lt;math&amp;gt;G = G^\dagger&amp;lt;/math&amp;gt;, it is necessarily the case that &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;G^\dagger&amp;lt;/math&amp;gt; commute.  That is, a real or complex Gram matrix &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is also a [[normal matrix]].&lt;br /&gt;
* The Gram matrix of any [[orthonormal basis]] is the identity matrix.  Equivalently, the Gram matrix of the rows or the columns of a real [[rotation matrix]] is the identity matrix.  Likewise, the Gram matrix of the rows or columns of a [[unitary matrix]] is the identity matrix.&lt;br /&gt;
* The rank of the Gram matrix of vectors in &amp;lt;math&amp;gt;\mathbb{R}^k&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbb{C}^k&amp;lt;/math&amp;gt; equals the dimension of the space [[Linear span|spanned]] by these vectors.&amp;lt;ref name=&amp;quot;HJ-7.2.10&amp;quot;/&amp;gt;&lt;br /&gt;
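&lt;br /&gt;
For example, the Gram matrix of &amp;lt;math&amp;gt;v_1 = (1, 0, 0)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_2 = (2, 0, 0)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;v_3 = (0, 1, 0)&amp;lt;/math&amp;gt; is&lt;br /&gt;
: &amp;lt;math&amp;gt;G = \begin{pmatrix} 1 &amp;amp; 2 &amp;amp; 0 \\ 2 &amp;amp; 4 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 1 \end{pmatrix},&amp;lt;/math&amp;gt;&lt;br /&gt;
which has rank 2, matching the dimension of the plane spanned by the three vectors.&lt;br /&gt;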
&lt;br /&gt;
==Gram determinant==&lt;br /&gt;
The &#039;&#039;&#039;Gram determinant&#039;&#039;&#039; or &#039;&#039;&#039;Gramian&#039;&#039;&#039; is the determinant of the Gram matrix:&lt;br /&gt;
&amp;lt;math display=block&amp;gt;\bigl|G(v_1, \dots, v_n)\bigr| = \begin{vmatrix}&lt;br /&gt;
  \langle v_1,v_1\rangle &amp;amp; \langle v_1,v_2\rangle &amp;amp;\dots &amp;amp; \langle v_1,v_n\rangle \\&lt;br /&gt;
  \langle v_2,v_1\rangle &amp;amp; \langle v_2,v_2\rangle &amp;amp;\dots &amp;amp; \langle v_2,v_n\rangle \\&lt;br /&gt;
  \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
  \langle v_n,v_1\rangle &amp;amp; \langle v_n,v_2\rangle &amp;amp;\dots &amp;amp; \langle v_n,v_n\rangle&lt;br /&gt;
\end{vmatrix}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;v_1, \dots, v_n&amp;lt;/math&amp;gt; are vectors in &amp;lt;math&amp;gt;\mathbb{R}^m&amp;lt;/math&amp;gt; then it is the square of the &#039;&#039;n&#039;&#039;-dimensional volume of the [[Parallelepiped#Parallelotope|parallelotope]] formed by the vectors. In particular, the vectors are [[Linear independence|linearly independent]] [[if and only if]] the parallelotope has nonzero &#039;&#039;n&#039;&#039;-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is [[Non-singular matrix|nonsingular]]. When {{nowrap|&#039;&#039;n&#039;&#039; &amp;gt; &#039;&#039;m&#039;&#039;}}, the determinant and volume are zero. When {{nowrap|1=&#039;&#039;n&#039;&#039; = &#039;&#039;m&#039;&#039;}}, this reduces to the standard theorem that the absolute value of the determinant of the matrix formed by &#039;&#039;n&#039;&#039; &#039;&#039;n&#039;&#039;-dimensional vectors is the &#039;&#039;n&#039;&#039;-dimensional volume of the parallelotope they span. The volume of the [[simplex]] formed by the vectors is {{math|Volume(parallelotope) / &#039;&#039;n&#039;&#039;!}}.&lt;br /&gt;
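&lt;br /&gt;
For instance, the vectors &amp;lt;math&amp;gt;v_1 = (1, 1, 0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v_2 = (0, 1, 1)&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;\mathbb{R}^3&amp;lt;/math&amp;gt; have Gram determinant&lt;br /&gt;
: &amp;lt;math&amp;gt;\bigl|G(v_1, v_2)\bigr| = \begin{vmatrix} 2 &amp;amp; 1 \\ 1 &amp;amp; 2 \end{vmatrix} = 3,&amp;lt;/math&amp;gt;&lt;br /&gt;
which equals &amp;lt;math&amp;gt;\|v_1 \times v_2\|^2 = \|(1, -1, 1)\|^2 = 3&amp;lt;/math&amp;gt;, the square of the area of the parallelogram they span.&lt;br /&gt;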
&lt;br /&gt;
When &amp;lt;math&amp;gt;v_1, \dots, v_n&amp;lt;/math&amp;gt; are linearly independent, the distance between a point &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and the linear span of &amp;lt;math&amp;gt;v_1, \dots, v_n&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\sqrt{\frac{|G(x,v_1, \dots, v_n)|}{|G(v_1, \dots, v_n)|}}&amp;lt;/math&amp;gt;.&lt;br /&gt;
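&lt;br /&gt;
For example, taking &amp;lt;math&amp;gt;v_1 = (1, 0, 0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x = (3, 4, 0)&amp;lt;/math&amp;gt; gives &amp;lt;math&amp;gt;\bigl|G(x, v_1)\bigr| = 25 \cdot 1 - 3 \cdot 3 = 16&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\bigl|G(v_1)\bigr| = 1&amp;lt;/math&amp;gt;, so the distance from &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; to the line spanned by &amp;lt;math&amp;gt;v_1&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\sqrt{16/1} = 4&amp;lt;/math&amp;gt;, the length of the component &amp;lt;math&amp;gt;(0, 4, 0)&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; orthogonal to &amp;lt;math&amp;gt;v_1&amp;lt;/math&amp;gt;.&lt;br /&gt;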
&lt;br /&gt;
Consider the moment problem: given &amp;lt;math&amp;gt;c_1, \dots, c_n \in \mathbb C&amp;lt;/math&amp;gt;, find a vector &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; such that &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;\left\langle v, v_i\right\rangle=c_i&amp;lt;/math&amp;gt;, for all &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;1 \leqslant i \leqslant n&amp;lt;/math&amp;gt;. There exists a unique solution with minimal norm:&amp;lt;ref&amp;gt;{{Cite book |last1=Garcia |first1=Stephan Ramon |last2=Mashreghi |first2=Javad |last3=Ross |first3=William T. |date=2023-01-30 |title=Operator Theory by Example |url=https://academic.oup.com/book/45766 |publisher=Oxford University Press |language=en |doi=10.1093/o|doi-broken-date=13 April 2025 }}&amp;lt;/ref&amp;gt;{{Pg|page=38}}&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;v=-\frac{1}{G\left(v_1, v_2, \ldots, v_n\right)} \det&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
0 &amp;amp; c_1 &amp;amp; c_2 &amp;amp; \cdots &amp;amp; c_n \\&lt;br /&gt;
v_1 &amp;amp; \left\langle v_1, v_1\right\rangle &amp;amp; \left\langle v_1, v_2\right\rangle &amp;amp; \cdots &amp;amp; \left\langle v_1, v_n\right\rangle \\&lt;br /&gt;
v_2 &amp;amp; \left\langle v_2, v_1\right\rangle &amp;amp; \left\langle v_2, v_2\right\rangle &amp;amp; \cdots &amp;amp; \left\langle v_2, v_n\right\rangle \\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
v_n &amp;amp; \left\langle v_n, v_1\right\rangle &amp;amp; \left\langle v_n, v_2\right\rangle &amp;amp; \cdots &amp;amp; \left\langle v_n, v_n\right\rangle&lt;br /&gt;
\end{bmatrix}&amp;lt;/math&amp;gt;The Gram determinant can also be expressed in terms of the [[exterior product]] of vectors by&lt;br /&gt;
:&amp;lt;math&amp;gt;\bigl|G(v_1, \dots, v_n)\bigr| = \| v_1 \wedge \cdots \wedge v_n\|^2.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Gram determinant therefore supplies an [[exterior product#inner product|inner product]] for the space {{tmath|{\textstyle\bigwedge}^{\!n}(V)}}. If an [[orthonormal basis]] &#039;&#039;e&#039;&#039;&amp;lt;sub&amp;gt;&#039;&#039;i&#039;&#039;&amp;lt;/sub&amp;gt;, {{nowrap|1=&#039;&#039;i&#039;&#039; = 1, 2, ..., &#039;&#039;n&#039;&#039;}} on {{tmath|V}} is given, the vectors &lt;br /&gt;
: &amp;lt;math&amp;gt; e_{i_1} \wedge \cdots \wedge e_{i_n},\quad i_1 &amp;lt; \cdots &amp;lt; i_n, &amp;lt;/math&amp;gt;&lt;br /&gt;
will constitute an orthonormal basis of &#039;&#039;n&#039;&#039;-dimensional volumes on the space {{tmath|{\textstyle\bigwedge}^{\!n}(V)}}. Then the Gram determinant &amp;lt;math&amp;gt;\bigl|G(v_1, \dots, v_n)\bigr|&amp;lt;/math&amp;gt; amounts to an &#039;&#039;n&#039;&#039;-dimensional [[Pythagorean Theorem#Sets_of_m-dimensional_objects_in_n-dimensional_space|Pythagorean Theorem]] for the volume of the parallelotope represented by &amp;lt;math&amp;gt;v_1 \wedge \cdots \wedge v_n&amp;lt;/math&amp;gt;, in terms of its projections onto the basis volumes &amp;lt;math&amp;gt;e_{i_1} \wedge \cdots \wedge e_{i_n}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When the vectors &amp;lt;math&amp;gt;v_1, \ldots, v_n \in \mathbb{R}^m&amp;lt;/math&amp;gt; are defined from the positions of points &amp;lt;math&amp;gt;p_1, \ldots, p_n&amp;lt;/math&amp;gt; relative to some reference point &amp;lt;math&amp;gt;p_{n+1}&amp;lt;/math&amp;gt;,&lt;br /&gt;
:&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;(v_1, v_2, \ldots, v_n) = (p_1 - p_{n+1}, p_2 - p_{n+1}, \ldots, p_n - p_{n+1})\,,&amp;lt;/math&amp;gt;&lt;br /&gt;
then the Gram determinant can be written as the difference of two Gram determinants,&lt;br /&gt;
:&amp;lt;math display=block&amp;gt;&lt;br /&gt;
\bigl|G(v_1, \dots, v_n)\bigr| = \bigl|G((p_1, 1), \dots, (p_{n+1}, 1))\bigr| - \bigl|G(p_1, \dots, p_{n+1})\bigr|\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where each &amp;lt;math&amp;gt;(p_j, 1)&amp;lt;/math&amp;gt; is the corresponding point &amp;lt;math&amp;gt;p_j&amp;lt;/math&amp;gt; supplemented with the coordinate value of 1 for an &amp;lt;math&amp;gt;(m+1)&amp;lt;/math&amp;gt;-st dimension.{{Citation needed|reason=This relation between Gram matrices is apparently true but needs a citation to support its [[WP:N|notability]].|date=February 2022}} Note that in the common case that {{math|1=&#039;&#039;n&#039;&#039; = &#039;&#039;m&#039;&#039;}}, the second term on the right-hand side will be zero.&lt;br /&gt;
&lt;br /&gt;
==Constructing an orthonormal basis==&lt;br /&gt;
&lt;br /&gt;
Given a set of linearly independent vectors &amp;lt;math&amp;gt;\{v_i\}&amp;lt;/math&amp;gt; with Gram matrix &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; defined by &amp;lt;math&amp;gt;G_{ij}:= \langle v_i,v_j\rangle&amp;lt;/math&amp;gt;, one can construct an orthonormal basis&lt;br /&gt;
:&amp;lt;math&amp;gt;u_i := \sum_j \bigl(G^{-1/2}\bigr)_{ji} v_j.&amp;lt;/math&amp;gt;&lt;br /&gt;
In matrix notation, &amp;lt;math&amp;gt;U = V G^{-1/2} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; has orthonormal basis vectors &amp;lt;math&amp;gt;\{u_i\}&amp;lt;/math&amp;gt; and the matrix &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; is composed of the given column vectors &amp;lt;math&amp;gt;\{v_i\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;G^{-1/2}&amp;lt;/math&amp;gt; is guaranteed to exist. Indeed, &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is Hermitian, and so can be decomposed as &amp;lt;math&amp;gt;G=WDW^\dagger&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;W&amp;lt;/math&amp;gt; a unitary matrix and &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; a real diagonal matrix. Additionally, the &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; are linearly independent if and only if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is positive definite, which implies that the diagonal entries of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; are positive. &amp;lt;math&amp;gt;G^{-1/2}&amp;lt;/math&amp;gt; is therefore uniquely defined by &amp;lt;math&amp;gt;G^{-1/2}:=WD^{-1/2}W^\dagger&amp;lt;/math&amp;gt;. One can check that these new vectors are orthonormal:&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\langle u_i,u_j \rangle&lt;br /&gt;
&amp;amp;= \sum_{i&#039;} \sum_{j&#039;} \Bigl\langle \bigl(G^{-1/2}\bigr)_{i&#039;i} v_{i&#039;},\bigl(G^{-1/2}\bigr)_{j&#039;j} v_{j&#039;} \Bigr\rangle \\[10mu]&lt;br /&gt;
&amp;amp;= \sum_{i&#039;} \sum_{j&#039;} \bigl(G^{-1/2}\bigr)_{ii&#039;} G_{i&#039;j&#039;} \bigl(G^{-1/2}\bigr)_{j&#039;j} \\[8mu]&lt;br /&gt;
&amp;amp;=  \bigl(G^{-1/2} G G^{-1/2}\bigr)_{ij} = \delta_{ij}&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
where we used &amp;lt;math&amp;gt;\bigl(G^{-1/2}\bigr)^\dagger=G^{-1/2} &amp;lt;/math&amp;gt;.&lt;br /&gt;
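&lt;br /&gt;
For example, if &amp;lt;math&amp;gt;v_1 = (2, 0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v_2 = (0, 3)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;G = \begin{pmatrix} 4 &amp;amp; 0 \\ 0 &amp;amp; 9 \end{pmatrix}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;G^{-1/2} = \begin{pmatrix} 1/2 &amp;amp; 0 \\ 0 &amp;amp; 1/3 \end{pmatrix}&amp;lt;/math&amp;gt;, so the construction yields &amp;lt;math&amp;gt;u_1 = \tfrac{1}{2} v_1 = (1, 0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;u_2 = \tfrac{1}{3} v_2 = (0, 1)&amp;lt;/math&amp;gt;. Unlike the [[Gram–Schmidt process]], this construction treats the vectors &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; symmetrically; it is sometimes called symmetric (Löwdin) orthogonalization.&lt;br /&gt;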
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Controllability Gramian]]&lt;br /&gt;
* [[Observability Gramian]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
* {{Cite book | last1=Horn | first1=Roger A. | last2=Johnson | first2=Charles R. | title=Matrix Analysis | publisher=[[Cambridge University Press]] | isbn=978-0-521-54823-6 | year=2013 |edition=2nd }}&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
* {{springer|title=Gram matrix|id=p/g044750}}&lt;br /&gt;
* &#039;&#039;[https://mathweb.rice.edu/honors-calculus Volumes of parallelograms]&#039;&#039; by Frank Jones&lt;br /&gt;
&lt;br /&gt;
{{Matrix classes}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems theory]]&lt;br /&gt;
[[Category:Matrices (mathematics)]]&lt;br /&gt;
[[Category:Determinants]]&lt;br /&gt;
[[Category:Analytic geometry]]&lt;br /&gt;
[[Category:Kernel methods for machine learning]]&lt;br /&gt;
&lt;br /&gt;
[[fr:Matrice de Gram]]&lt;/div&gt;</summary>
		<author><name>176.61.2.76</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Bitstream_Speedo_Fonts&amp;diff=6377441</id>
		<title>Bitstream Speedo Fonts</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Bitstream_Speedo_Fonts&amp;diff=6377441"/>
		<updated>2022-12-31T10:37:23Z</updated>

		<summary type="html">&lt;p&gt;176.61.2.76: /* External links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Bitstream Speedo&#039;&#039;&#039;, or &#039;&#039;&#039;Speedo&#039;&#039;&#039;, is an obsolete [[scalable font]] format created by [[Bitstream Inc]].&amp;lt;ref name=&amp;quot;def&amp;quot;&amp;gt;[https://www.pcmag.com/encyclopedia_term/0,2542,t=Speedo&amp;amp;i=51849,00.asp PCmag Speedo Definition]&amp;lt;/ref&amp;gt; Speedo was used on the [[Atari ST]] and [[Atari Falcon|Falcon]], in the [[XyWrite]] [[word processor]], and in very early versions of [[WordPerfect]] and [[Microsoft Windows]].&amp;lt;ref name=&amp;quot;support&amp;quot;&amp;gt;{{cite web|title=Support: Bitstream Speedo Fonts|url=http://www.bitstream.com:80/fonts/support/speedo_fonts.html|archiveurl=https://web.archive.org/web/20091108153822/http://www.bitstream.com/fonts/support/speedo_fonts.html|archivedate=8 November 2009|url-status=dead}}&amp;lt;/ref&amp;gt; Speedo was replaced by more popular font formats such as [[Type 1 font]]s. Before it became obsolete, Speedo was supported by the [[X Window System]] through the speedo [[Modular programming|module]], and most [[Linux distribution]]s shipped a Speedo font package; support was removed from X in the X11R7.0 release in 2005.&amp;lt;ref name=&amp;quot;x window&amp;quot;&amp;gt;[http://www.x.org/wiki/DeprecatedInX11R7/ X.Org: Deprecated in X11R7&amp;lt;!-- Bot generated title --&amp;gt;]&amp;lt;/ref&amp;gt; Speedo fonts have the [[file extension]] &#039;&#039;.spd&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
*[https://web.archive.org/web/20160513194249/http://www.netsweng.com/~anderson/talks/xclass/fonts.html X Logical Font Description]&lt;br /&gt;
*[https://web.archive.org/web/20120330123919/http://www.pcmag.com/encyclopedia_term/0,2542,t=FaceLift&amp;amp;i=42970,00.asp Bitstream FaceLift]&lt;br /&gt;
&lt;br /&gt;
[[Category:Font formats]]&lt;br /&gt;
[[Category:Typesetting]]&lt;br /&gt;
[[Category:Digital typography]]&lt;br /&gt;
&lt;br /&gt;
{{typography-stub}}&lt;/div&gt;</summary>
		<author><name>176.61.2.76</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Kato%27s_conjecture&amp;diff=2318648</id>
		<title>Kato&#039;s conjecture</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Kato%27s_conjecture&amp;diff=2318648"/>
		<updated>2022-11-18T14:39:16Z</updated>

		<summary type="html">&lt;p&gt;176.61.2.76: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Kato&#039;s conjecture&#039;&#039;&#039; is a [[mathematical problem]] named after [[mathematician]] [[Tosio Kato]], of the [[University of California, Berkeley]]. Kato initially posed the problem in 1953.&amp;lt;ref&amp;gt;{{cite journal | first=Tosio | last=Kato | title=Integration of the equation of evolution in a Banach space | journal= J. Math. Soc. Jpn. | volume=5 | year=1953 | issue=2 | pages=208–234 | mr=0058861 | doi=10.2969/jmsj/00520208| doi-access=free }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Kato asked whether the square roots of certain [[elliptic operator]]s, defined via [[functional calculus]], are [[analytic function|analytic]]. The full statement of the conjecture as given by Auscher &#039;&#039;et al.&#039;&#039; is: &amp;quot;the domain of the square root of a uniformly complex elliptic operator &amp;lt;math&amp;gt;L =-\mathrm{div} (A\nabla)&amp;lt;/math&amp;gt; with bounded measurable coefficients in &#039;&#039;&#039;R&#039;&#039;&#039;&amp;lt;sup&amp;gt;n&amp;lt;/sup&amp;gt; is the Sobolev space &#039;&#039;H&#039;&#039;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;(&#039;&#039;&#039;R&#039;&#039;&#039;&amp;lt;sup&amp;gt;n&amp;lt;/sup&amp;gt;) in any dimension with the estimate &amp;lt;math&amp;gt;||\sqrt{L}f||_{2} \sim ||\nabla f||_{2}&amp;lt;/math&amp;gt;&amp;quot;.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem remained unresolved for nearly a half-century, until in 2001 it was jointly solved in the affirmative by [[Pascal Auscher]], [[Steve Hofmann]], [[Michael Lacey (mathematician)|Michael Lacey]], [[Alan Gaius Ramsay McIntosh|Alan McIntosh]], and [[Philippe Tchamitchian]].&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;{{cite journal | first1=Pascal | last1=Auscher | first2=Steve | last2=Hofmann | first3=Michael | last3=Lacey | first4=Alan | last4=McIntosh | first5=Philippe | last5=Tchamitchian | title=The solution of the Kato square root problem for second order elliptic operators on &#039;&#039;&#039;R&#039;&#039;&#039;&amp;lt;sup&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;/sup&amp;gt; | journal=[[Annals of Mathematics]] | volume=156 | issue=2 | pages=633&amp;amp;ndash;654 | year=2002 | doi=10.2307/3597201 | jstor=3597201 | mr=1933726}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Differential operators]]&lt;br /&gt;
[[Category:Operator theory]]&lt;br /&gt;
[[Category:Conjectures that have been proved]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{mathanalysis-stub}}&lt;/div&gt;</summary>
		<author><name>176.61.2.76</name></author>
	</entry>
</feed>