List of logarithmic identities

In mathematics, many logarithmic identities exist. The following is a compilation of the notable of these, many of which are used for computational purposes.
== Trivial identities ==
''Trivial'' mathematical identities are relatively simple (for an experienced mathematician), though not necessarily unimportant. The trivial logarithmic identities are as follows:
{| cellpadding=3
| if <math>\log_b(1) = 0 </math>|| then || <math> b^0 = 1</math>
|-
| if <math>\log_b(b) = 1 </math>|| then || <math> b^1 = b</math>
|}
=== Explanations ===
By definition, we know that:
<math display="block">\log_b(y) = x \iff b^x = y,</math>
where <math>b > 0</math> and <math>b \neq 1</math>.
<!-- PROPERTY 1 -->
Setting <math>x = 0</math>, we can see that:
<math display="block">
b^x = y
\iff b^{(0)} = y
\iff 1 = y
\iff y = 1
</math>
So, substituting these values into the formula, we see that:
<math display="block">
\log_b (y) = x
\iff \log_b (1) = 0,
</math>
which gets us the first property.
<!-- PROPERTY 2 -->
Setting <math>x = 1</math>, we can see that:
<math display="block">
b^x = y
\iff b^{(1)} = y
\iff b = y
\iff y = b
</math>
So, substituting these values into the formula, we see that:
<math display="block">
\log_b (y) = x
\iff \log_b (b) = 1,
</math>
which gets us the second property.
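Both properties can also be checked numerically. The following Python sketch uses only the standard <code>math</code> module, with a handful of arbitrarily chosen example bases:
<syntaxhighlight lang="python">
import math

# Check log_b(1) = 0 and log_b(b) = 1 for a few example bases b > 0, b != 1.
for b in (2, 10, 0.5, math.e):
    assert math.isclose(math.log(1, b), 0.0, abs_tol=1e-12)  # log_b(1) = 0
    assert math.isclose(math.log(b, b), 1.0)                 # log_b(b) = 1
</syntaxhighlight>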
== Cancelling exponentials ==
Logarithms and [[Exponential function|exponentials]] with the same base cancel each other. This is true because logarithms and exponentials are inverse operations{{snd}}much as multiplication and division are inverse operations, and addition and subtraction are inverse operations:<ref>{{Cite web|last=Weisstein|first=Eric W.|title=Logarithm|url=https://mathworld.wolfram.com/Logarithm.html|access-date=2020-08-29|website=mathworld.wolfram.com|language=en}}</ref>
<math display="block">b^{\log_b(x)} = x\text{ because }\mbox{antilog}_b(\log_b(x)) = x</math> | |||
<math display="block">\log_b(b^x) = x\text{ because }\log_b(\mbox{antilog}_b(x)) = x</math> | |||
:<math> | Both of the above are derived from the following two equations that define a logarithm: (note that in this explanation, the variables of <math>x</math> and <math>x</math> may not be referring to the same number) | ||
<math display="block">\log_b (y) = x \iff b^x = y</math> | |||
Looking at the equation <math> b^x = y </math>, and substituting the value for <math>x</math> of | |||
<math> \log_b (y) = x </math>, we get the following equation: | |||
<math display="block"> | |||
b^x = y | b^x = y | ||
\iff b^{\log _b(y)} = y | \iff b^{\log _b(y)} = y | ||
\iff b^{\log_b (y)} = y | \iff b^{\log_b (y)} = y, | ||
</math> | </math> | ||
which gets us the first equation. | |||
Another more rough way to think about it is that <math> b^{\text{something}} = y</math>, | Another more rough way to think about it is that <math> b^{\text{something}} = y</math>, | ||
and that that "<math>\text{something}</math>" is <math> \log_b (y) </math>. | and that that "<math>\text{something}</math>" is <math> \log_b (y) </math>. | ||
Looking at the equation <math> \log_b (y) = x</math>, and substituting the value for <math> y </math> of <math>b^x = y</math>, we get the following equation:
<math display="block">
\log_b (y) = x
\iff \log_b(b^x) = x,
</math>
which gets us the second equation.
Another, rougher way to think about it is that <math> \log_b (\text{something}) = x</math>, and that the "<math>\text{something}</math>" is <math> b^x</math>.
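As an illustration, the cancellation in both directions can be verified numerically; the base <math>b = 3</math> and argument <math>x = 7.5</math> below are arbitrary example values:
<syntaxhighlight lang="python">
import math

# Exponentiation and the logarithm with the same base undo one another.
b, x = 3.0, 7.5
assert math.isclose(b ** math.log(x, b), x)   # b^(log_b x) = x
assert math.isclose(math.log(b ** x, b), x)   # log_b(b^x) = x
</syntaxhighlight>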
== Using simpler operations ==
Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. These are often known as logarithmic properties, which are documented in the table below.<ref>{{Cite web|title=4.3 - Properties of Logarithms|url=https://people.richland.edu/james/lecture/m116/logs/properties.html|access-date=2020-08-29|website=people.richland.edu}}</ref> The first three operations below assume that {{math|1=''x'' = ''b''<sup>''c''</sup>}} and/or {{math|1=''y'' = ''b''<sup>''d''</sup>}}, so that {{math|1=log<sub>''b''</sub>(''x'') = ''c''}} and {{math|1=log<sub>''b''</sub>(''y'') = ''d''}}. Derivations also use the log definitions {{math|1=''x'' = ''b''<sup>log<sub>''b''</sub>(''x'')</sup>}} and {{math|1=''x'' = log<sub>''b''</sub>(''b''<sup>''x''</sup>)}}.
{| cellpadding=3
| <math>\log_b(xy)=\log_b(x)+\log_b(y)</math> || because || <math>b^c b^d=b^{c+d}</math>
|-
The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law:
<math display="block">xy = b^{\log_b(x)} b^{\log_b(y)} = b^{\log_b(x) + \log_b(y)} \Rightarrow \log_b(xy) = \log_b(b^{\log_b(x) + \log_b(y)}) = \log_b(x) + \log_b(y)</math>
The law for powers exploits another of the laws of indices:
<math display="block">x^y = (b^{\log_b(x)})^y = b^{y \log_b(x)} \Rightarrow \log_b(x^y) = y \log_b(x)</math>
The law relating to quotients then follows:
<math display="block">\log_b \bigg(\frac{x}{y}\bigg) = \log_b(x y^{-1}) = \log_b(x) + \log_b(y^{-1}) = \log_b(x) - \log_b(y)</math>
<math display="block">\log_b \bigg(\frac{1}{y}\bigg) = \log_b(y^{-1}) = - \log_b(y)</math>
Similarly, the root law is derived by rewriting the root as a reciprocal power:
<math display="block">\log_b(\sqrt[y]x) = \log_b(x^{\frac{1}{y}}) = \frac{1}{y}\log_b(x)</math>
=== Derivations of product, quotient, and power rules ===
These are the three main logarithm laws, rules, or principles,<ref>
{{Cite web
|url=https://courseware.cemc.uwaterloo.ca/8/assignments/210/6
|access-date=2022-04-23
}}
</ref> from which the other properties listed above can be proven. Each of these logarithm properties corresponds to its respective exponent law, and their derivations and proofs will hinge on those facts. There are multiple ways to derive or prove each logarithm law{{snd}}this is just one possible method.
==== Logarithm of a product ====
To state the ''logarithm of a product'' law formally:
<math display="block">\forall b \in \mathbb{R}_+, b \neq 1, \forall x, y \in \mathbb{R}_+, \log_b(xy) = \log_b(x) + \log_b(y)</math>
Derivation:
Rewriting these as exponentials, we see that
<math display="block">\begin{align}
m &= \log_b(x) \iff b^m = x, \\
n &= \log_b(y) \iff b^n = y.
\end{align}</math>
From here, we can relate <math>b^m</math> (i.e. <math>x</math>) and <math>b^n</math> (i.e. <math>y</math>) using exponent laws as
<math display="block">xy = (b^m)(b^n) = b^m \cdot b^n = b^{m + n}</math>
To recover the logarithms, we apply <math>\log_b</math> to both sides of the equality.
<math display="block">\log_b(xy) = \log_b(b^{m + n})</math>
The right side may be simplified using one of the logarithm properties from before: we know that <math>\log_b(b^{m + n}) = m + n</math>, giving
<math display="block">\log_b(xy) = m + n</math>
We now resubstitute the values for <math>m</math> and <math>n</math> into our equation, so our final expression is only in terms of <math>x</math>, <math>y</math>, and <math>b</math>.
<math display="block">\log_b(xy) = \log_b(x) + \log_b(y)</math>
This completes the derivation.
==== Logarithm of a quotient ====
To state the ''logarithm of a quotient'' law formally:
<math display="block">\forall b \in \mathbb{R}_+, b \neq 1, \forall x, y \in \mathbb{R}_+, \log_b \left( \frac{x}{y} \right) = \log_b(x) - \log_b(y)</math>
Derivation:
Rewriting these as exponentials, we see that:
<math display="block">\begin{align}
m &= \log_b(x) \iff b^m = x, \\
n &= \log_b(y) \iff b^n = y.
\end{align}</math>
From here, we can relate <math>b^m</math> (i.e. <math>x</math>) and <math>b^n</math> (i.e. <math>y</math>) using exponent laws as
<math display="block">\frac{x}{y} = \frac{(b^m)}{(b^n)} = \frac{b^m}{b^n} = b^{m - n}</math>
To recover the logarithms, we apply <math>\log_b</math> to both sides of the equality.
<math display="block">\log_b \left( \frac{x}{y} \right) = \log_b \left( b^{m - n} \right)</math>
The right side may be simplified using one of the logarithm properties from before: we know that <math>\log_b(b^{m - n}) = m - n</math>, giving
<math display="block">\log_b \left( \frac{x}{y} \right) = m - n</math>
We now resubstitute the values for <math>m</math> and <math>n</math> into our equation, so our final expression is only in terms of <math>x</math>, <math>y</math>, and <math>b</math>.
<math display="block">\log_b \left( \frac{x}{y} \right) = \log_b(x) - \log_b(y)</math>
This completes the derivation.
==== Logarithm of a power ====
To state the ''logarithm of a power'' law formally:
<math display="block">\forall b \in \mathbb{R}_+, b \neq 1, \forall x \in \mathbb{R}_+, \forall r \in \mathbb{R}, \log_b(x^r) = r\log_b(x)</math>
Derivation:
To more easily manipulate the expression, we rewrite it as an exponential. By definition, <math>m = \log_b(x) \iff b^m = x</math>, so we have
<math display="block">b^m = x</math>
Similar to the derivations above, we take advantage of another exponent law. In order to have <math>x^r</math> in our final expression, we raise both sides of the equality to the power of <math>r</math>:
<math display="block">
\begin{align}
(b^m)^r &= (x)^r \\
b^{mr} &= x^r
\end{align}</math>
where we used the exponent law <math>(b^m)^r = b^{mr}</math>.
To recover the logarithms, we apply <math>\log_b</math> to both sides of the equality.
<math display="block">\log_b(b^{mr}) = \log_b(x^r)</math> | |||
The left side of the equality can be simplified using a logarithm law, which states that <math>\log_b(b^{mr}) = mr</math>. | The left side of the equality can be simplified using a logarithm law, which states that <math>\log_b(b^{mr}) = mr</math>. | ||
<math display="block">mr = \log_b(x^r)</math> | |||
Substituting in the original value for <math>m</math>, rearranging, and simplifying gives | Substituting in the original value for <math>m</math>, rearranging, and simplifying gives | ||
<math display="block"> | |||
\begin{align} | \begin{align} | ||
\left( \log_b(x) \right)r &= \log_b(x^r) \\ | \left( \log_b(x) \right)r &= \log_b(x^r) \\ | ||
r\log_b(x) &= \log_b(x^r)
\end{align}</math>
This completes the derivation.
== Changing the base ==
To state the change of base logarithm formula formally:
<math display="block">\forall a, b \in \mathbb{R}_+, a, b \neq 1, \forall x \in \mathbb{R}_+, \log_b(x) = \frac{\log_a(x)}{\log_a(b)}</math> | <math display="block">\forall a, b \in \mathbb{R}_+, a, b \neq 1, \forall x \in \mathbb{R}_+, \log_b(x) = \frac{\log_a(x)}{\log_a(b)}</math> | ||
This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for [[Natural logarithm|ln]] and for [[Common logarithm|log<sub>10</sub>]], but not all calculators have buttons for the logarithm of an arbitrary base. | This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for [[Natural logarithm|ln]] and for [[Common logarithm|log<sub>10</sub>]], but not all calculators have buttons for the logarithm of an arbitrary base. | ||
=== Proof | === Proof and derivation === | ||
Let <math>a, b \in \mathbb{R}_+</math>, where <math>a, b \neq 1</math> Let <math>x \in \mathbb{R}_+</math>. Here, <math>a</math> and <math>b</math> are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1.{{Citation needed|date=July 2022}} The number <math>x</math> will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term <math>\log_b(x)</math> quite frequently, we define it as a new variable: Let <math>m = \log_b(x)</math>. | Let <math>a, b \in \mathbb{R}_+</math>, where <math>a, b \neq 1</math> Let <math>x \in \mathbb{R}_+</math>. Here, <math>a</math> and <math>b</math> are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1.{{Citation needed|date=July 2022}} The number <math>x</math> will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term <math>\log_b(x)</math> quite frequently, we define it as a new variable: Let <math>m = \log_b(x)</math>. | ||
| Line 268: | Line 281: | ||
Applying <math>\log_a</math> to both sides of the equality, | Applying <math>\log_a</math> to both sides of the equality, | ||
<math display="block">\log_a(b^m) = \log_a(x) </math> | <math display="block">\log_a(b^m) = \log_a(x) </math> | ||
Now, using the logarithm of a power property, which states that <math>\log_a(b^m) = m\log_a(b)</math>, | Now, using the logarithm of a power property, which states that <math>\log_a(b^m) = m\log_a(b)</math>, | ||
<math display="block">m\log_a(b) = \log_a(x)</math> | <math display="block">m\log_a(b) = \log_a(x)</math> | ||
Isolating <math>m</math>, we get the following: | Isolating <math>m</math>, we get the following: | ||
<math display="block">m = \frac{\log_a(x)}{\log_a(b)}</math> | <math display="block">m = \frac{\log_a(x)}{\log_a(b)}</math> | ||
Resubstituting <math>m = \log_b(x)</math> back into the equation, | Resubstituting <math>m = \log_b(x)</math> back into the equation, | ||
<math display="block">\log_b(x) = \frac{\log_a(x)}{\log_a(b)}</math> | <math display="block">\log_b(x) = \frac{\log_a(x)}{\log_a(b)}</math> | ||
<math display="block"> \log_b a = \frac 1 {\log_a b} </math>
<math display="block"> \log_{b^n} a = {\log_b a \over n} </math>
<math display="block"> \log_{b} a = \log_b e \cdot \log_e a = \log_b e \cdot \ln a </math>
<math display="block"> b^{\log_a d} = d^{\log_a b} </math>
<math display="block"> -\log_b a = \log_b \left({1 \over a}\right) = \log_{1/b} a</math>
where <math display="inline">\pi</math> is any [[permutation]] of the subscripts {{math|1, ..., ''n''}}. For example
<math display="block"> \log_b w\cdot \log_a x \cdot \log_d c \cdot \log_d z
= \log_d w \cdot \log_b x \cdot \log_a c \cdot \log_d z. </math>
=== Summation and subtraction ===
The following summation and subtraction rule is especially useful in [[probability theory]] when one is dealing with a sum of log-probabilities:
{|
| <math>\log_b(a+c) = \log_b a + \log_b \left(1+\frac{c}{a}\right)</math>
|-
| <math>\log_b(a-c) = \log_b a + \log_b \left(1-\frac{c}{a}\right)</math>
|}
More generally:
<math display="block">\log _b \sum_{i=0}^N a_i = \log_b a_0 + \log_b \left( 1+\sum_{i=1}^N \frac{a_i}{a_0} \right) = \log _b a_0 + \log_b \left( 1+\sum_{i=1}^N b^{\left( \log_b a_i - \log _b a_0 \right)} \right)</math>
=== Exponents ===
A useful identity involving exponents:
<math display="block"> x^{\frac{\log(\log(x))}{\log(x)}} = \log(x)</math>
or more universally:
<math display="block"> x^{\frac{\log(a)}{\log(x)}} = a</math>
=== Other or resulting identities ===
<math display="block"> \frac{1}{\frac{1}{\log_x(a)} + \frac{1}{\log_y(a)}} = \log_{xy}(a)</math>
<math display="block"> \frac{1}{\frac{1}{\log_x(a)}-\frac{1}{\log_y(a)}} = \log_{\frac{x}{y}}(a)</math>
== Inequalities ==
| year = 2007}}</ref>
<math display="block">\frac{x}{1+x} \leq \ln(1+x) | |||
\leq \frac{x(6+x)}{6+4x} | \leq \frac{x(6+x)}{6+4x} | ||
\leq x \mbox{ for all } {-1} < x</math> | \leq x \mbox{ for all } {-1} < x</math> | ||
<math display="block">\begin{align} | |||
\frac{2x}{2+x}&\leq3-\sqrt{\frac{27}{3+2x}}\leq\frac{x}{\sqrt{1+x+x^2/12}} \\[4pt] | \frac{2x}{2+x}&\leq3-\sqrt{\frac{27}{3+2x}}\leq\frac{x}{\sqrt{1+x+x^2/12}} \\[4pt] | ||
&\leq \ln(1+x)\leq \frac{x}{\sqrt{1+x}}\leq \frac{x}{2}\frac{2+x}{1+x} \\[4pt] | &\leq \ln(1+x)\leq \frac{x}{\sqrt{1+x}}\leq \frac{x}{2}\frac{2+x}{1+x} \\[4pt] | ||
| Line 359: | Line 376: | ||
== Tropical identities ==
The following identity relates the [[log semiring]] to the [[min-plus semiring]]:
<math display="block">\lim_{T \rightarrow 0} -T\log(e^{-\frac{s}{T}} + e^{-\frac{t}{T}}) = \mathrm{min}\{s,t\}</math>
== Calculus identities ==
=== Limits ===
<math display="block">\lim_{x\to 0^+}\log_a(x)=-\infty\quad \mbox{if } a > 1</math>
<math display="block">\lim_{x\to 0^+}\log_a(x)=\infty\quad \mbox{if } 0 < a < 1</math>
<math display="block">\lim_{x\to\infty}\log_a(x)=\infty\quad \mbox{if } a > 1</math>
<math display="block">\lim_{x\to\infty}\log_a(x)=-\infty\quad \mbox{if } 0 < a < 1</math>
<math display="block">\lim_{x\to \infty}x^b\log_a(x)=\infty\quad \mbox{if } b > 0</math>
<math display="block">\lim_{x\to\infty}\frac{\log_a(x)}{x^b}=0\quad \mbox{if } b > 0</math>
The last [[Limit of a function|limit]] is often summarized as "logarithms grow more slowly than any power or root of ''x''".
=== Derivatives of logarithmic functions ===
<math display="block">{d \over dx} \ln x = {1 \over x }, x > 0</math>
<math display="block">{d \over dx} \ln |x| = {1 \over x }, x \neq 0</math>
<math display="block">{d \over dx} \log_a x = {1 \over x \ln a}, x > 0, a > 0, \text{ and } a\neq 1</math>
=== Integral definition ===
<math display="block">\ln x = \int_1^x \frac {1}{t}\ dt </math>
To modify the limits of integration to run from <math>x</math> to <math>1</math>, we reverse the order of the limits, which changes the sign of the integral:
<math display="block">-\int_1^x \frac{1}{t} \, dt = \int_x^1 \frac{1}{t} \, dt</math>
Therefore:
<math display="block">\ln \frac{1}{x} = \int_x^1 \frac{1}{t} \, dt</math>
=== Riemann Sum ===
<math display="block">\begin{align}
\ln(n + 1) &= \lim_{k \to \infty} \sum_{i=1}^{k} \frac{1}{x_i} \Delta x
= \lim_{k \to \infty} \sum_{i=1}^{k} \frac{1}{1 + \frac{i-1}{k}n} \cdot \frac{n}{k} \\
&= \lim_{k \to \infty} \sum_{x=1}^{k \cdot n} \frac{1}{1 + \frac{x}{k}} \cdot \frac{1}{k}
= \lim_{k \to \infty} \sum_{x=1}^{k \cdot n} \frac{1}{k + x}
= \lim_{k \to \infty} \sum_{x=k+1}^{k \cdot n + k} \frac{1}{x}
= \lim_{k \to \infty} \sum_{x=k+1}^{k (n + 1)} \frac{1}{x}
\end{align}</math>
where <math>\textstyle \Delta x = \frac{n}{k}</math> and <math>x_{i}</math> is a sample point in each interval.
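Evaluating the final sum for increasing <math>k</math> shows the convergence; the following sketch uses <math>n = 3</math> (an arbitrary choice), so the partial sums approach <math>\ln 4</math>:
<syntaxhighlight lang="python">
import math

n = 3                                        # target: ln(n + 1) = ln 4
for k in (10, 100, 10_000):
    tail = sum(1.0 / x for x in range(k + 1, k * (n + 1) + 1))
    print(k, tail, math.log(n + 1))          # tail approaches ln 4 ≈ 1.386294
</syntaxhighlight>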
<math display="block">\ln(1 + x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \frac{x^6}{6} + \cdots.</math> | <math display="block">\ln(1 + x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \frac{x^6}{6} + \cdots.</math> | ||
Within this interval, for <math>x = 1</math>, the series is [[Conditional convergence|conditionally convergent]], and for all other values, it is [[Absolute convergence|absolutely convergent]]. For <math>x > 1</math> or <math>x \leq -1</math>, the series does not converge to <math>\ln(1 + x)</math>. In these cases, different representations<ref>To extend the utility of the Mercator series beyond its conventional bounds one can calculate <math display="inline"> \ln(1 + x)</math> for <math display="inline"> x = -\frac{n}{n+1}</math> and <math display="inline"> n \geq 0</math> and then [[# | Within this interval, for <math>x = 1</math>, the series is [[Conditional convergence|conditionally convergent]], and for all other values, it is [[Absolute convergence|absolutely convergent]]. For <math>x > 1</math> or <math>x \leq -1</math>, the series does not converge to <math>\ln(1 + x)</math>. In these cases, different representations<ref>To extend the utility of the Mercator series beyond its conventional bounds one can calculate <math display="inline"> \ln(1 + x)</math> for <math display="inline"> x = -\frac{n}{n+1}</math> and <math display="inline"> n \geq 0</math> and then [[#Using simpler operations|negate the result]], <math display="inline"> \ln\left(\frac{1}{n+1}\right)</math>, to derive <math display="inline"> \ln(n + 1)</math>. [[Natural logarithm of 2#Binary rising constant factorial|For example]], setting <math display="inline"> x = -\frac{1}{2}</math> yields <math display="inline"> \ln 2 = \sum_{n=1}^{\infty} \frac{1}{n 2^n}</math>.</ref> or methods must be used to evaluate the logarithm. | ||
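The contrast between rapid convergence inside the interval and the slow, conditional convergence at <math>x = 1</math> is easy to see from partial sums; the helper function <code>mercator</code> and the truncation length below are ad hoc choices for illustration:
<syntaxhighlight lang="python">
import math

def mercator(x, terms=10_000):
    """Partial sum of sum_{n>=1} (-1)^(n+1) x^n / n, for -1 < x <= 1."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

print(mercator(0.5), math.log(1.5))   # converges rapidly for |x| < 1
print(mercator(1.0), math.log(2.0))   # error still around 5e-5 after 10,000 terms
</syntaxhighlight>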
=== Harmonic number difference ===
It is not uncommon in advanced mathematics, particularly in [[analytic number theory]] and [[asymptotic analysis]], to encounter expressions involving differences or ratios of [[harmonic number]]s at scaled indices.<ref name="Flajolet2009AnalyticCombinatorics">{{cite book
| last1 = Flajolet
| first1 = Philippe
| page = 389
}} See page 117, and VI.8 definition of shifted harmonic numbers on page 389
</ref> The identity involving the limiting difference between harmonic numbers at scaled indices and its relationship to the logarithmic function provides an intriguing example of how discrete sequences can asymptotically relate to [[continuous function]]s. This identity is expressed as<ref name="Deveci2022DoubleSeries">{{cite arXiv
|last=Deveci
|first=Sinan
}} See Theorem 5.2. on pages 22 - 23</ref>
<math display="block">\lim_{{k \to \infty}} (H_{{k(n+1)}} - H_k) = \ln(n+1)</math>
which characterizes the behavior of harmonic numbers as they grow large. This approximation (which precisely equals <math>\ln(n+1)</math> in the limit) reflects how summation over increasing segments of the harmonic series exhibits [[Harmonic number#Calculation|integral properties]], giving insight into the interplay between discrete and continuous analysis. It also illustrates how understanding the behavior of sums and series at large scales can lead to insightful conclusions about their properties. Here <math>H_k</math> denotes the <math>k</math>-th harmonic number, defined as
<math display="block">H_k = \sum_{{j=1}}^k \frac{1}{j}</math>
The harmonic numbers are a fundamental sequence in number theory and analysis, known for their logarithmic growth. This result leverages the fact that the sum of the inverses of integers (i.e., harmonic numbers) can be closely approximated by the natural logarithm function, plus a [[Euler's constant|constant]], especially when extended over large intervals.<ref>{{cite book
</ref> The technique of approximating sums by integrals (specifically using the [[Integral test for convergence|integral test]] or by direct integral approximation) is fundamental in deriving such results. This specific identity can be a consequence of these approximations, considering:
<math display="block">\sum_{{j=k+1}}^{k(n+1)} \frac{1}{j} \approx \int_k^{k(n+1)} \frac{dx}{x}</math>
==== Harmonic limit derivation ====
The limit explores the growth of the harmonic numbers when indices are multiplied by a scaling factor and then differenced. It specifically captures the sum from <math>k+1</math> to <math>k(n+1)</math>:
<math display="block">H_{{k(n+1)}} - H_k = \sum_{{j=k+1}}^{k(n+1)} \frac{1}{j}</math>
This can be estimated using the integral test for convergence, or more directly by comparing it to the [[#Riemann Sum|integral]] of <math>1/x</math> from <math>k</math> to <math>k(n+1)</math>:
<math display="block">\lim_{{k \to \infty}} \sum_{{j=k+1}}^{k(n+1)} \frac{1}{j} = \int_k^{k(n+1)} \frac{dx}{x} = \ln(k(n+1)) - \ln(k) = \ln\left(\frac{k(n+1)}{k}\right) = \ln(n+1)</math>
As the window's lower bound begins at <math>k+1</math> and the upper bound extends to <math>k(n+1)</math>, both of which tend toward infinity as <math>k \to \infty</math>, the summation window encompasses an increasingly vast portion of the smallest possible terms of the harmonic series (those with astronomically large denominators), creating a discrete sum that stretches towards infinity. This mirrors how continuous integrals accumulate value across an infinitesimally fine partitioning of the domain. In the limit, the interval is effectively from <math>1</math> to <math>n+1</math>, where the onset <math>k</math> implies this minimally discrete region.
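A direct numerical check of the limiting difference, with an ad hoc helper <code>H</code> for the harmonic numbers and <math>n = 4</math> chosen arbitrarily:
<syntaxhighlight lang="python">
import math

def H(n):
    """n-th harmonic number (simple ad hoc implementation)."""
    return sum(1.0 / j for j in range(1, n + 1))

n = 4
for k in (10, 100, 5_000):
    print(k, H(k * (n + 1)) - H(k), math.log(n + 1))  # the difference tends to ln 5
</syntaxhighlight>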
=== Double series formula ===
The [[#Harmonic number difference|harmonic number difference]] formula for <math>\ln(m)</math> is an extension<ref name="Deveci2022DoubleSeries"/> of the classic, [[Harmonic series (mathematics)#Alternating harmonic series|alternating identity]] of <math>\ln(2)</math>:
<math display="block">\ln(2) = \lim_{k \to \infty} \sum_{n=1}^{k} \left( \frac{1}{2n-1} - \frac{1}{2n} \right)</math>
which can be generalized as the double series over the [[Modular_arithmetic#Integers_modulo_m|residues]] of <math>m</math>:
<math display="block">\ln(m) = \sum_{x \in \langle m \rangle \cap \mathbb{N}} \sum_{r \in \mathbb{Z}_m \cap \mathbb{N}} \left( \frac{1}{x-r} - \frac{1}{x} \right) = \sum_{x \in \langle m \rangle \cap \mathbb{N}} \sum_{r \in \mathbb{Z}_m \cap \mathbb{N}} \frac{r}{x(x-r)}
</math>
where <math>\langle m \rangle</math> is the [[Principal ideal|principal ideal]] generated by <math>m</math>. Subtracting <math>\textstyle \frac{1}{x}</math> from each term <math>\textstyle \frac{1}{x-r}</math> (i.e., balancing each term with the modulus) reduces the magnitude of each term's contribution, ensuring [[Alternating series test|convergence]] by controlling the series' tendency toward divergence as <math>m</math> increases. For example:
<math display="block">\ln(4) = \lim_{k \to \infty} \sum_{n=1}^{k} \left( \frac{1}{4n-3} - \frac{1}{4n} \right) + \left( \frac{1}{4n-2} - \frac{1}{4n} \right) + \left( \frac{1}{4n-1} - \frac{1}{4n} \right)</math>
This method leverages the fine differences between closely related terms to stabilize the series. The sum over all residues <math>r \in \N</math> ensures that adjustments are uniformly applied across all possible offsets within each block of <math>m</math> terms. This uniform distribution of the "correction" across different intervals defined by <math>x-r</math> functions similarly to [[Telescoping series|telescoping]] over a very large sequence. It helps to flatten out the discrepancies that might otherwise lead to divergent behavior in a straightforward harmonic series. Note that the structure of the [[Addition#summand|summands]] of this formula matches those of the [[Harmonic number#Alternative, asymptotic formulation|interpolated harmonic number]] <math>H_x</math> when both the [[Domain of a function|domain]] and [[Range of a function|range]] are negated (i.e., <math>-H_{-x}</math>). However, the interpretation and roles of the variables differ.
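For example, the grouped form of the <math>\ln(4)</math> series above can be summed directly; the truncation length <code>K</code> is an arbitrary choice, and the residue <math>r = 0</math> is omitted since its term vanishes:
<syntaxhighlight lang="python">
import math

m, K = 4, 200_000
total = 0.0
for n in range(1, K + 1):
    x = m * n                    # x runs over the positive multiples of m
    total += sum(1.0 / (x - r) - 1.0 / x for r in range(1, m))
print(total, math.log(m))        # the partial sums approach ln 4
</syntaxhighlight>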
==== Deveci's Proof ====
A fundamental feature of the proof is the accumulation of the [[Subtraction#Notation and terminology|subtrahends]] <math display="inline"> \frac{1}{x}</math> into a unit fraction, that is, <math display="inline"> \frac{m}{x} = \frac{1}{n}</math> for <math>m \mid x</math>, thus <math>m = \omega + 1</math> rather than <math>m = |\mathbb{Z}_{m} \cap \mathbb{N}|</math>, where the [[Maximum and minimum|extrema]] of <math>\mathbb{Z}_{m} \cap \mathbb{N}</math> are <math>[0, \omega]</math> if <math>\mathbb{N} = \mathbb{N}_{0}</math> and <math>[1, \omega]</math> [[Natural number#Notation|otherwise]], with the minimum of <math>0</math> being implicit in the latter case due to the structural requirements of the proof. Since the [[Cardinality#Comparing sets|cardinality]] of <math>\mathbb{Z}_{m} \cap \mathbb{N}</math> depends on the selection of one of two possible minima, the integral <math>\textstyle \int \frac{1}{t} dt</math>, as a set-theoretic procedure, is a function of the maximum <math>\omega</math> (which remains consistent across both interpretations) plus <math>1</math>, not the cardinality (which is ambiguous<ref>{{Cite arXiv |eprint=1102.0418 |first=Peter |last=Harremoës |title=Is Zero a Natural Number? |year=2011|class=math.HO }} A synopsis on the nature of 0 which frames the choice of minimum as the dichotomy between ordinals and cardinals.</ref><ref>{{Cite journal |last=Barton |first=N. |year=2020 |title=Absence perception and the philosophy of zero |journal=Synthese |volume=197 |issue=9 |pages=3823–3850 |doi=10.1007/s11229-019-02220-x |pmc=7437648 |pmid=32848285}} See section 3.1</ref> due to varying definitions of the minimum). Whereas the harmonic number difference computes the integral in a global sliding window, the double series, in parallel, computes the sum in a local sliding window{{snd}}a shifting <math>m</math>-tuple{{snd}}over the harmonic series, advancing the window by <math>m</math> positions to select the next <math>m</math>-tuple, and offsetting each element of each tuple by <math display="inline"> \frac{1}{m}</math> relative to the window's absolute position. The sum <math display="inline">\sum_{n=1}^{k} \sum \frac{1}{x - r}</math> corresponds to <math>H_{km}</math> which scales <math>H_{m}</math> without bound. The sum <math display="inline"> \sum_{n=1}^{k} -\frac{1}{n}</math> corresponds to the prefix <math>H_{k}</math> trimmed from the series to establish the window's moving lower bound <math>k+1</math>, and <math>\ln(m)</math> is the limit of the sliding window (the scaled, truncated<ref>The <math>k+1</math> shift is characteristic of the [[#Riemann Sum|right Riemann sum]] employed to prevent the integral from degenerating into the harmonic series, thereby averting divergence.
Here, <math display="inline"> -\frac{1}{n}</math> functions analogously, serving to regulate the series. The successor operation <math>m = \omega + 1</math> signals the implicit inclusion of the modulus <math>m</math> (the region omitted from <math>\mathbb{N}_{1}</math>). The importance of this, from an axiomatic perspective, becomes evident when the residues of <math>m</math> are formulated as <math>e^{\ln(\omega + 1)}</math>, where <math>\omega + 1</math> is bootstrapped by <math>\omega = 0</math> to produce the residues of modulus <math>m = \omega = \omega_{0} + 1 = 1</math>. Consequently, <math>\omega</math> represents a limiting value in this context.</ref> series): | ||
<math display="block">\begin{align} | <math display="block">\begin{align} | ||
| Line 538: | Line 553: | ||
<math display="block">\lim_{k \to \infty} H_{km} - H_k = \sum_{x \in \langle m \rangle \cap \mathbb{N}} \sum_{r \in \mathbb{Z}_m \cap \mathbb{N}} \left( \frac{1}{x-r} - \frac{1}{x} \right) = \ln(\omega + 1) = \ln(m)</math> | <math display="block">\lim_{k \to \infty} H_{km} - H_k = \sum_{x \in \langle m \rangle \cap \mathbb{N}} \sum_{r \in \mathbb{Z}_m \cap \mathbb{N}} \left( \frac{1}{x-r} - \frac{1}{x} \right) = \ln(\omega + 1) = \ln(m)</math> | ||
=== Integrals of logarithmic functions ===
<math display="block">\int \ln x \, dx = x \ln x - x + C = x(\ln x - 1) + C</math>
<math display="block">\int \log_a x \, dx = x \log_a x - \frac{x}{\ln a} + C = \frac{x (\ln x - 1)}{\ln a} + C</math>
To remember higher [[integral]]s, it is convenient to define:
<math display="block">x^{\left [n \right]} = x^{n}(\log(x) - H_n)</math>
where <math>H_n</math> is the ''n''<sup>th</sup> [[harmonic number]]:
<math display="block">x^{\left [ 0 \right ]} = \log x</math>
<math display="block">x^{\left [ 1 \right ]} = x \log(x) - x</math>
<math display="block">x^{\left [ 2 \right ]} = x^2 \log(x) - \begin{matrix} \frac{3}{2} \end{matrix}x^2</math>
<math display="block">x^{\left [ 3 \right ]} = x^3 \log(x) - \begin{matrix} \frac{11}{6} \end{matrix}x^3</math>
Then:
<math display="block">\frac{d}{dx}\, x^{\left[ n \right]} = nx^{\left[ n-1 \right]}</math>
<math display="block">\int x^{\left[ n \right]}\,dx = \frac{x^{\left [ n+1 \right ]}}{n+1} + C</math>
== Approximating large numbers ==
== Complex logarithm identities ==
When ''k'' is any integer:
<math display="block">\log(z) = \ln(|z|) + i \arg(z)</math> | |||
<math display="block">\log(z) = \operatorname{Log}(z) + 2 \pi i k</math> | |||
<math display="block">e^{\log(z)} = z</math> | |||
=== Constants === | === Constants === | ||
| Line 590: | Line 608: | ||
Principal value forms: | Principal value forms: | ||
<math display="block">\operatorname{Log}(1) = 0</math> | |||
<math display="block">\operatorname{Log}(e) = 1</math> | |||
Multiple value forms, for any ''k'' an integer: | Multiple value forms, for any ''k'' an integer: | ||
<math display="block">\log(1) = 0 + 2 \pi i k</math> | |||
<math display="block">\log(e) = 1 + 2 \pi i k</math> | |||
=== Summation ===
Principal value forms:<ref name=":0">{{Cite book|last=Abramowitz|first=Milton|url=https://www.worldcat.org/oclc/429082|title=Handbook of mathematical functions, with formulas, graphs, and mathematical tables|date=1965|publisher=Dover Publications|others=Irene A. Stegun|isbn=0-486-61272-4|location=New York|oclc=429082}}</ref>
<math display="block">\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2) \pmod {2 \pi i}</math>
<math display="block">\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2)\quad
(-\pi <\operatorname{Arg}(z_1)+\operatorname{Arg}(z_2)\leq \pi;
\text{ e.g., } \operatorname{Re}z_1\geq 0 \text{ and } \operatorname{Re}z_2 > 0)</math>
<math display="block">\operatorname{Log}(z_1) - \operatorname{Log}(z_2) = \operatorname{Log}(z_1 / z_2) \pmod {2 \pi i}</math>
<math display="block">\operatorname{Log}(z_1) - \operatorname{Log}(z_2) = \operatorname{Log}(z_1 / z_2)
\quad
(-\pi <\operatorname{Arg}(z_1)-\operatorname{Arg}(z_2)\leq \pi;
\text{ e.g., } \operatorname{Re}z_1\geq 0 \text{ and } \operatorname{Re}z_2 > 0)</math>
Multiple value forms:
<math display="block">\log(z_1) + \log(z_2) = \log(z_1 z_2)</math>
<math display="block">\log(z_1) - \log(z_2) = \log(z_1 / z_2)</math>
=== Powers ===
Principal value form:
<math display="block">{z_1}^{z_2} = e^{z_2 \operatorname{Log}(z_1)} </math>
<math display="block">\operatorname{Log}{\left({z_1}^{z_2}\right)} = z_2 \operatorname{Log}(z_1) \pmod {2 \pi i}</math>
Multiple value forms:
<math display="block">{z_1}^{z_2} = e^{z_2 \log(z_1)}</math>
Where {{math|''k''<sub>1</sub>}}, {{math|''k''<sub>2</sub>}} are any integers:
<math display="block">\log{\left({z_1}^{z_2}\right)} = z_2 \log(z_1) + 2 \pi i k_2</math>
<math display="block">\log{\left({z_1}^{z_2}\right)} = z_2 \operatorname{Log}(z_1) + z_2 2 \pi i k_1 + 2 \pi i k_2</math>
== Asymptotic identities ==
=== Pronic numbers ===
As a consequence of the [[#Harmonic number difference|harmonic number difference]], the natural logarithm is [[Asymptotic analysis|asymptotically]] approximated by a finite [[Series (mathematics)|series]] difference,<ref name="Deveci2022DoubleSeries"/> representing a truncation of the [[#Riemann Sum|integral]] at <math>k = n</math>:
<math display="block">H_{{2T[n]}} - H_n \sim \ln(n+1)</math>
where <math>T[n]</math> is the {{mvar|n}}th [[triangular number]], and <math>2T[n]</math> is the sum of the [[Pronic number#As figurate numbers|first {{mvar|n}} even integers]]. Since the {{mvar|n}}th [[pronic number]] is asymptotically equivalent to the {{mvar|n}}th [[Square number|perfect square]], it follows that:
<math display="block">H_{{n^2}} - H_n \sim \ln(n+1)</math>
=== Prime number theorem ===
The [[prime number theorem]] provides the following asymptotic equivalence:
<math display="block">\frac{n}{\pi(n)} \sim \ln n</math>
where <math>\pi(n)</math> is the [[prime counting function]]. This relationship is equivalent to:<ref name="Deveci2022DoubleSeries"/>{{rp|2}}
<math display="block">\frac{n}{H(1, 2, \ldots, x_n)} \sim \ln n</math>
where <math>H(x_1, x_2, \ldots, x_n)</math> is the [[harmonic mean]] of <math>x_1, x_2, \ldots, x_n</math>. This is derived from the fact that the difference between the <math>n</math>th harmonic number and <math>\ln n</math> asymptotically approaches a [[Euler's constant|small constant]], resulting in <math>H_{n^2} - H_n \sim H_n</math>. This behavior can also be derived from the [[#Logarithm of a power|properties of logarithms]]: <math>\ln n</math> is half of <math>\ln n^2</math>, and this "first half" is the natural log of the root of <math>n^2</math>, which corresponds roughly to the first <math>\textstyle \frac{1}{n}</math>th of the sum <math>H_{n^2}</math>, or <math>H_n</math>. The asymptotic equivalence of the first <math>\textstyle \frac{1}{n}</math>th of <math>H_{n^2}</math> to the latter <math>\textstyle \frac{n-1}{n}</math>th of the series is expressed as follows:
<math display="block">\frac{H_n}{H_{n^2}} \sim \frac{\ln \sqrt{n}}{\ln n} = \frac{1}{2}</math> | |||
which generalizes to: | which generalizes to: | ||
<math display="block">\frac{H_n}{H_{n^k}} \sim \frac{\ln \sqrt[k]{n}}{\ln n} = \frac{1}{k}</math> | |||
<math display="block">k H_n \sim H_{n^k}</math> | |||
and: | and: | ||
<math display="block">k H_n - H_n \sim (k - 1) \ln(n+1)</math> | |||
<math display="block">H_{{n^k}} - H_n \sim (k - 1) \ln(n+1)</math> | |||
<math display="block">k H_n - H_{{n^k}} \sim (k - 1) \gamma</math> | |||
for fixed <math>k</math>. The correspondence sets <math>H_n</math> as a [[ | for fixed <math>k</math>. The correspondence sets <math>H_n</math> as a [[Unit of measurement|unit magnitude]] that partitions <math>H_{n^k}</math> across powers, where each interval <math>\textstyle \frac{1}{n}</math> to <math>\textstyle \frac{1}{n^2}</math>, <math>\textstyle \frac{1}{n^2}</math> to <math>\textstyle \frac{1}{n^3}</math>, etc., corresponds to one <math>H_n</math> unit, illustrating that <math>H_{n^k}</math> forms a [[Harmonic series (mathematics)#Definition and divergence|divergent series]] as <math>k \to \infty</math>. | ||
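These asymptotic statements can be observed numerically for moderate <math>n</math>; the helper <code>H</code> below is an ad hoc harmonic-number routine, and the quoted value of the Euler–Mascheroni constant is used only for comparison:
<syntaxhighlight lang="python">
import math

def H(n):
    return sum(1.0 / j for j in range(1, n + 1))

gamma = 0.5772156649015329           # Euler–Mascheroni constant, for comparison
k = 2
for n in (10, 100, 1_000):
    print(n, H(n) / H(n ** k),       # tends to 1/k = 0.5
          k * H(n) - H(n ** k))      # tends to (k - 1) * gamma ≈ 0.5772
</syntaxhighlight>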
=== Real Arguments ===
These approximations extend to the real-valued domain through the [[Harmonic number#Alternative, asymptotic formulation|interpolated harmonic number]]. For example, where <math>x \in \mathbb{R}</math>:
<math display="block">H_{{x^2}} - H_x \sim \ln x</math>
=== Stirling numbers ===
The natural logarithm is asymptotically related to the harmonic numbers by the [[Stirling number]]s<ref>{{cite arXiv
| title = New series identities with Cauchy, Stirling, and harmonic numbers, and Laguerre polynomials
| author = Khristo N. Boyadzhiev
| pages = 2, 6
| class = math.NT
}}</ref> and the [[Gregory coefficients]].<ref>{{cite book
| last = Comtet
| first = Louis
| publisher = Kluwer
| year = 1974
}}</ref> By representing <math>H_n</math> in terms of [[Stirling numbers of the first kind]], the harmonic number difference is alternatively expressed as follows, for fixed <math>k</math>:
<math display="block">\frac{s(n^k+1, 2)}{(n^k)!} - \frac{s(n+1, 2)}{n!} \sim (k-1) \ln(n+1)</math>
== See also ==
== References ==
{{Reflist}}
== External links ==
* [http://www.mathwords.com/l/logarithm.htm Logarithm] in Mathwords
[[Category:Logarithms]]
[[Category:Mathematical identities]]
[[Category:Articles containing proofs]]
Revision as of 04:35, 9 June 2025
Template:Short description In mathematics, many logarithmic identities exist. The following is a compilation of the notable of these, many of which are used for computational purposes.
Trivial identities
Trivial mathematical identities are relatively simple (for an experienced mathematician), though not necessarily unimportant. The trivial logarithmic identities are as follows:
| if | then | |
| if | then |
Explanations
By definition, we know that:
where and .
Setting , we can see that:
So, substituting these values into the formula, we see that:
which gets us the first property.
Setting , we can see that:
So, substituting these values into the formula, we see that:
which gets us the second property.
Cancelling exponentials
Logarithms and exponentials with the same base cancel each other. This is true because logarithms and exponentials are inverse operationsTemplate:Sndmuch like the same way multiplication and division are inverse operations, and addition and subtraction are inverse operations:[1]
Both of the above are derived from the following two equations that define a logarithm: (note that in this explanation, the variables of and may not be referring to the same number)
Looking at the equation , and substituting the value for of , we get the following equation:
which gets us the first equation.
Another more rough way to think about it is that , and that that "" is .
Looking at the equation , and substituting the value for of , we get the following equation:
which gets us the second equation.
Another more rough way to think about it is that , and that that something "" is .
Using simpler operations
Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. These are often known as logarithmic properties, which are documented in the table below.[2] The first three operations below assume that Template:Math and/or Template:Math, so that Template:Math and Template:Math. Derivations also use the log definitions Template:Math and Template:Math.
| because | ||
| because | ||
| because | ||
| because | ||
| because | ||
| because |
Where , , and are positive real numbers and , and and are real numbers.
The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law:
The law for powers exploits another of the laws of indices:
The law relating to quotients then follows:
Similarly, the root law is derived by rewriting the root as a reciprocal power:
Derivations of product, quotient, and power rules
These are the three main logarithm laws, rules, or principles,[3] from which the other properties listed above can be proven. Each of these logarithm properties corresponds to its respective exponent law, and their derivations and proofs hinge on those facts. There are multiple ways to derive or prove each logarithm law; this is just one possible method.
Logarithm of a product
To state the logarithm of a product law formally:
<math display="block">\forall b \in \mathbb{R}_+,\, b \neq 1,\ \forall x, y \in \mathbb{R}_+:\quad \log_b(xy) = \log_b(x) + \log_b(y).</math>
Derivation:
Let <math>b \in \mathbb{R}_+</math>, where <math>b \neq 1</math>, and let <math>x, y \in \mathbb{R}_+</math>. We want to relate the expressions <math>\log_b(x)</math> and <math>\log_b(y)</math>. This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to <math>\log_b(x)</math> and <math>\log_b(y)</math> quite often, we will give them some variable names to make working with them easier: Let <math>m = \log_b(x)</math>, and let <math>n = \log_b(y)</math>.
Rewriting these as exponentials, we see that
<math display="block">m = \log_b(x) \iff b^m = x, \qquad n = \log_b(y) \iff b^n = y.</math>
From here, we can relate <math>b^m</math> (i.e. <math>x</math>) and <math>b^n</math> (i.e. <math>y</math>) using exponent laws as
<math display="block">xy = (b^m)(b^n) = b^{m+n}.</math>
To recover the logarithms, we apply <math>\log_b</math> to both sides of the equality.
<math display="block">\log_b(xy) = \log_b\left(b^{m+n}\right)</math>
The right side may be simplified using one of the logarithm properties from before: we know that <math>\log_b\left(b^{m+n}\right) = m + n</math>, giving
<math display="block">\log_b(xy) = m + n.</math>
We now resubstitute the values for <math>m</math> and <math>n</math> into our equation, so our final expression is only in terms of <math>x</math>, <math>y</math>, and <math>b</math>:
<math display="block">\log_b(xy) = \log_b(x) + \log_b(y).</math>
This completes the derivation.
Logarithm of a quotient
To state the logarithm of a quotient law formally:
<math display="block">\forall b \in \mathbb{R}_+,\, b \neq 1,\ \forall x, y \in \mathbb{R}_+:\quad \log_b\left(\frac{x}{y}\right) = \log_b(x) - \log_b(y).</math>
Derivation:
Let <math>b \in \mathbb{R}_+</math>, where <math>b \neq 1</math>, and let <math>x, y \in \mathbb{R}_+</math>.
We want to relate the expressions <math>\log_b(x)</math> and <math>\log_b(y)</math>. This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to <math>\log_b(x)</math> and <math>\log_b(y)</math> quite often, we will give them some variable names to make working with them easier: Let <math>m = \log_b(x)</math>, and let <math>n = \log_b(y)</math>.
Rewriting these as exponentials, we see that:
<math display="block">m = \log_b(x) \iff b^m = x, \qquad n = \log_b(y) \iff b^n = y.</math>
From here, we can relate <math>b^m</math> (i.e. <math>x</math>) and <math>b^n</math> (i.e. <math>y</math>) using exponent laws as
<math display="block">\frac{x}{y} = \frac{b^m}{b^n} = b^{m-n}.</math>
To recover the logarithms, we apply <math>\log_b</math> to both sides of the equality.
<math display="block">\log_b\left(\frac{x}{y}\right) = \log_b\left(b^{m-n}\right)</math>
The right side may be simplified using one of the logarithm properties from before: we know that <math>\log_b\left(b^{m-n}\right) = m - n</math>, giving
<math display="block">\log_b\left(\frac{x}{y}\right) = m - n.</math>
We now resubstitute the values for <math>m</math> and <math>n</math> into our equation, so our final expression is only in terms of <math>x</math>, <math>y</math>, and <math>b</math>:
<math display="block">\log_b\left(\frac{x}{y}\right) = \log_b(x) - \log_b(y).</math>
This completes the derivation.
Logarithm of a power
To state the logarithm of a power law formally:
<math display="block">\forall b \in \mathbb{R}_+,\, b \neq 1,\ \forall x \in \mathbb{R}_+,\ \forall r \in \mathbb{R}:\quad \log_b(x^r) = r\log_b(x).</math>
Derivation:
Let <math>b \in \mathbb{R}_+</math>, where <math>b \neq 1</math>, let <math>x \in \mathbb{R}_+</math>, and let <math>r \in \mathbb{R}</math>. For this derivation, we want to simplify the expression <math>\log_b(x^r)</math>. To do this, we begin with the simpler expression <math>\log_b(x)</math>. Since we will be using <math>\log_b(x)</math> often, we will define it as a new variable: Let <math>m = \log_b(x)</math>.
To more easily manipulate the expression, we rewrite it as an exponential. By definition, <math>m = \log_b(x) \iff b^m = x</math>, so we have
<math display="block">b^m = x.</math>
Similar to the derivations above, we take advantage of another exponent law. In order to have <math>x^r</math> in our final expression, we raise both sides of the equality to the power of <math>r</math>:
<math display="block">(b^m)^r = x^r \iff b^{mr} = x^r,</math>
where we used the exponent law <math>(b^m)^r = b^{mr}</math>.
To recover the logarithms, we apply <math>\log_b</math> to both sides of the equality.
<math display="block">\log_b\left(b^{mr}\right) = \log_b(x^r)</math>
The left side of the equality can be simplified using a logarithm law, which states that <math>\log_b\left(b^{mr}\right) = mr</math>.
<math display="block">mr = \log_b(x^r)</math>
Substituting in the original value for <math>m</math>, rearranging, and simplifying gives
<math display="block">r\log_b(x) = \log_b(x^r).</math>
This completes the derivation.
Changing the base
To state the change of base logarithm formula formally:
<math display="block">\forall b, c \in \mathbb{R}_+,\, b, c \neq 1,\ \forall x \in \mathbb{R}_+:\quad \log_b(x) = \frac{\log_c(x)}{\log_c(b)}.</math>
This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base.
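In practice, this is how an arbitrary-base logarithm is evaluated with only the natural or common log. A minimal Python illustration (standard library only; the values 100 and 7 are arbitrary):
<syntaxhighlight lang="python">
import math

x, b = 100.0, 7.0

via_ln    = math.log(x) / math.log(b)      # using the "ln" button
via_log10 = math.log10(x) / math.log10(b)  # using the "log10" button
direct    = math.log(x, b)                 # Python's two-argument form

assert math.isclose(via_ln, via_log10) and math.isclose(via_ln, direct)
print(direct)  # log base 7 of 100, about 2.3666
</syntaxhighlight>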
Proof and derivation
Let <math>b, c \in \mathbb{R}_+</math>, where <math>b \neq 1</math> and <math>c \neq 1</math>, and let <math>x \in \mathbb{R}_+</math>. Here, <math>b</math> and <math>c</math> are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1. The number <math>x</math> will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term <math>\log_b(x)</math> quite frequently, we define it as a new variable: Let <math>y = \log_b(x)</math>.
To more easily manipulate the expression, it can be rewritten as an exponential.
<math display="block">b^y = x</math>
Applying <math>\log_c</math> to both sides of the equality,
<math display="block">\log_c\left(b^y\right) = \log_c(x)</math>
Now, using the logarithm of a power property, which states that <math>\log_c\left(b^y\right) = y\log_c(b)</math>,
<math display="block">y\log_c(b) = \log_c(x)</math>
Isolating <math>y</math>, we get the following:
<math display="block">y = \frac{\log_c(x)}{\log_c(b)}</math>
Resubstituting <math>y = \log_b(x)</math> back into the equation,
<math display="block">\log_b(x) = \frac{\log_c(x)}{\log_c(b)}</math>
This completes the proof that <math>\log_b(x) = \frac{\log_c(x)}{\log_c(b)}</math>.
This formula has several consequences:
<math display="block">\log_b(a) = \frac{1}{\log_a(b)}</math>
<math display="block">\log_{b^n}(a) = \frac{\log_b(a)}{n}</math>
<math display="block">b^{\log_a(d)} = d^{\log_a(b)}</math>
<math display="block">-\log_b(a) = \log_b\left(\frac{1}{a}\right) = \log_{1/b}(a)</math>
<math display="block">\log_{b_1}(a_1)\cdots\log_{b_n}(a_n) = \log_{b_{\pi(1)}}(a_1)\cdots\log_{b_{\pi(n)}}(a_n),</math>
where <math>\pi</math> is any permutation of the subscripts <math>1, \ldots, n</math>. For example
<math display="block">\log_b(w)\cdot\log_a(b)\cdot\log_d(c)\cdot\log_w(d) = \log_d(w)\cdot\log_b(b)\cdot\log_a(c)\cdot\log_w(d).</math>
Summation and subtraction
The following summation and subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities:
| <math>\log_b(a+c) = \log_b(a) + \log_b\left(1 + \frac{c}{a}\right)</math> | because | <math>a + c = a\left(1 + \frac{c}{a}\right)</math> |
| <math>\log_b(a-c) = \log_b(a) + \log_b\left(1 - \frac{c}{a}\right)</math> | because | <math>a - c = a\left(1 - \frac{c}{a}\right)</math> |
Note that the subtraction identity is not defined if <math>a = c</math>, since the logarithm of zero is not defined. Also note that, when programming, <math>a</math> and <math>c</math> may have to be switched on the right hand side of the equations if <math>c \gg a</math> to avoid losing the "1 +" due to rounding errors. Many programming languages have a specific log1p(x) function that calculates <math>\ln(1+x)</math> without underflow (when <math>x</math> is small).
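A minimal Python sketch of the log-space addition described above (standard library only; the helper name log_add and the sample values are illustrative, not part of the article):
<syntaxhighlight lang="python">
import math

def log_add(log_a, log_c):
    """Return log(a + c) given log(a) and log(c), staying in log space.

    The arguments are swapped so the larger term is factored out, and
    math.log1p keeps the "1 +" from being lost to rounding when the ratio is tiny.
    """
    if log_c > log_a:
        log_a, log_c = log_c, log_a
    return log_a + math.log1p(math.exp(log_c - log_a))

# Adding two log-probabilities of very different magnitude:
la, lc = math.log(0.3), math.log(1e-12)
assert math.isclose(log_add(la, lc), math.log(0.3 + 1e-12))
</syntaxhighlight>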
More generally:
<math display="block">\log_b\sum_{i=0}^{N} a_i = \log_b(a_0) + \log_b\left(1 + \sum_{i=1}^{N}\frac{a_i}{a_0}\right) = \log_b(a_0) + \log_b\left(1 + \sum_{i=1}^{N} b^{\log_b(a_i) - \log_b(a_0)}\right).</math>
Exponents
A useful identity involving exponents:
<math display="block">x^{\frac{\log(\log(x))}{\log(x)}} = \log(x),</math>
or more universally:
<math display="block">x^{\frac{\log(a)}{\log(x)}} = a.</math>
Other or resulting identities
Inequalities
Based on the bounds:[4]
<math display="block">\frac{x}{1+x} \leq \ln(1+x) \leq x \qquad \text{for } x > -1.</math>
Both bounds are accurate around <math>x = 0</math>, but not for large <math>x</math>.
Tropical identities
The following identity relates log semiring to the min-plus semiring.
Calculus identities
Limits
<math display="block">\lim_{x\to 0^+} x^b\log_a(x) = 0, \qquad \lim_{x\to\infty}\frac{\log_a(x)}{x^b} = 0 \qquad \text{for } a > 1,\ b > 0.</math>
The last limit is often summarized as "logarithms grow more slowly than any power or root of x".
Derivatives of logarithmic functions
<math display="block">\frac{d}{dx}\ln(x) = \frac{1}{x},\ x > 0, \qquad \frac{d}{dx}\log_b(x) = \frac{1}{x\ln(b)},\ x > 0,\ b > 0,\ b \neq 1.</math>
Integral definition
<math display="block">\ln(x) = \int_1^x \frac{1}{t}\,dt</math>
To modify the limits of integration to run from <math>x</math> to <math>1</math>, we change the order of integration, which changes the sign of the integral:
<math display="block">\ln(x) = -\int_x^1 \frac{1}{t}\,dt</math>
Therefore:
<math display="block">\ln\left(\frac{1}{x}\right) = -\ln(x)</math>
Riemann Sum
<math display="block">\ln(x) = \lim_{n\to\infty}\sum_{k=1}^{n}\frac{\Delta x}{x_k^*}</math>
for <math>\Delta x = \frac{x-1}{n}</math> and <math>x_k^*</math> a sample point in each subinterval of <math>[1, x]</math>.
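A small Python sketch of this Riemann-sum approximation (standard library only; right endpoints are used as the sample points, and the helper name is illustrative):
<syntaxhighlight lang="python">
import math

def ln_riemann(x, n=1_000_000):
    """Right-endpoint Riemann sum of 1/t over [1, x] with n equal subintervals."""
    dt = (x - 1) / n
    return sum(dt / (1 + k * dt) for k in range(1, n + 1))

print(ln_riemann(5.0), math.log(5.0))  # agree to about five decimal places
</syntaxhighlight>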
Series representation
The natural logarithm has a well-known Taylor series[5] expansion that converges for <math>x</math> in the open-closed interval <math>(-1, 1]</math>:
<math display="block">\ln(1+x) = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}x^n = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots</math>
Within this interval, for <math>x = 1</math>, the series is conditionally convergent, and for all other values, it is absolutely convergent. For <math>x \leq -1</math> or <math>x > 1</math>, the series does not converge to <math>\ln(1+x)</math>. In these cases, different representations[6] or methods must be used to evaluate the logarithm.
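As a quick illustration, partial sums of this series can be compared against the library logarithm in Python (standard library only; the number of terms is arbitrary):
<syntaxhighlight lang="python">
import math

def ln1p_series(x, terms=100_000):
    """Partial sum of x - x^2/2 + x^3/3 - ... (the Mercator series for ln(1+x))."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

print(ln1p_series(0.5), math.log1p(0.5))  # rapid convergence for |x| < 1
print(ln1p_series(1.0), math.log(2.0))    # x = 1 converges, but only slowly
</syntaxhighlight>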
Harmonic number difference
It is not uncommon in advanced mathematics, particularly in analytic number theory and asymptotic analysis, to encounter expressions involving differences or ratios of harmonic numbers at scaled indices.[7] The identity involving the limiting difference between harmonic numbers at scaled indices and its relationship to the logarithmic function provides an intriguing example of how discrete sequences can asymptotically relate to continuous functions. This identity is expressed as[8]
<math display="block">\lim_{n\to\infty}\left(H_{kn} - H_n\right) = \ln(k),</math>
which characterizes the behavior of harmonic numbers as they grow large. This approximation (which precisely equals <math>\ln(k)</math> in the limit) reflects how summation over increasing segments of the harmonic series exhibits integral properties, giving insight into the interplay between discrete and continuous analysis. It also illustrates how understanding the behavior of sums and series at large scales can lead to insightful conclusions about their properties. Here <math>H_n</math> denotes the <math>n</math>-th harmonic number, defined as
<math display="block">H_n = \sum_{j=1}^{n}\frac{1}{j}.</math>
The harmonic numbers are a fundamental sequence in number theory and analysis, known for their logarithmic growth. This result leverages the fact that the sum of the inverses of integers (i.e., harmonic numbers) can be closely approximated by the natural logarithm function, plus a constant, especially when extended over large intervals.[9][7][10] As <math>n</math> tends towards infinity, the difference between the harmonic numbers <math>H_{kn}</math> and <math>H_n</math> converges to a non-zero value. This persistent non-zero difference, <math>\ln(k)</math>, precludes the possibility of the harmonic series approaching a finite limit, thus providing a clear mathematical articulation of its divergence.[11][12] The technique of approximating sums by integrals (specifically using the integral test or by direct integral approximation) is fundamental in deriving such results. This specific identity can be a consequence of these approximations, considering:
<math display="block">H_{kn} - H_n \approx \int_n^{kn}\frac{1}{x}\,dx = \ln(k).</math>
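A short numerical check of the limit in Python (standard library only; the choice <math>k = 3</math> and the sample values of <math>n</math> are arbitrary):
<syntaxhighlight lang="python">
import math

def H(n):
    """n-th harmonic number, summed from the smallest terms up for accuracy."""
    return sum(1.0 / j for j in range(n, 0, -1))

k = 3
for n in (10, 1_000, 100_000):
    print(n, H(k * n) - H(n), math.log(k))   # difference approaches ln 3 = 1.0986...
</syntaxhighlight>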
Harmonic limit derivation
The limit explores the growth of the harmonic numbers when indices are multiplied by a scaling factor and then differenced. It specifically captures the sum from <math>n+1</math> to <math>kn</math>:
<math display="block">H_{kn} - H_n = \sum_{j=n+1}^{kn}\frac{1}{j}.</math>
This can be estimated using the integral test for convergence, or more directly by comparing it to the integral of <math>1/x</math> from <math>n</math> to <math>kn</math>:
<math display="block">\sum_{j=n+1}^{kn}\frac{1}{j} \approx \int_n^{kn}\frac{1}{x}\,dx = \ln\left(\frac{kn}{n}\right) = \ln(k).</math>
As the window's lower bound begins at and the upper bound extends to , both of which tend toward infinity as , the summation window encompasses an increasingly vast portion of the smallest possible terms of the harmonic series (those with astronomically large denominators), creating a discrete sum that stretches towards infinity, which mirrors how continuous integrals accumulate value across an infinitesimally fine partitioning of the domain. In the limit, the interval is effectively from to where the onset implies this minimally discrete region.
Double series formula
The harmonic number difference formula for <math>H_{2n} - H_n</math> is an extension[8] of the classic, alternating identity of <math>\ln(2)</math>:
<math display="block">\ln(2) = \lim_{n\to\infty}\sum_{k=1}^{n}\left(\frac{1}{2k-1} - \frac{1}{2k}\right),</math>
which can be generalized as the double series over the residues of :
where <math>(k)</math> is the principal ideal generated by <math>k</math>. Subtracting from each term (i.e., balancing each term with the modulus) reduces the magnitude of each term's contribution, ensuring convergence by controlling the series' tendency toward divergence as <math>k</math> increases. For example:
This method leverages the fine differences between closely related terms to stabilize the series. The sum over all residues ensures that adjustments are uniformly applied across all possible offsets within each block of terms. This uniform distribution of the "correction" across different intervals defined by functions similarly to telescoping over a very large sequence. It helps to flatten out the discrepancies that might otherwise lead to divergent behavior in a straightforward harmonic series. Note that the structure of the summands of this formula matches those of the interpolated harmonic number when both the domain and range are negated (i.e., ). However, the interpretation and roles of the variables differ.
Deveci's Proof
A fundamental feature of the proof is the accumulation of the subtrahends into a unit fraction, that is, for , thus rather than , where the extrema of are if and otherwise, with the minimum of being implicit in the latter case due to the structural requirements of the proof. Since the cardinality of depends on the selection of one of two possible minima, the integral , as a set-theoretic procedure, is a function of the maximum (which remains consistent across both interpretations) plus , not the cardinality (which is ambiguous[13][14] due to varying definitions of the minimum). Whereas the harmonic number difference computes the integral in a global sliding window, the double series, in parallel, computes the sum in a local sliding window—a shifting -tuple—over the harmonic series, advancing the window by positions to select the next -tuple, and offsetting each element of each tuple by relative to the window's absolute position. The sum corresponds to which scales without bound. The sum corresponds to the prefix trimmed from the series to establish the window's moving lower bound , and is the limit of the sliding window (the scaled, truncated[15] series):
Integrals of logarithmic functions
To remember higher integrals, it is convenient to define
<math display="block">x^{\left[n\right]} = x^{n}\left(\ln(x) - H_n\right),</math>
where <math>H_n</math> is the <math>n</math>th harmonic number:
<math display="block">x^{\left[0\right]} = \ln x, \qquad x^{\left[1\right]} = x\ln(x) - x, \qquad x^{\left[2\right]} = x^2\ln(x) - \tfrac{3}{2}x^2, \ \ldots</math>
Then:
<math display="block">\frac{d}{dx}\,x^{\left[n\right]} = n x^{\left[n-1\right]}, \qquad \int x^{\left[n\right]}\,dx = \frac{x^{\left[n+1\right]}}{n+1} + C.</math>
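The differentiation rule can be sanity-checked numerically. A minimal Python sketch (standard library only; the helper names and the test point are illustrative):
<syntaxhighlight lang="python">
import math

def H(n):
    return sum(1.0 / j for j in range(1, n + 1))

def x_bracket(x, n):
    """x^[n] = x^n (ln x - H_n); note x^[0] = ln x since H_0 = 0."""
    return x ** n * (math.log(x) - H(n))

# Check d/dx x^[n] = n x^[n-1] with a central finite difference.
x, n, h = 2.5, 4, 1e-6
numeric = (x_bracket(x + h, n) - x_bracket(x - h, n)) / (2 * h)
assert math.isclose(numeric, n * x_bracket(x, n - 1), rel_tol=1e-6)
</syntaxhighlight>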
Approximating large numbers
The identities of logarithms can be used to approximate large numbers. Note that <math>\log_b(a) + \log_b(c) = \log_b(ac)</math>, where <math>a</math>, <math>b</math>, and <math>c</math> are arbitrary constants. Suppose that one wants to approximate the 44th Mersenne prime, <math>2^{32{,}582{,}657} - 1</math>. To get the base-10 logarithm, we would multiply 32,582,657 by <math>\log_{10}(2)</math>, getting <math>9{,}808{,}357.09543</math>. We can then get <math>10^{9{,}808{,}357.09543} \approx 1.25 \times 10^{9{,}808{,}357}</math>.
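The same computation in Python, without ever forming the huge number itself (standard library only; the "-1" is ignored because it only affects the trailing digits):
<syntaxhighlight lang="python">
import math

exponent = 32_582_657
log10_m = exponent * math.log10(2)            # log10 of 2**32582657
digits = math.floor(log10_m) + 1              # 9,808,358 decimal digits
leading = 10 ** (log10_m - math.floor(log10_m))
print(digits, leading)                        # leading digits about 1.2457...
</syntaxhighlight>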
Similarly, factorials can be approximated by summing the logarithms of the terms.
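For instance, the number of digits of 1000! can be found by summing base-10 logarithms, as sketched below in Python (standard library only; the cross-check against math.lgamma is illustrative):
<syntaxhighlight lang="python">
import math

n = 1000
log10_fact = sum(math.log10(k) for k in range(1, n + 1))
print(math.floor(log10_fact) + 1)   # 2568 digits in 1000!

# Cross-check: log10(n!) = lgamma(n + 1) / ln(10)
assert math.isclose(log10_fact, math.lgamma(n + 1) / math.log(10))
</syntaxhighlight>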
Complex logarithm identities
The complex logarithm is the complex number analogue of the logarithm function. No single valued function on the complex plane can satisfy the normal rules for logarithms. However, a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single valued version, called the principal value of the logarithm, can be defined which is discontinuous on the negative x axis, and is equal to the multivalued version on a single branch cut.
Definitions
In what follows, a capital first letter is used for the principal value of functions, and the lower case version is used for the multivalued function. The single valued version of definitions and identities is always given first, followed by a separate section for the multiple valued versions.
- <math>\ln(x)</math> is the standard natural logarithm of the real number <math>x</math>.
- <math>\operatorname{Arg}(z)</math> is the principal value of the arg function; its value is restricted to <math>(-\pi, \pi]</math>. It can be computed using <math>\operatorname{Arg}(x + iy) = \operatorname{atan2}(y, x)</math>.
- <math>\operatorname{Log}(z)</math> is the principal value of the complex logarithm function and has imaginary part in the range <math>(-\pi, \pi]</math>:
<math display="block">\operatorname{Log}(z) = \ln(|z|) + i\operatorname{Arg}(z), \qquad e^{\operatorname{Log}(z)} = z.</math>
The multiple valued version of <math>\log(z)</math> is a set, but it is easier to write it without braces and using it in formulas follows obvious rules.
- <math>\log(z)</math> is the set of complex numbers <math>v</math> which satisfy <math>e^v = z</math>.
- <math>\arg(z)</math> is the set of possible values of the arg function applied to <math>z</math>.
When <math>k</math> is any integer:
<math display="block">\log(z) = \ln(|z|) + i\arg(z)</math>
<math display="block">\log(z) = \operatorname{Log}(z) + 2\pi i k</math>
<math display="block">\arg(z) = \operatorname{Arg}(z) + 2\pi k</math>
Constants
Principal value forms:
<math display="block">\ln(1) = \operatorname{Log}(1) = 0, \qquad \operatorname{Log}(-1) = i\pi</math>
Multiple value forms, for any <math>k</math> an integer:
<math display="block">\log(1) = 0 + 2\pi i k = 2\pi i k, \qquad \log(-1) = i\pi + 2\pi i k = (2k+1)\pi i</math>
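These constants can be checked with Python's cmath module (standard library only; the value of <math>k</math> below is an arbitrary choice of branch):
<syntaxhighlight lang="python">
import cmath, math

print(cmath.log(1))    # 0j: the principal value Log(1)
print(cmath.log(-1))   # pi*i: imaginary part lies in (-pi, pi]

# Other values of log(-1) differ by 2*pi*i*k and still exponentiate back to -1.
k = 3
other = cmath.log(-1) + 2j * math.pi * k     # (2k+1)*pi*i
assert cmath.isclose(cmath.exp(other), -1)
</syntaxhighlight>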
Summation
Principal value forms:[16]
<math display="block">\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2) \pmod{2\pi i}</math>
<math display="block">\operatorname{Log}(z_1) - \operatorname{Log}(z_2) = \operatorname{Log}(z_1/z_2) \pmod{2\pi i}</math>
Multiple value forms:
<math display="block">\log(z_1) + \log(z_2) = \log(z_1 z_2)</math>
<math display="block">\log(z_1) - \log(z_2) = \log(z_1/z_2)</math>
Powers
A complex power of a complex number can have many possible values.
Principal value form:
<math display="block">z^w = e^{w\operatorname{Log}(z)}</math>
<math display="block">\operatorname{Log}(z^w) = w\operatorname{Log}(z) \pmod{2\pi i}</math>
Multiple value forms:
<math display="block">z^w = e^{w\log(z)}</math>
Where <math>k_1</math>, <math>k_2</math> are any integers:
<math display="block">\log(z^w) = w\log(z) + 2\pi i k_2 = w\operatorname{Log}(z) + 2\pi i\left(w k_1 + k_2\right)</math>
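A brief Python illustration of multi-valued complex powers (standard library cmath; the choice <math>z = w = i</math> and the branch indices are arbitrary):
<syntaxhighlight lang="python">
import cmath, math

z, w = 1j, 1j

principal = cmath.exp(w * cmath.log(z))   # e^{w Log z} = e^{-pi/2}, about 0.2079
print(principal, 1j ** 1j)                # Python's ** also uses the principal branch

# Other values of z**w come from other branches of log:
for k in (-1, 0, 1):
    print(k, cmath.exp(w * (cmath.log(z) + 2j * math.pi * k)))
</syntaxhighlight>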
Asymptotic identities
Pronic numbers
As a consequence of the harmonic number difference, the natural logarithm is asymptotically approximated by a finite series difference,[8] representing a truncation of the integral at <math>n(n+1)</math>:
<math display="block">\ln(n+1) \sim H_{n(n+1)} - H_n,</math>
where <math>\tfrac{n(n+1)}{2}</math> is the <math>n</math>th triangular number, and <math>n(n+1)</math> is the sum of the first <math>n</math> even integers. Since the <math>n</math>th pronic number is asymptotically equivalent to the <math>n</math>th perfect square, it follows that:
<math display="block">\ln(n) \sim H_{n^2} - H_n.</math>
Prime number theorem
The prime number theorem provides the following asymptotic equivalence:
<math display="block">\frac{n}{\pi(n)} \sim \ln(n),</math>
where <math>\pi(n)</math> is the prime counting function. This relationship is equal to:[8]
<math display="block">\pi(n) \sim \frac{n}{H_n},</math>
where <math>\tfrac{n}{H_n}</math> is the harmonic mean of <math>1, 2, \ldots, n</math>. This is derived from the fact that the difference between the <math>n</math>th harmonic number and <math>\ln(n)</math> asymptotically approaches a small constant, resulting in <math>H_n \sim \ln(n)</math>. This behavior can also be derived from the properties of logarithms: <math>\ln\sqrt{n}</math> is half of <math>\ln(n)</math>, and this "first half" is the natural log of the root of <math>n</math>, which corresponds roughly to the first <math>\sqrt{n}</math>th of the sum <math>H_n</math>, or <math>H_{\sqrt{n}}</math>. The asymptotic equivalence of the first <math>\sqrt{n}</math>th of the series to the latter portion of the series is expressed as follows:
which generalizes to:
and:
for fixed . The correspondence sets as a unit magnitude that partitions across powers, where each interval to , to , etc., corresponds to one unit, illustrating that forms a divergent series as .
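A rough numerical illustration of the prime-counting relationships above, using a simple sieve written for this example (standard library only; convergence of these equivalences is notoriously slow, so the agreement at n = 10^6 is only approximate):
<syntaxhighlight lang="python">
import math

def prime_count(n):
    """pi(n) via a basic sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sum(sieve)

n = 10 ** 6
H_n = sum(1.0 / j for j in range(1, n + 1))
print(prime_count(n), n / math.log(n), n / H_n)   # 78498 vs ~72382 vs ~69482
</syntaxhighlight>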
Real Arguments
These approximations extend to the real-valued domain through the interpolated harmonic number. For example, where :
Stirling numbers
The natural logarithm is asymptotically related to the harmonic numbers by the Stirling numbers[17] and the Gregory coefficients.[18] By representing <math>H_n</math> in terms of Stirling numbers of the first kind, the harmonic number difference is alternatively expressed as follows, for fixed <math>k</math>:
<math display="block">\frac{s(n^k+1, 2)}{(n^k)!} - \frac{s(n+1, 2)}{n!} \sim (k-1)\ln(n+1).</math>
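The underlying identity here is <math>H_n = s(n+1, 2)/n!</math> in the unsigned convention, which can be verified with a small Python sketch (standard library only; the recurrence-based helper is written for this example):
<syntaxhighlight lang="python">
import math

def stirling1_unsigned(n, k):
    """Unsigned Stirling number of the first kind, via c(m, j) = (m-1) c(m-1, j) + c(m-1, j-1)."""
    row = [1] + [0] * k                    # row for m = 0
    for m in range(1, n + 1):
        new = [0] * (k + 1)
        for j in range(1, k + 1):
            new[j] = (m - 1) * row[j] + row[j - 1]
        row = new
    return row[k]

# H_n = s(n+1, 2) / n!
for n in (3, 5, 10):
    lhs = stirling1_unsigned(n + 1, 2) / math.factorial(n)
    rhs = sum(1 / j for j in range(1, n + 1))
    print(n, lhs, rhs)
</syntaxhighlight>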
See also
References
External links
- Logarithm in Mathwords
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ To extend the utility of the Mercator series beyond its conventional bounds one can calculate for and and then negate the result, , to derive . For example, setting yields .
- ↑ a b See page 117, and the VI.8 definition of shifted harmonic numbers on page 389.
- ↑ a b c d See Theorem 5.2 on pages 22–23.
- ↑ Script error: No such module "citation/CS1".
- ↑ See formula 13.
- ↑ See Proofs 23 and 24 for details on the relationship between harmonic numbers and logarithmic functions.
- ↑ Script error: No such module "Citation/CS1".
- ↑ A synopsis on the nature of 0 which frames the choice of minimum as the dichotomy between ordinals and cardinals.
- ↑ See section 3.1.
- ↑ The shift is characteristic of the right Riemann sum employed to prevent the integral from degenerating into the harmonic series, thereby averting divergence. Here, functions analogously, serving to regulate the series. The successor operation signals the implicit inclusion of the modulus (the region omitted from ). The importance of this, from an axiomatic perspective, becomes evident when the residues of are formulated as , where is bootstrapped by to produce the residues of modulus . Consequently, represents a limiting value in this context.
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Comtet, Louis (1974). Advanced Combinatorics. Kluwer.