Iterated logarithm
In computer science, the iterated logarithm of $n$, written $\log^* n$ (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to $1$.[1] The simplest formal definition is the result of this recurrence relation:

$$\log^* n := \begin{cases} 0 & \text{if } n \le 1; \\ 1 + \log^*(\log n) & \text{if } n > 1. \end{cases}$$
In computer science, $\lg^* n$ is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base $2$) instead of the natural logarithm (with base $e$). Mathematically, the iterated logarithm is well defined for any base greater than $e^{1/e} \approx 1.444$, not only for base $2$ and base $e$. The "super-logarithm" function $\mathrm{slog}_b(n)$ is "essentially equivalent" to the base-$b$ iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.[2]
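The recurrence translates directly into a short program. Below is a minimal sketch in Python; the function names (`lg_star`, `log_star`) are illustrative choices rather than notation from the cited sources, and `math.log2` is used for the binary case to avoid the rounding error of `math.log(n, 2)`.

```python
import math

def lg_star(n):
    """Binary iterated logarithm: how many times log2 must be applied
    before the result is less than or equal to 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

def log_star(n, base=math.e):
    """Iterated logarithm for an arbitrary base (well defined for base > e**(1/e))."""
    count = 0
    while n > 1:
        n = math.log(n, base)
        count += 1
    return count

print(lg_star(16), lg_star(65536), lg_star(2 ** 65536))  # 3 4 5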
Analysis of algorithms
The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:
- Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized $O(n \log^* n)$ time.[3]
- Fürer's algorithm for integer multiplication: $O(n \log n \, 2^{O(\log^* n)})$.
- Finding an approximate maximum (element at least as large as the median): $\log^* n - 1 \pm 3$ parallel operations.[4]
- Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: $O(\log^* n)$ synchronous communication rounds.[5] (A sketch of the underlying color-reduction step follows the list.)
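To give a flavor of how a $\log^*$ bound arises, the following Python sketch simulates, sequentially, the bit-comparison color-reduction step that underlies Cole and Vishkin's algorithm on a directed cycle; the setup and names (`cole_vishkin_round`, `succ`) are illustrative and not taken from the cited paper.

```python
def cole_vishkin_round(colors, succ):
    """One synchronous round: each node compares its color with its
    successor's, finds the lowest bit position i where they differ, and
    adopts 2*i + (its own bit at position i) as its new color.  A proper
    coloring of the cycle remains proper after the round."""
    new = {}
    for v, c in colors.items():
        d = c ^ colors[succ[v]]        # nonzero while the coloring is proper
        i = (d & -d).bit_length() - 1  # index of the lowest differing bit
        new[v] = 2 * i + ((c >> i) & 1)
    return new

# Toy simulation on a directed n-cycle, using the unique node IDs as the
# initial coloring; the number of rounds needed grows like log* n.
n = 100_000
succ = {v: (v + 1) % n for v in range(n)}
colors = {v: v for v in range(n)}
rounds = 0
while max(colors.values()) > 5:        # stop once at most 6 colors remain
    colors = cole_vishkin_round(colors, succ)
    rounds += 1
print(rounds, len(set(colors.values())))  # a handful of rounds, at most 6 colors
```

Reducing from 6 colors to 3 takes only a constant number of additional rounds, which is why the overall bound is $O(\log^* n)$.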
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself or any fixed number of repeated applications of it. This is because tetration,

$${}^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y},$$

grows much faster than an iterated exponential of any fixed height, so its inverse grows much more slowly than any fixed iterate of the logarithm:

$$\log_b^* x \ll \underbrace{\log_b \log_b \cdots \log_b}_{k} x \quad\text{for every fixed } k.$$
For all values of $n$ relevant to counting the running times of algorithms implemented in practice (i.e., $n \le 2^{65536}$, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.
| $x$ | $\lg^* x$ |
|---|---|
| $(-\infty, 1]$ | 0 |
| $(1, 2]$ | 1 |
| $(2, 4]$ | 2 |
| $(4, 16]$ | 3 |
| $(16, 65536]$ | 4 |
| $(65536, 2^{65536}]$ | 5 |
Higher bases give smaller iterated logarithms.
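As a quick numerical check of the table and of the remark about higher bases, reusing the `lg_star`/`log_star` sketch from above:

```python
# lg* at the right endpoint of each interval in the table:
print([lg_star(x) for x in (1, 2, 4, 16, 65536, 2 ** 65536)])  # [0, 1, 2, 3, 4, 5]

# A higher base gives a smaller (or equal) iterated logarithm:
print(lg_star(10 ** 80), log_star(10 ** 80, base=16))  # 5 3
```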
Other applications
The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is $O(\log^* n)$.
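For example, additive persistence can be computed directly; the helper below is an illustrative sketch (the function name is not from the cited sources):

```python
def additive_persistence(n):
    """Number of times n must be replaced by its digit sum before a
    single digit (its digital root) is reached."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        steps += 1
    return steps

print(additive_persistence(199))          # 199 -> 19 -> 10 -> 1, so 3
print(additive_persistence(7 * 10 ** 80)) # still tiny, as the O(log* n) bound suggests
```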
In computational complexity theory, Santhanam[6] shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a non-deterministic Turing machine) are distinct up to $n\sqrt{\log^* n}$.
See also
- Inverse Ackermann function, an even more slowly growing function also used in computational complexity theory