Computational learning theory

From Wikipedia, the free encyclopedia

In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.[1]

Overview

Theoretical results in machine learning often focus on a type of inductive learning known as supervised learning. In supervised learning, an algorithm is provided with labeled samples. For instance, the samples might be descriptions of mushrooms, with labels indicating whether they are edible or not. The algorithm uses these labeled samples to create a classifier. This classifier assigns labels to new samples, including those it has not previously encountered. The goal of the supervised learning algorithm is to optimize performance metrics, such as minimizing errors on new samples.
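
As a concrete (and deliberately simplified) illustration of this setting, the following sketch induces a nearest-neighbour classifier from labeled samples, assuming NumPy is available; the feature encoding and data are hypothetical, chosen only to mirror the mushroom example:

    import numpy as np

    # Labeled samples: each row encodes a mushroom as (cap diameter in cm,
    # odor score in [0, 1]); each label marks it edible (1) or poisonous (0).
    # Both the features and the values are made up for illustration.
    X_train = np.array([[5.0, 0.10], [7.5, 0.20], [3.0, 0.90], [4.5, 0.80]])
    y_train = np.array([1, 1, 0, 0])

    def classify(x):
        """Label a new sample with the label of the nearest training sample."""
        distances = np.linalg.norm(X_train - x, axis=1)
        return y_train[np.argmin(distances)]

    # The induced classifier assigns a label to a sample never seen in training.
    print(classify(np.array([6.0, 0.15])))  # prints 1 (predicted edible)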

In addition to performance bounds, computational learning theory studies the time complexity and feasibility of learning.[citation needed] In computational learning theory, a computation is considered feasible if it can be done in polynomial time.[citation needed] There are two kinds of time complexity results:

  • Positive results – showing that a certain class of functions is learnable in polynomial time;
  • Negative results – showing that certain classes cannot be learned in polynomial time.
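
For concreteness, positive results are typically stated in the standard PAC form (a textbook formulation, not a claim specific to this article): the algorithm must, with probability at least 1 − δ, output a hypothesis whose error is at most ε, while its sample size m and running time t are polynomially bounded,

    m,\ t \;\le\; \operatorname{poly}\!\left(\frac{1}{\varepsilon},\ \frac{1}{\delta},\ n,\ \operatorname{size}(c)\right),

where n is the length of each example and size(c) is the representation size of the target concept c.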

Negative results often rely on commonly believed, but yet unproven assumptions,[citation needed] such as:

  • Computational complexity – P ≠ NP (the P versus NP problem);
  • Cryptographic – one-way functions exist.

There are several different approaches to computational learning theory based on making different assumptions about the inference principles used to generalise from limited data. This includes different definitions of probability (see frequency probability, Bayesian probability) and different assumptions on the generation of samples.[citation needed] The different approaches include:

  • Exact learning, proposed by Dana Angluin;[2][3]
  • Probably approximately correct learning (PAC learning), proposed by Leslie Valiant (see the sample-complexity sketch after this list);[4]
  • VC theory, proposed by Vladimir Vapnik and Alexey Chervonenkis;[5]
  • Inductive inference as developed by Ray Solomonoff;[6]
  • Algorithmic learning theory, from the work of E. Mark Gold;[7]
  • Online machine learning, from the work of Nick Littlestone.[8]
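
To make the PAC approach concrete, the following sketch computes the classic sample-complexity bound m ≥ (1/ε)(ln|H| + ln(1/δ)), which holds for a finite hypothesis class H and a learner that outputs any hypothesis consistent with the training data; the hypothesis-class example is an assumption chosen for illustration:

    import math

    def pac_sample_bound(hypothesis_count, eps, delta):
        """Samples sufficient so that, with probability at least 1 - delta,
        every hypothesis consistent with the data has true error <= eps."""
        return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / eps)

    # Example: boolean conjunctions over n = 10 variables (|H| = 3**10 + 1),
    # target accuracy eps = 0.1, confidence delta = 0.05.
    print(pac_sample_bound(3**10 + 1, eps=0.1, delta=0.05))  # prints 140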

While its primary goal is to understand learning abstractly, computational learning theory has led to the development of practical algorithms. For example, PAC theory inspired boosting, VC theory led to support vector machines, and Bayesian inference led to belief networks.
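
As an illustration of the boosting idea that grew out of PAC theory, here is a minimal from-scratch AdaBoost sketch over decision stumps (a toy, not the original formulation or a production implementation; labels are assumed to be in {−1, +1} and NumPy available):

    import numpy as np

    def train_adaboost(X, y, n_rounds=10):
        """Return a list of weak learners (feature, threshold, polarity, alpha)."""
        n_samples, n_features = X.shape
        w = np.full(n_samples, 1.0 / n_samples)       # per-example weights
        ensemble = []
        for _ in range(n_rounds):
            best = None   # (weighted error, feature, threshold, polarity, predictions)
            for j in range(n_features):
                for thr in np.unique(X[:, j]):
                    for polarity in (1, -1):
                        pred = polarity * np.where(X[:, j] <= thr, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, thr, polarity, pred)
            err, j, thr, polarity, pred = best
            err = max(err, 1e-12)                      # guard against division by zero
            alpha = 0.5 * np.log((1.0 - err) / err)    # vote weight of this stump
            w *= np.exp(-alpha * y * pred)             # upweight misclassified samples
            w /= w.sum()
            ensemble.append((j, thr, polarity, alpha))
        return ensemble

    def predict_adaboost(ensemble, X):
        """Predict {-1, +1} labels by the sign of the weighted stump votes."""
        votes = sum(alpha * polarity * np.where(X[:, j] <= thr, 1, -1)
                    for j, thr, polarity, alpha in ensemble)
        return np.sign(votes)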

See also

  • Grammar induction
  • Information theory
  • Stability (learning theory)
  • Error tolerance (PAC learning)

References

  1. ACL – Association for Computational Learning.
  2. Angluin, D. (1976). An Application of the Theory of Computational Complexity to the Study of Inductive Inference (Ph.D. thesis). University of California at Berkeley.
  3. Angluin, D. (1978). "On the Complexity of Minimum Inference of Regular Sets". Information and Control. 39 (3): 337–350.
  4. Valiant, Leslie (1984). "A Theory of the Learnable". Communications of the ACM. 27 (11): 1134–1142. doi:10.1145/1968.1972.
  5. Vapnik, V.; Chervonenkis, A. (1971). "On the uniform convergence of relative frequencies of events to their probabilities". Theory of Probability and Its Applications. 16 (2): 264–280. doi:10.1137/1116025.
  6. Solomonoff, Ray (1964). "A formal theory of inductive inference. Part I". Information and Control. 7 (1): 1–22.
  7. Gold, E. Mark (1967). "Language identification in the limit". Information and Control. 10 (5): 447–474.
  8. Littlestone, Nick (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm". Machine Learning. 2 (4): 285–318.

Further reading

A description of some of these publications is given at important publications in machine learning.

Surveys

  • Angluin, D. 1992. Computational learning theory: Survey and selected bibliography. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing (May 1992), pages 351–369. http://portal.acm.org/citation.cfm?id=129712.129746
  • D. Haussler. Probably approximately correct learning. In AAAI-90 Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA, pages 1101–1108. American Association for Artificial Intelligence, 1990. http://citeseer.ist.psu.edu/haussler90probably.html

Feature selection

  • A. Dhagat and L. Hellerstein, "PAC learning with irrelevant attributes", in Proceedings of the IEEE Symp. on Foundation of Computer Science, 1994.

Optimal O notation learning

  • Oded Goldreich, Dana Ron. On universal learning algorithms.

Negative results

  • M. Kearns and Leslie Valiant. 1989. Cryptographic limitations on learning boolean formulae and finite automata. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pages 433–444, New York. ACM.

Error tolerance

  • Michael Kearns and Ming Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4):807–837, August 1993.
  • Kearns, M. (1993). Efficient noise-tolerant learning from statistical queries. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 392–401.

Equivalence

  • D. Haussler, M. Kearns, N. Littlestone and M. Warmuth, Equivalence of models for polynomial learnability, Proc. 1st ACM Workshop on Computational Learning Theory, (1988) 42–55.
  • Pitt, L.; Warmuth, M. K. (1990). "Prediction-Preserving Reducibility". Journal of Computer and System Sciences. 41 (3): 430–467.
