Language model

A language model is a model of the human brain's ability to produce natural language.[1][2] Language models are useful for a variety of tasks, including speech recognition,[3] machine translation,[4] natural language generation (generating more human-like text), optical character recognition, route optimization,[5] handwriting recognition,[6] grammar induction,[7] and information retrieval.[8][9]

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

History

Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars.[10]

In the 1980s, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations such as word n-gram language models, which assign probabilities to discrete combinations of words, brought significant advances.

In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations.[11] Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words closer together in the vector space are expected to be similar in meaning; such representations can also capture common relationships between pairs of words, such as plurality or gender.

Pure statistical models

In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.[12]

Models based on word n-grams

A word n-gram language model treats language as a Markov process: the probability of each word is conditioned only on the preceding n − 1 words, and these conditional probabilities are estimated from counts of word sequences in a training corpus, usually combined with smoothing so that unseen sequences do not receive zero probability.
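For illustration, a minimal sketch of a bigram (n = 2) model estimated by maximum likelihood from raw counts; the toy corpus is invented, and real systems add smoothing so that unseen word pairs do not receive zero probability.

```python
from collections import Counter, defaultdict

def train_bigram_model(sentences):
    """Estimate P(next_word | word) by maximum likelihood from raw counts."""
    bigram_counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        for prev, curr in zip(tokens, tokens[1:]):
            bigram_counts[prev][curr] += 1
    # Normalize the counts into conditional probabilities.
    model = {}
    for prev, counter in bigram_counts.items():
        total = sum(counter.values())
        model[prev] = {word: count / total for word, count in counter.items()}
    return model

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
model = train_bigram_model(corpus)
print(model["the"])   # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(model["sat"])   # {'on': 1.0}
```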

Exponential

Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is

P(w_m \mid w_1, \ldots, w_{m-1}) = \frac{1}{Z(w_1, \ldots, w_{m-1})} \exp\!\big(a^{\top} f(w_1, \ldots, w_m)\big)

where Z(w_1, \ldots, w_{m-1}) is the partition function, a is the parameter vector, and f(w_1, \ldots, w_m) is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a or some form of regularization.
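A minimal sketch of this log-linear formulation, assuming binary indicator features over bigrams; the feature names and weights below are purely illustrative and not drawn from any particular system.

```python
import math

def maxent_probability(history, candidates, features, weights):
    """P(w | history) = exp(a . f(history, w)) / Z(history),
    where Z sums the numerator over all candidate words."""
    def score(word):
        # a^T f(w_1, ..., w_m): sum the weights of the active (fired) features.
        return sum(weights[name] for name in features(history, word) if name in weights)
    unnormalized = {w: math.exp(score(w)) for w in candidates}
    z = sum(unnormalized.values())          # partition function Z(history)
    return {w: value / z for w, value in unnormalized.items()}

# Hypothetical indicator features: fire when a specific bigram is present.
def features(history, word):
    return [f"bigram:{history[-1]}_{word}"]

weights = {"bigram:sat_on": 2.0, "bigram:sat_down": 1.0}   # illustrative parameter vector a
print(maxent_probability(["the", "cat", "sat"], ["on", "down", "up"], features, weights))
# {'on': ~0.66, 'down': ~0.24, 'up': ~0.09}
```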

The log-bilinear model is another example of an exponential language model.

Skip-gram model

The skip-gram model, introduced as part of word2vec, learns continuous word embeddings by training a shallow neural network to predict the words that appear in a context window around a given center word; words that occur in similar contexts therefore receive similar vectors.
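As an illustration, a sketch of training skip-gram embeddings, assuming the gensim library (sg=1 selects the skip-gram objective); the toy corpus and hyperparameters are illustrative only.

```python
from gensim.models import Word2Vec  # assumes the gensim library is installed

# Toy corpus: each sentence is a list of tokens.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["the", "cat", "chased", "the", "dog"],
]

# sg=1 selects the skip-gram training objective (predict context words
# from the center word); sg=0 would select CBOW instead.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=200)

print(model.wv["cat"][:5])              # first few dimensions of the embedding
print(model.wv.most_similar("cat"))     # nearest neighbours in the vector space
```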

Neural models

Recurrent neural network

Continuous representations or embeddings of words are produced in recurrent neural network-based language models (also known as continuous space language models).[13] Such continuous space embeddings help to alleviate the curse of dimensionality: because the number of possible word sequences grows exponentially with the size of the vocabulary, count-based models suffer from data sparsity. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.[14]
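A minimal sketch of such a model in PyTorch, assuming an illustrative vocabulary size and layer dimensions: an embedding layer provides the continuous word representations, and an LSTM produces a distribution over the next word.

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Embed each word, run an LSTM over the sequence, and score the next word."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # continuous word representations
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)        # logits over the vocabulary

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)        # (batch, seq_len, embed_dim)
        hidden_states, _ = self.lstm(embedded)      # (batch, seq_len, hidden_dim)
        return self.output(hidden_states)           # (batch, seq_len, vocab_size)

# Illustrative usage: a batch of 2 sequences of 5 token ids from a 1000-word vocabulary.
model = RNNLanguageModel(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 5))
logits = model(tokens)
next_word_probs = torch.softmax(logits[:, -1, :], dim=-1)  # distribution over the next word
print(next_word_probs.shape)   # torch.Size([2, 1000])
```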

Large language models

Large language models are deep neural networks, typically based on the transformer architecture, with parameter counts ranging into the billions, trained on massive text corpora with self-supervised objectives such as next-word prediction; after such pre-training they can be adapted to a wide range of language tasks.
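For illustration only, a sketch of next-word generation with a pretrained transformer, assuming the Hugging Face transformers library and using the small GPT-2 model as a stand-in for much larger models.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer  # assumes Hugging Face transformers

# GPT-2 stands in here for much larger models; the usage pattern is the same.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("A language model is", return_tensors="pt")
# Sample a continuation: the model repeatedly predicts a distribution over
# the next token, and a token is drawn from that distribution.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```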

Although large language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not learn, but fail to learn patterns that humans typically do learn.[15]

Evaluation and benchmarks

Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.[16]
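One widely used intrinsic measure is perplexity, the exponentiated average negative log-probability the model assigns to held-out text; the sketch below assumes the model is available as a function returning P(word | history).

```python
import math

def perplexity(prob_fn, tokens):
    """Perplexity = exp(-(1/N) * sum_i log P(w_i | w_1..w_{i-1})).
    Lower values mean the model finds the held-out text less surprising."""
    log_prob_sum = 0.0
    for i, word in enumerate(tokens):
        p = prob_fn(tokens[:i], word)   # model's probability for this word given its history
        log_prob_sum += math.log(p)
    return math.exp(-log_prob_sum / len(tokens))

# Illustrative model: a uniform distribution over a 1000-word vocabulary.
uniform = lambda history, word: 1.0 / 1000
print(perplexity(uniform, ["the", "cat", "sat"]))   # 1000.0 for the uniform model
```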

Various data sets have been developed for use in evaluating language processing systems.[17] These include:

  • Massive Multitask Language Understanding (MMLU)[18]
  • Corpus of Linguistic Acceptability[19]
  • GLUE benchmark[20]
  • Microsoft Research Paraphrase Corpus[21]
  • Multi-Genre Natural Language Inference
  • Question Natural Language Inference
  • Quora Question Pairs[22]
  • Recognizing Textual Entailment[23]
  • Semantic Textual Similarity Benchmark
  • Stanford Question Answering Dataset (SQuAD)[24]
  • Stanford Sentiment Treebank[25]
  • Winograd NLI
  • BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs[26]


References

  1. "LLMs are supposed to model how utterances behave."
  2. Script error: No such module "citation/CS1".
  3. Kuhn, Roland, and Renato De Mori (1990). "A cache-based natural language model for speech recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (6): 570–583.
  4. Andreas, Jacob, Andreas Vlachos, and Stephen Clark (2013). "Semantic parsing as machine translation". Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
  5. Script error: No such module "Citation/CS1".
  6. Pham, Vu, et al. (2014). "Dropout improves recurrent neural networks for handwriting recognition". 14th International Conference on Frontiers in Handwriting Recognition. IEEE.
  7. Htut, Phu Mon, Kyunghyun Cho, and Samuel R. Bowman (2018). "Grammar induction with neural language models: An unusual replication". arXiv:1808.10000.
  8. Script error: No such module "citation/CS1".
  9. Script error: No such module "citation/CS1".
  10. Script error: No such module "Citation/CS1".
  11. Script error: No such module "citation/CS1".
  12. Script error: No such module "Citation/CS1".
  13. Script error: No such module "citation/CS1".
  14. Script error: No such module "citation/CS1".
  15. Script error: No such module "citation/CS1".
  16. Script error: No such module "citation/CS1".
  17. Script error: No such module "citation/CS1".
  18. Script error: No such module "citation/CS1".
  19. Script error: No such module "citation/CS1".
  20. Script error: No such module "citation/CS1".
  21. Script error: No such module "citation/CS1".
  22. Script error: No such module "citation/CS1".
  23. Script error: No such module "citation/CS1".
  24. Script error: No such module "citation/CS1".
  25. Script error: No such module "citation/CS1".
  26. Script error: No such module "citation/CS1".