ELIZA effect
In computer science, the ELIZA effect is the tendency to project human traits, such as experience, semantic comprehension, or empathy, onto rudimentary computer programs with a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and explanations of its limitations.
History
The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum.[1] When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the "patient"'s replies as questions:[2]
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
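ELIZA produced such replies by matching each input against a small set of keyword patterns and reassembling the matched fragments into a canned response, with first- and second-person words swapped. The following is a minimal sketch of that decomposition-and-reassembly idea in Python; it is not Weizenbaum's original implementation (which was written in MAD-SLIP), and the particular patterns, reflections, and templates here are invented for illustration.

```python
import re

# Pronoun "reflections": swap first and second person so the reply reads naturally.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# Decomposition/reassembly rules: (keyword pattern, response template).
# Checked in order; the last rule is a catch-all.
RULES = [
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.+) made me (.+)", re.I), "Your {0} made you {1}?"),
    (re.compile(r".*", re.I), "Please go on."),
]

def reflect(fragment: str) -> str:
    """Swap person-words in a matched fragment ("me come here" -> "you come here")."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's template, filled with reflected fragments."""
    cleaned = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(cleaned)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."  # unreachable given the catch-all rule, kept for clarity

print(respond("My boyfriend made me come here."))  # Your boyfriend made you come here?
print(respond("I am unhappy."))                    # How long have you been unhappy?
```

Even this toy version reproduces the opening exchange of the transcript above, which is exactly why such shallow pattern matching can read as understanding.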
Though designed strictly as a mechanism to support "natural language conversation" with a computer,[3] ELIZA's DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program's output.[4] As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[5] Indeed, ELIZA's code had not been designed to evoke this reaction in the first place. Upon observation, researchers discovered users unconsciously assuming ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.[6]
In the 19th century, the tendency to describe mechanical operations in psychological terms had already been noted by Charles Babbage. In proposing what would later be called a carry-lookahead adder, Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant.[7]
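The mechanism Babbage described anticipates the carry-lookahead adders used in modern hardware, which compute "generate" and "propagate" signals so that carries need not ripple stage by stage. The sketch below, in Python rather than a hardware description, is only an assumed illustration of that idea; the anthropomorphic phrasing in the comments ("anticipating" the carry) is the kind of psychological shorthand Babbage had in mind.

```python
def carry_lookahead_add(a: int, b: int, width: int = 4) -> int:
    """Add two `width`-bit numbers using generate/propagate (carry-lookahead) terms."""
    g = [(a >> i) & (b >> i) & 1 for i in range(width)]    # generate:  both bits are 1
    p = [((a >> i) | (b >> i)) & 1 for i in range(width)]  # propagate: at least one bit is 1
    c = [0] * (width + 1)
    for i in range(width):
        # Each carry is a pure Boolean function of the g/p terms; describing the
        # circuit as "anticipating" or "knowing" the carry is only a figure of speech.
        c[i + 1] = g[i] | (p[i] & c[i])
    s = [((a >> i) ^ (b >> i) ^ c[i]) & 1 for i in range(width)]
    return sum(bit << i for i, bit in enumerate(s)) | (c[width] << width)

assert carry_lookahead_add(9, 7) == 16
```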
Characteristics
In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers".[8] A trivial example of the specific form of the Eliza effect, given by Douglas Hofstadter, involves an automated teller machine which displays the words "THANK YOU" at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols.[8]
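To make the distinction concrete, here is a minimal, assumed sketch (in Python; the function name and messages are invented for illustration) of the kind of logic behind such a display: the "THANK YOU" is a constant in the program, not a report of any internal state resembling gratitude.

```python
def atm_dispense(amount_requested: int, balance: int) -> str:
    """Return the message an ATM screen would show after a withdrawal attempt."""
    if amount_requested > balance:
        return "INSUFFICIENT FUNDS"
    # A preprogrammed string of symbols; nothing in the program models gratitude.
    return "THANK YOU"

print(atm_dispense(20, 100))  # THANK YOU
```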
More generally, the ELIZA effect describes any situation[9][10] where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve"[11] or "assume that [outputs] reflect a greater causality than they actually do".[12] In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of output produced by the system.
From a psychological standpoint, the ELIZA effect is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior towards the output of the program.[13]
Significance
The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test.[14]
ELIZA convinced some users that a machine was human. This shift in human-machine interaction marked progress in technologies that emulate human behavior. William Meisel distinguishes two groups of chatbots: "general personal assistants" and "specialized digital assistants".[15] General digital assistants have been integrated into personal devices, with skills such as sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks".[15] Weizenbaum held that not every aspect of human thought can be reduced to logical formalism, and that "there are some acts of thought that ought to be attempted only by humans".[16]
See also
Script error: No such module "Portal".
- Chatbot psychosis
- Chinese Room
- Cold reading
- Duck test
- Intentional stance
- Loebner Prize
- Philosophical zombie
- Uncanny valley
References
Further reading
- Hofstadter, Douglas (1995). "Preface 4: The Ineradicable Eliza Effect and Its Dangers". In Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
- Turkle, Sherry (1997). "Eliza Effect: tendency to accept computer responses as more intelligent than they really are". In Life on the Screen: Identity in the Age of the Internet. London: Phoenix Paperback.
- ↑ Berry, David (2023). "The Limits of Computation: Joseph Weizenbaum and the ELIZA Chatbot". Weizenbaum Journal of the Digital Society. 3 (3). doi:10.34669/WI.WJDS/3.3.2. ISSN 2748-5625.
- ↑ Güzeldere, Güven; Franchi, Stefano. "dialogues with colorful personalities of early ai". Archived from the original on 2011-04-25. Retrieved 2007-07-30.
- ↑ Weizenbaum, Joseph (January 1966). "ELIZA—A Computer Program For the Study of Natural Language Communication Between Man and Machine". Communications of the ACM. 9: 36. doi:10.1145/365153.365168.
- ↑ Suchman, Lucy A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press. p. 24. ISBN 978-0-521-33739-7.
- ↑ Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgement to Calculation. W. H. Freeman. p. 7. ISBN 978-0716704645.
- ↑ Billings, Lee (2007-07-16). "Rise of Roboethics". Seed. Archived from the original on 2009-02-28.
- ↑ Green, Christopher D. (February 2005). "Was Babbage's Analytical Engine an Instrument of Psychological Research?". History of Psychology. 8 (1): 35–45. doi:10.1037/1093-4510.8.1.35. PMID 16021763.
- ↑ a b Hofstadter, Douglas (1995). "Preface 4: The Ineradicable Eliza Effect and Its Dangers". Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Trappl, Robert; Petta, Paolo; Payr, Sabine (2002). Emotions in Humans and Artifacts. Cambridge, Mass.: MIT Press. p. 353. ISBN 978-0-262-20142-1.
- ↑ a b Dale, Robert (September 2016). "The return of the chatbots". Natural Language Engineering. 22 (5): 811–817. doi:10.1017/S1351324916000243. ISSN 1351-3249.
- ↑ Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman. ISBN 0-7167-0464-1. OCLC 1527521.