Remembering Eliza, one of the first chatbots: Lessons, warnings it holds for AI today

Indian Express | 02-06-2025
In 1966, at a lab at the Massachusetts Institute of Technology (MIT), computer scientist Joseph Weizenbaum unveiled one of the first chatbots in history: Eliza.
It ran on a computer that was among the most advanced at MIT at the time — the IBM 7090 — and could be accessed through a typewriter-like terminal.
Eliza had different 'scripts' — or ways of interacting — and could mimic a math teacher, poetry teacher or a quiz master, among other things. But its most famous script was called DOCTOR, which emulated a therapist.
Weizenbaum would later write about the anthropomorphisation of ELIZA, which, in his own words, led him to 'attach new importance to questions of the relationship between the individual and the computer'.
Eventually, the myth-making around it reached such an extent that the tendency to attribute human qualities to computers came to be known as the ELIZA effect.
Weizenbaum himself later warned against excessive reliance on computers, arguing that no matter how impressive the machines seemed, what they pulled off could not amount to real understanding.
These concerns, and the debates that followed, still matter today as we navigate a world with rapidly developing Artificial Intelligence (AI) tools.
Weizenbaum was Jewish and fled Nazi Germany with his parents, arriving in the United States in the mid-1930s.
In 1955, Weizenbaum was part of a team at American conglomerate General Electric that automated some key banking operations for the first time.
He also developed a programming language called SLIP, or 'Symmetric List Processor'. It was part of an approach that let computers work with sentences and symbols, rather than only with numbers, as computing had done until then.
Weizenbaum was invited to join MIT's Project MAC, a Computer Science lab. Among other things, it was the first to build an interactive time-sharing system, where multiple users could use a single computer system simultaneously.
Weizenbaum built ELIZA at the New England university between 1964 and 1966.
What Eliza did
In the introduction to Computer Power and Human Reason: From Judgment to Calculation (1976), Weizenbaum breaks down how Eliza works in extremely simple terms.
'I composed a computer program with which one could converse in English. The human conversationalist partner would type his portion of the conversation on a typewriter connected to a computer, and the computer, under control of my program, would analyse the message that had been transmitted to it, compose a response to it in English, and cause the response to be typed on the computer's typewriter,' Weizenbaum wrote.
He explains that the program consisted of two steps: a language analyser and a script.
A script, he describes, is a set of 'rules rather like those that might be given to an actor who is to use them to improvise around a certain theme. Thus, Eliza could be given a script to enable it to maintain a conversation — play a specific conversational role.' Each time the human partner typed something, ELIZA would examine the input for keywords that had entries in the currently active script. The keywords were ranked, and the highest-ranked match was transformed and substituted into a template, producing a response.
One interaction of the DOCTOR script, which played the part of a psychotherapist, was reproduced by Weizenbaum in a paper on ELIZA. It goes as follows, with the text in all caps being ELIZA's responses:
'Men are all alike.
IN WHAT WAY
They're always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I am depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED….'
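To see how little machinery this involves, here is a minimal, illustrative sketch in Python of the keyword-and-substitution idea. It is not Weizenbaum's original code, which was written in the MAD-SLIP language, and it is far cruder than the real DOCTOR script, but it captures the flavour of the exchange above: rank the keywords, reflect the pronouns, and pour the matched fragment into a canned template.

```python
import re

# Pronoun "reflections" turn the user's phrasing back on them,
# e.g. "my boyfriend made me come here" -> "your boyfriend made you come here".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A toy "script": each rule has a rank, a keyword pattern and a response template.
# The captured fragment (if any) is reflected and slotted into the template.
SCRIPT = [
    (10, re.compile(r"\bi am (.*)", re.I), "I AM SORRY TO HEAR YOU ARE {0}"),
    (5,  re.compile(r"\bmy (.*)",   re.I), "YOUR {0}"),
    (3,  re.compile(r"\balways\b",  re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
    (1,  re.compile(r"\ball\b",     re.I), "IN WHAT WAY"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(sentence: str) -> str:
    """Apply the highest-ranked matching keyword rule; fall back to a stock prompt."""
    sentence = sentence.strip().rstrip(".")
    for _, pattern, template in sorted(SCRIPT, key=lambda rule: rule[0], reverse=True):
        match = pattern.search(sentence)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return template.format(fragment).strip().upper()
    return "PLEASE GO ON"  # default when no keyword is found

if __name__ == "__main__":
    for line in ["Men are all alike.",
                 "They're always bugging us about something or other.",
                 "Well, my boyfriend made me come here.",
                 "He says I am depressed much of the time."]:
        print(line)
        print(respond(line))
```

Run on the four lines of the transcript, this sketch produces replies matching ELIZA's, which is essentially Weizenbaum's point: the apparent understanding comes entirely from pattern matching and substitution.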
DOCTOR ended up becoming ELIZA in popular perception.
A number of practicing psychiatrists seriously believed the DOCTOR script could grow into a nearly completely automatic form of psychotherapy, Weizenbaum wrote.
'If the method proves beneficial, then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists,' one therapist wrote at the time.
Weizenbaum also documents an incident with his secretary who started conversing with ELIZA in the DOCTOR script.
'After only a few interchanges with it, she asked me to leave the room. Another time, I suggested I might rig the system so that I could examine all conversations anyone had had with it, say, overnight. I was promptly bombarded with accusations that what I proposed amounted to spying on people's most intimate thoughts; clear evidence that people were conversing with the computer as if it were a person…'
Weizenbaum wrote that he had not realised that 'extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people'.
Eliza reimagined
With Computer Science evolving rapidly, the code that constituted ELIZA was never published, and the original program was not preserved or reproduced.
The original code was only discovered in 2021 among a stack of Weizenbaum's papers. It had to be copied out by hand by Stanford professor Jeff Shrager, who now works on a digital archival project on ELIZA along with a multidisciplinary team of academics across the world.
What it means today
ELIZA is critical in Computer Science history, as it was among the first programs to put the Turing test (a measure of how human-like a machine's responses are) to a practical trial, with a machine replicating human language. It also set off the obsession with getting computers to talk and interact with us, leading to this moment in history, where personalised videos, images and text can be generated at the drop of a hat.
Digital Humanities professor David Berry at the University of Sussex, who is part of the digital archiving project along with Shrager, tells The Indian Express that 'ELIZA is a 420-line program written in an obscure programming language, which is radically different from the LLMs (large language models) like ChatGPT, a gigantic system with billions of parameters'.
'Eliza can run on any computer today and consume hardly any electricity, whereas ChatGPT consumes vast quantities of power,' Berry said.
Contemporary LLMs, powered by huge data centres, consume about 0.14 kilowatt-hours (kWh) of electricity to generate a single 100-word email, equal to powering 14 LED light bulbs for one hour, as per calculations by The Washington Post.
Berry also talks about how ELIZA 'offered a crucial early warning about human susceptibility to computational deception'.
He adds that 'examining ELIZA's source code helped to demonstrate that convincing human-computer interaction does not require genuine comprehension, rather, it can emerge from clever pattern matching and careful interface design that exploits human cognitive biases'.
'Even modern large language models, despite their impressive capabilities, fundamentally operate through statistical pattern recognition rather than genuine understanding,' Berry says.