From ELIZA to ChatGPT: The Warning From the Father of the Chatbot
As artificial intelligence surges into daily life, chatbots are often treated as a novelty of the digital age. They draft essays, compose poems, offer emotional reassurance and hold conversations that mimic friendship. Yet the effort to make machines speak like humans began more than half a century ago, and the unease surrounding it surfaced almost immediately.
In the mid-1960s, when computers were still confined to research labs, a short exchange startled those who encountered it:
“Men are all the same.”
“In what way?”
“They’re always bothering us about this and that.”
The tone felt natural, the emotional cadence intact. It sounded like a private conversation between acquaintances. But the “listener” was not a person. It was a computer program named ELIZA, now widely regarded as the first chatbot.
Its creator, Joseph Weizenbaum, did not embrace the applause that followed. Instead, he came to see his own invention as a warning.
Weizenbaum, then a professor at the Massachusetts Institute of Technology, did not set out to build a digital therapist. ELIZA was meant to demonstrate that a computer could simulate conversation.
He modeled the program on Rogerian psychotherapy, a method in which the therapist primarily listens and reflects back what the patient says. The structure required no deep knowledge of the outside world. It required only the appearance of attentive engagement.
ELIZA worked by scanning for keywords in a user’s input and applying preset rules. If a user expressed an emotion or mentioned a person, the program responded with a prompt such as, “Who specifically are you thinking of?” When no keywords matched, it relied on neutral phrases: “Please go on.” “I see.” “Tell me more.”
In a 1966 paper, Weizenbaum explained that the program did not understand language. A statement such as “I am unhappy” could be reformulated as “How long have you been unhappy,” without any grasp of what “unhappy” meant. ELIZA rearranged linguistic patterns. It did not comprehend them.
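The mechanism is simple enough to sketch in a few lines of Python. What follows is a toy illustration of the keyword-and-template idea, not Weizenbaum's program (the original was written in the 1960s in a language called MAD-SLIP); the patterns and canned replies below are invented for the example, echoing the responses quoted above.

```python
import re
import random

# Illustrative keyword rules in the spirit of ELIZA's script: each entry
# pairs a pattern with response templates. These rules are invented for
# this sketch, not taken from Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bmy (mother|father|sister|brother|friend)\b", re.IGNORECASE),
     ["Tell me more about your {0}.", "Who specifically are you thinking of?"]),
    (re.compile(r"\b(always|never)\b", re.IGNORECASE),
     ["Can you think of a specific example?"]),
]

# Fallbacks used when no keyword matches, as in the article's examples.
FALLBACKS = ["Please go on.", "I see.", "Tell me more."]


def respond(statement: str) -> str:
    """Return a canned reflection by pattern-matching the input.

    The program only rearranges the user's own words; it attaches
    no meaning to them.
    """
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I am unhappy"))        # e.g. "How long have you been unhappy?"
    print(respond("My mother worries."))  # e.g. "Tell me more about your mother."
    print(respond("Nothing much."))       # e.g. "Please go on."
```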
That absence of understanding was precisely what unsettled him: users nonetheless reacted as if they were being heard.
Weizenbaum later recalled that his secretary, after testing the program, asked him to leave the room so she could continue speaking with ELIZA in private. The reaction became known as the “ELIZA effect” — the human tendency to attribute intelligence, empathy and even consciousness to machines that display surface-level cues.
For Weizenbaum, the incident was not amusing. It suggested that even a rudimentary program could prompt emotional projection. More sophisticated systems, he feared, would amplify the effect.
The program’s name was deliberate. Weizenbaum drew it from Eliza Doolittle, the heroine of George Bernard Shaw’s 1913 play Pygmalion, who is trained to pass as a member of high society through changes in speech and manner.
Like Shaw’s character, the software relied on performance. It mimicked understanding without possessing it.
The reference also echoed the Greek myth of Pygmalion, the sculptor who fell in love with the statue he had created. In both myth and drama, the theme is constant: humans can become attached to the image of humanity they themselves construct.
In his original paper, Weizenbaum noted that “some subjects have been very hard to convince that ELIZA is not human.” The line reads less like a boast than a caution.
In 1976, Weizenbaum published Computer Power and Human Reason: From Judgment to Calculation, a book that marked a decisive break with many of his colleagues.
“I had not realized,” he wrote, “that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
The central issue, in his view, was not technological capability but moral boundary. Even if computers could perform certain tasks, he argued, that did not mean they should.
The stance put him at odds with leading figures in artificial intelligence. John McCarthy, a pioneer of the field, dismissed the book as “moralizing and incoherent” and accused Weizenbaum of claiming to be “more humane than thou.” McCarthy maintained that if a program could successfully treat patients, it would be justified in doing so.
The dispute sharpened when some researchers treated ELIZA as the foundation for computerized psychotherapy. Stanford psychiatrist Kenneth Colby adapted the approach into a program called PARRY, designed to simulate the thinking of a person with paranoid schizophrenia. Others speculated about networks of automated therapy terminals.
Weizenbaum recoiled. He later described the idea of computer-based psychotherapy as “an obscene idea.” What had begun as a technical demonstration was, in his view, being mistaken for a substitute for human judgment and care.
Weizenbaum’s skepticism extended beyond mental health applications.
A German-born Jew who fled the Nazis as a teenager, he carried a deep awareness of how technology can be embedded in systems of power. He opposed the Vietnam War and warned that military officials who did not understand the inner workings of computers were nonetheless using them to determine bombing targets.
He also cautioned that advances in computing could make surveillance more pervasive. “Wiretapping machines … will make the monitoring of voice communications much easier than it is now,” he warned. Critics dismissed his concerns at the time. Later controversies over government surveillance suggested the trajectory he had anticipated was not far-fetched.
For Weizenbaum, the core question was always about judgment. Calculation could be automated. Wisdom could not.
Decades after ELIZA, conversational systems have moved far beyond keyword substitution.
Internet-era chatbots such as Ask Jeeves and "Alice" introduced automated dialogue to the public in the 1990s. OpenAI's ChatGPT, released in late 2022, reached an estimated 100 million users within months of its launch. Contemporary systems are trained on vast quantities of data and can generate text, images and video with fluency unimaginable in the 1960s.
Stanford researcher Herbert Lin once compared the difference to that between the Wright brothers’ airplane and a Boeing 747.
With that expansion has come renewed concern. Reports have described cases in which chatbots reinforced delusional thinking or encouraged harmful behavior. Some parents of teenagers who died by suicide have publicly said that chatbot interactions deepened their children’s distress. Others describe forming intense emotional attachments to artificial intelligence companions.
A 2025 study found that 72 percent of teenagers had interacted at least once with an AI companion, with more than half using such systems regularly. Although technology companies say they are strengthening safeguards, these tools are not regulated like licensed mental health professionals.
Jodi Halpern, a psychiatrist and bioethicist at the University of California, Berkeley, told NPR that users can develop powerful attachments to systems that lack ethical training or oversight. “They are products, not professionals,” she said.
Weizenbaum's daughter, Miriam Weizenbaum, has said her father would see "the tragedy of people becoming attached to literal zeros and ones, attached to code."
Weizenbaum retired from MIT in 1988 and later returned to Germany, where he was regarded as a public intellectual. He died in 2008 at age 85.
In one of his final public appearances that year, he warned that modern software had become so complex that even its creators no longer fully understood it. A society that builds systems it cannot comprehend, he argued, risks losing control over them.
The line most often associated with him remains stark: “Since we do not at present have any way of making computers wise, we should not now give computers tasks that require wisdom.”
From ELIZA to ChatGPT, the technical distance spans six decades. The ethical question he raised has not faded.