The dark side of AI: How a chatbot drove a Belgian man to suicide

In the sixties, MIT computer scientist Joseph Weizenbaum created a chatbot.

He named it ELIZA after the fictional Eliza Doolittle from George Bernard Shaw’s play Pygmalion.

ELIZA, though basic compared with the bots of today, was created to simulate conversation between humans and machines.

The tests produced a surprising result: early users attributed human characteristics to ELIZA and felt they were talking to a real person who understood their feelings.

They started to trust ELIZA with their secrets.

A few days ago, a Belgian man died by suicide after talking with an AI chatbot for around six weeks on an app called Chai.

Incidentally, this chatbot was also named Eliza, and it runs on GPT-J, an open-source alternative to OpenAI’s GPT-3.

The man was deeply anxious about the climate crisis, and Eliza led him to believe that by sacrificing his life he could save humanity.

The man’s widow, speaking anonymously to the Belgian newspaper La Libre on Tuesday, said: “If it wasn’t for Eliza, he would still be here. I am convinced of that.”

She also provided chat logs showing how the chatbot encouraged him to take his own life.

Editorial opinion

Why does it matter? About a year ago, a Google engineer named Blake Lemoine made headlines when he claimed that one of Google’s ChatGPT-like artificial intelligence language models, LaMDA (Language Model for Dialogue Applications), had achieved ‘sentience.’

And now this incident has set a grim precedent.

AI today must become everybody’s business. While the potential is enormous, the risks are just as large.

Just a few days ago, Elon Musk, Steve Wozniak, and more than 1,000 other tech luminaries, scientists, and business leaders signed an open letter warning that AI systems with human-competitive intelligence can pose profound risks to society and humanity.

Weizenbaum later wrote, “I had not realised that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

He spent the rest of his life sounding alarm bells over the dangers of AI in the hands of powerful companies and governments.

 

Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of ET Edge Insights, its management, or its members