Artificial Intelligence

The dark side of AI: How a chatbot drove a Belgian national to suicide

In the sixties, MIT computer scientist Joseph Weizenbaum created a chatbot.

He named it ELIZA after the fictional Eliza Doolittle from George Bernard Shaw’s play Pygmalion.

ELIZA, though basic in comparison with the bots of today, was created to enable some form of conversation between humans and machines.

The tests led to a surprising result: early users felt that ELIZA had human characteristics, as if they were talking to a real person who understood their feelings.

They started to trust ELIZA with their secrets.

A few days back, a tragic incident occurred: a Belgian man died by suicide after talking with an AI chatbot for around six weeks on an app called Chai.

Incidentally, this chatbot was also named ELIZA, and it runs on GPT-J, an open-source alternative to OpenAI’s GPT-3.

The man was deeply anxious about the climate crisis, and ELIZA led him to believe that by sacrificing his life he could save humanity.

The man’s wife testified anonymously in the Belgian newspaper La Libre on Tuesday, saying, “If it wasn’t for Eliza, he would still be here. I am convinced of that.”

The man’s widow provided chat logs that indicated how the chatbot encouraged the man to take his own life.

Editorial opinion

Why it matters: Around a year back, a Google employee named Blake Lemoine made headlines when he claimed that one of Google’s ChatGPT-like artificial intelligence language models, LaMDA (Language Model for Dialogue Applications), had achieved ‘sentience.’

And now this incident has set a serious precedent.

AI today must become everybody’s business. While the potential is enormous, so are the risks.

Just a few days back, Elon Musk, Steve Wozniak, and over 1,000 other tech luminaries, scientists, and business leaders signed an open letter stating that AI systems with human-competitive intelligence can pose profound risks to society and humanity.

Weizenbaum later wrote, “I had not realised that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

He spent the rest of his life sounding alarm bells over the dangers of AI in the hands of powerful companies and governments.

 

Ashwani Mishra
