With ChatGPT’s rising popularity, corporates and countries issue ‘code red’

Enterprises like JPMorgan Chase, Amazon, and Verizon, and countries like Italy, are blocking ChatGPT usage, citing data leakage and misinformation concerns.

ChatGPT has arguably been the biggest talking point over the last few months.

As a large language model trained on a massive dataset, OpenAI’s ChatGPT is an incredibly powerful tool that enterprises can utilise in a variety of ways.

With its ability to understand natural language and generate human-like responses, it can be used to enhance customer service, automate business processes, and improve decision-making.

However, achieving desired results with an unsupervised, complex machine learning model can also prove difficult and pose a significant business risk.

A few days back, electronics manufacturer Samsung found itself in hot water after recording three instances of data leakage.

The accidental leaks reported were as follows:

Leak #1: An employee submitted database source code to ChatGPT to fix errors.

Leak #2: Source code related to semiconductor equipment was fed into ChatGPT. This data became part of the AI chatbot’s learning database and was accessible to anyone using the model.

Leak #3: An employee asked ChatGPT to create meeting minutes by sharing a company recording.

Following the revelation of these leaks, Samsung implemented strict measures, restricting each employee’s ChatGPT prompt to a maximum of 1,024 bytes. It is also working on creating its own proprietary AI system.
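A byte cap like Samsung’s is straightforward to enforce before a prompt ever leaves the company network. The sketch below is a minimal Python illustration of such a check at a hypothetical internal gateway; the function name and the reject-with-error behaviour are assumptions, and only the 1,024-byte figure comes from the report above.

```python
MAX_PROMPT_BYTES = 1024  # the cap Samsung reportedly imposed per prompt

def enforce_prompt_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the byte limit.

    Measuring bytes (not characters) matters: non-ASCII text can take
    several bytes per character once encoded.
    """
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise ValueError(f"Prompt is {size} bytes; limit is {limit}")
    return prompt

# A short prompt passes through unchanged; an oversized one is blocked.
enforce_prompt_limit("Summarise this meeting agenda.")
```

Note that a size cap only limits how much data leaks per prompt; it does not stop an employee from pasting a small but highly sensitive snippet.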

“The same story that happened to Samsung is playing out right now in companies all over the world, whether they know it or not. It is important to note here, this applies to anything, not just code. So if you’re writing a blog post, and you write it into chat GPT, you’re doing the same thing: You’re handing over your data to OpenAI,” states Alvin Foo, Co-Founder, ThirdFi.

Feeding company data into ChatGPT: Who’s watching?

According to a report by Cyberhaven, as of March 21, 8.2% of employees had used ChatGPT in the workplace and 6.5% had pasted company data into it since its launch.

Cyberhaven Labs analysed ChatGPT usage for 1.6 million workers at companies across industries that use the Cyberhaven product. The report found that it is difficult for security products to monitor the usage of ChatGPT and protect data for two key reasons:

Copy/paste out of a file or app — When one inputs company data into ChatGPT, they don’t upload a file but rather copy and paste the content into their web browser. Many security products are designed around protecting files (which are tagged confidential) from being uploaded, but once content is copied out of the file they are unable to keep track of it.

Confidential data contains no recognisable pattern — Company data going to ChatGPT often doesn’t contain a recognisable pattern that security tools look for, like a credit card number or Social Security number.

Many South Korean enterprises, including SK Hynix, LG, and steelmaker POSCO, are now working to create guidelines for the proper use of ChatGPT and other AI-enabled services. The core objective is to avoid any pitfalls arising from the usage of these programmes, including data leaks.

Large enterprises have issued a ‘code red’ and banned the usage of ChatGPT citing security concerns.

A report released by professional social site Fishbowl in February found that 43% of professionals have used AI tools, including ChatGPT, for work-related tasks. Nearly 70% of those professionals are doing so without their boss’ knowledge.

As per media reports, companies like JPMorgan Chase, Amazon, Verizon, and Accenture have made it clear that their employees will not be using ChatGPT for work.

And it’s not only the corporates who are wary of the AI tool.

Many governments across the globe are now actively investigating whether the OpenAI chatbot poses a threat to data privacy, and what regulatory measures are needed.

Here’s how countries are reacting to ChatGPT

Italy’s National Data Protection Authority has decided to temporarily block access to ChatGPT over privacy issues. Germany may soon follow in Italy’s footsteps by blocking ChatGPT over data security concerns.

The UK government has urged regulators to come up with rules for AI, a first step toward regulating the technology.

The US government has published a blueprint for a potential AI Bill of Rights to minimise any harm caused by the rise of AI.

China has not officially blocked ChatGPT but has issued an order to big tech companies in the country asking them not to provide access to OpenAI’s ChatGPT or other AI-powered chatbot services. Beijing’s regulators view the tool as a source of misinformation. Incidentally, OpenAI CEO Sam Altman also believes these models could be used for large-scale disinformation.

OpenAI does not allow access to users from countries with intensive internet censorship, such as Russia, North Korea, Egypt, Iran, and Ukraine, among others.

India has yet to raise an alarm, unlike policymakers in the US and Europe, and is not considering a law to regulate the growth of artificial intelligence in the country. The Ministry of Electronics and Information Technology has called AI a ‘kinetic enabler of the digital economy.’

Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of ET Edge Insights, its management, or its members.
