
ChatGPT usage could open companies to leaks of corporate secrets and lawsuits, cyber firm warns

  • Israeli venture firm Team8 warned that it could be hard to erase sensitive corporate information from AI chatbots, and hackers could exploit the data
  • Major technology companies including Microsoft and Alphabet are racing to add generative AI capabilities to improve chatbots and search engines

Usage of generative artificial intelligence like ChatGPT could open companies up to having sensitive data stolen, Israeli venture firm Team8 warns in a report. Photo: Shutterstock
Bloomberg
Companies using generative artificial intelligence (AI) tools like ChatGPT could be putting confidential customer information and trade secrets at risk, according to a report from Team8, an Israel-based venture firm.

The widespread adoption of new AI chatbots and writing tools could leave companies vulnerable to data leaks and lawsuits, said the report, which was provided to Bloomberg News prior to its release. The fear is that the chatbots could be exploited by hackers to access sensitive corporate information or perform actions against the company. There are also concerns that confidential information fed into the chatbots now could be used by AI companies in the future.

Major technology companies including Microsoft Corp and Alphabet are racing to add generative AI capabilities to improve chatbots and search engines, training their models on data scraped from the Internet to give users a one-stop shop for their queries. If these tools are fed confidential or private data, it will be very difficult to erase the information, the report said.

“Enterprise use of GenAI may result in access and processing of sensitive information, intellectual property, source code, trade secrets, and other data, through direct user input or the API, including customer or private information and confidential information,” the report said, classifying the risk as “high”. It described the risks as “manageable” if proper safeguards are introduced.


The Team8 report stressed that chatbot queries are not being fed into large language models to train AI, contrary to recent reports suggesting that such prompts could potentially be seen by other users.

“As of this writing, Large Language Models cannot update themselves in real-time and therefore cannot return one’s inputs to another’s response, effectively debunking this concern. However, this is not necessarily true for the training of future versions of these models,” it said.


The document flagged three other “high risk” issues in integrating generative AI tools and underlined the heightened threat of information increasingly being shared through third-party applications. Microsoft has embedded some AI chatbot features in its Bing search engine and Microsoft 365 tools.
