ChatGPT usage could open companies to leaks of corporate secrets and lawsuits, cyber firm warns
- Israeli venture firm Team8 warned that it could be hard to erase sensitive corporate information from AI chatbots and that hackers could exploit the data
- Major technology companies including Microsoft and Alphabet are racing to add generative AI capabilities to improve chatbots and search engines

The widespread adoption of new AI chatbots and writing tools could leave companies vulnerable to data leaks and lawsuits, Team8 said in a report provided to Bloomberg News prior to its release. The fear is that hackers could exploit the chatbots to access sensitive corporate information or take action against a company. There are also concerns that confidential information fed into the chatbots now could be used by AI companies in the future.
“Enterprise use of GenAI may result in access and processing of sensitive information, intellectual property, source code, trade secrets, and other data, through direct user input or the API, including customer or private information and confidential information,” the report said, classifying the risk as “high.” It described the risk as “manageable” if proper safeguards are introduced.
The Team8 report stressed that chatbot queries are not being fed into large language models to train the AI, contrary to recent reports suggesting that such prompts could potentially be seen by others.
“As of this writing, Large Language Models cannot update themselves in real-time and therefore cannot return one’s inputs to another’s response, effectively debunking this concern. However, this is not necessarily true for the training of future versions of these models,” it said.
The document flagged three other “high risk” issues involved in integrating generative AI tools and underlined the growing threat of information being shared through third-party applications. Microsoft, for example, has embedded AI chatbot features in its Bing search engine and Microsoft 365 tools.