Microsoft and OpenAI are investigating whether a group linked to Chinese AI startup DeepSeek accessed OpenAI’s data without authorization.
According to sources, Microsoft’s security team observed unusual data activity last fall, suggesting that individuals connected to DeepSeek may have extracted large amounts of data through OpenAI’s API.
This API allows developers to integrate OpenAI’s AI models into their applications, but improper use raises concerns about security and intellectual property protection.
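For readers unfamiliar with the mechanics, the sketch below shows what routine, permitted use of the API looks like in Python with OpenAI's official client library; the model name and prompt are illustrative assumptions, not details from the investigation.

```python
# Minimal sketch of ordinary, permitted API usage (illustrative only).
# Requires the official `openai` Python package and an API key issued by OpenAI.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder key; real keys identify the calling account

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize the key points of API rate limiting."}],
)

print(response.choices[0].message.content)
```

Every such call is tied to an API key and logged by the provider, which is how unusually heavy or automated usage can surface in monitoring.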
The investigation aims to determine whether the data extraction stayed within OpenAI's licensing agreements or involved unauthorized access. If unauthorized access is confirmed, the case would underscore broader concerns about how AI models are being used and whether sufficient safeguards are in place to prevent data leaks.
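The reporting does not describe the companies' detection methods, but one common safeguard is volume-based anomaly monitoring of API usage. The sketch below is a hypothetical illustration of that idea, flagging accounts whose daily token consumption jumps far above their own recent baseline; all account names, thresholds, and numbers are assumptions for illustration.

```python
# Hypothetical illustration of volume-based anomaly monitoring on API usage.
# Flags accounts whose daily token usage spikes well above their recent baseline.
from statistics import mean, pstdev

def flag_anomalous_accounts(daily_tokens_by_account, spike_factor=5.0, min_history=7):
    """daily_tokens_by_account: dict mapping account id -> list of daily token counts,
    oldest first, most recent day last. Returns the account ids that look anomalous."""
    flagged = []
    for account, history in daily_tokens_by_account.items():
        if len(history) <= min_history:
            continue  # not enough history to establish a baseline
        *baseline, today = history
        base_mean = mean(baseline)
        base_std = pstdev(baseline)
        # Require both a large multiple of the baseline mean and several standard
        # deviations above it, to reduce false positives from normal variation.
        if today > spike_factor * base_mean and today > base_mean + 3 * base_std:
            flagged.append(account)
    return flagged

# Example: one account with steady usage, one with a sudden extraction-scale spike.
usage = {
    "acct_A": [10_000, 12_000, 11_500, 9_800, 10_200, 11_000, 10_500, 11_200],
    "acct_B": [15_000, 14_000, 16_000, 15_500, 14_800, 15_200, 15_100, 900_000],
}
print(flag_anomalous_accounts(usage))  # -> ['acct_B']
```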
Microsoft and OpenAI are taking the matter seriously, as AI security and compliance are becoming critical issues in the rapidly evolving industry.
DeepSeek, a rising AI company based in China, has gained attention recently for its rapid progress on large language models, reportedly achieved at far lower cost than its U.S. rivals.
However, its alleged connection to this potential data breach could heighten tensions, especially amid ongoing concerns over data security between the U.S. and China. Both Microsoft and OpenAI have declined to comment publicly, citing the confidential nature of the investigation.
This case highlights the growing challenges in protecting AI technologies and ensuring that proprietary data is not misused.
With AI development progressing at a rapid pace, securing sensitive information is a top priority for companies investing in advanced models. As investigations continue, the findings could shape future policies on AI data security and international AI collaborations.