OpenAI Alleges Its AI Models Were Used to Build DeepSeek-R1: Report

OpenAI has reportedly claimed that DeepSeek might have distilled its artificial intelligence (AI) models to build the R1 model. As per the report, the San Francisco-based AI firm said it has evidence that some users were harvesting its AI models' outputs for a competitor, suspected to be DeepSeek. Notably, the Chinese company released the open-source DeepSeek-R1 AI model last week and hosted it on GitHub and Hugging Face. The reasoning-focused model surpassed the ChatGPT maker's o1 model on several benchmarks.

OpenAI Says It Has Evidence of Foul Play

According to a Financial Times report, OpenAI claimed that its proprietary AI models were used to train DeepSeek's models. The company told the publication that it had seen evidence of distillation from several accounts using the OpenAI application programming interface (API). The AI firm and its cloud partner Microsoft investigated the issue and blocked the suspect accounts' access.

In a statement to the Financial Times, OpenAI said, “We know [China]-based companies — and others — are constantly trying to distil the models of leading US AI companies.” The ChatGPT-maker also highlighted that it is working closely with the US government to protect its frontier models from competitors and adversaries.

Notably, AI model distillation is a technique used to transfer knowledge from a large model to a smaller, more efficient one. The goal is to bring the smaller model close to, or on par with, the larger model's capabilities while reducing computational requirements. OpenAI's GPT-4 is rumoured to have roughly 1.8 trillion parameters, while the full DeepSeek-R1 model has 671 billion parameters and its smallest distilled variant just 1.5 billion, which would fit the description.
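To make the idea concrete, here is a minimal sketch of the classic distillation objective: the smaller "student" model is trained to match the temperature-softened output distribution of the larger "teacher" model. This is an illustrative toy example in plain Python, not OpenAI's or DeepSeek's actual training code; the function names and logit values are made up for the demonstration.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's soft targets and the
    # student's predictions -- the core objective minimised during
    # knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]
# A student that exactly matches the teacher incurs zero loss.
print(distillation_loss(teacher, teacher))              # 0.0
# A mismatched student incurs a positive loss to minimise.
print(distillation_loss(teacher, [0.1, 2.5, 1.0]) > 0)  # True
```

In practice the same objective is applied token by token over large text corpora, with gradient descent driving the student's loss toward zero.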

When a company is creating more efficient versions of its own model in-house, the knowledge transfer typically takes place by using outputs from the larger model as training data for the smaller one. For instance, Meta has used its larger Llama models to help train smaller, more specialised Llama variants.

However, a competitor without access to the weights or training data of a proprietary model cannot distil it directly. If OpenAI's allegations are true, the distillation could instead have been carried out by sending large numbers of prompts to its APIs, collecting the responses, and using the resulting prompt-response pairs as supervised training data for a base model.
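A pipeline of that kind can be sketched as follows: query the hosted model at scale, then serialize the prompt-response pairs as JSONL, a common format for supervised fine-tuning data. The `query_model` function below is a hypothetical stand-in for a real API call; no actual provider endpoint is shown.

```python
import json

def query_model(prompt):
    # Hypothetical placeholder for a hosted-model API call; a real
    # pipeline would issue an HTTP request to the provider here.
    return f"(model output for: {prompt})"

def build_distillation_dataset(prompts):
    # Collect prompt/response pairs and serialize them as JSONL,
    # one training record per line.
    records = [{"prompt": p, "completion": query_model(p)} for p in prompts]
    return "\n".join(json.dumps(r) for r in records)

dataset = build_distillation_dataset(["Explain recursion.", "What is an API?"])
print(dataset.splitlines()[0])
```

Detecting this pattern is plausible from the provider's side, since it shows up as a small number of accounts issuing unusually large volumes of API queries.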

Notably, beyond its comments to the Financial Times, OpenAI has not issued a formal public statement regarding this. Recently, the company's CEO, Sam Altman, praised DeepSeek for creating such an advanced AI model and increasing competition in the AI space.