Beijing: Chinese artificial intelligence company DeepSeek has announced that it trained its latest large language model, DeepSeek R1, at a cost of just $294,000, a figure far below the hundreds of millions of dollars often associated with building advanced AI systems.
The company disclosed the details in a peer-reviewed article published in the scientific journal Nature, marking the first time it has publicly revealed the training cost of its models. According to the article, the R1 model was trained on 512 Nvidia H800 graphics processing units, with preparatory work carried out on Nvidia’s A100 chips. The main training phase lasted 80 hours, which DeepSeek says reflects a far more efficient use of resources than its Western rivals achieve.
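Taken at face value, the disclosed figures imply a rough cost per GPU-hour. A back-of-envelope check, under the assumption (not stated in the article) that the $294,000 covers only the 80-hour main run on the 512 H800s:

```python
# Back-of-envelope check of the reported figures. Assumption: the
# $294,000 covers only the main 80-hour run on 512 H800 GPUs.
total_cost = 294_000           # USD, as reported
gpus, hours = 512, 80          # H800 count and main-phase duration
gpu_hours = gpus * hours       # total GPU-hours of the main run
cost_per_gpu_hour = total_cost / gpu_hours
print(f"{gpu_hours} GPU-hours ≈ ${cost_per_gpu_hour:.2f}/GPU-hour")
# → 40960 GPU-hours ≈ $7.18/GPU-hour
```

A rate of roughly $7 per H800-hour is in the broad range of commercial GPU rental pricing, which is one reason the headline number, while striking, is at least internally plausible.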
The announcement has drawn international attention because of the stark contrast to cost estimates from major U.S. firms. OpenAI chief executive Sam Altman has previously said that training top-tier models required more than $100 million, making DeepSeek’s claim of under $300,000 appear disruptive to the economics of the industry.
DeepSeek’s revelation follows earlier controversy over its approach to model development. Some U.S. officials and industry figures have accused the Chinese firm of “distilling” outputs from other AI models, including those of OpenAI, into its own systems. In response, DeepSeek stated that while its training data included some web pages containing OpenAI-generated responses, their inclusion was incidental rather than deliberate.
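For context, "distillation" in its textbook sense means training a smaller student model to reproduce a teacher model's output distribution (soft targets) rather than only hard labels. A minimal NumPy sketch of that soft-target loss, purely illustrative and not a description of DeepSeek's pipeline:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    The student is trained to minimize this, i.e. to match the
    teacher's soft targets (the classic distillation objective).
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student matching the teacher incurs zero loss; a divergent one does not.
same = distill_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])   # ≈ 0.0
diff = distill_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])   # > 0
```

The accusation against DeepSeek concerns a looser, data-level variant: training on text generated by another model, rather than matching its logits directly.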
The company’s January disclosures had already shaken investor confidence in global AI stocks, with fears that low-cost training methods could undermine dominant players and hardware providers such as Nvidia. The latest details, now formally published, are expected to fuel further debate over how rapidly AI development costs may be falling.
Industry analysts caution, however, that the true scope of DeepSeek’s achievement remains unclear. The reported training cost may not account for infrastructure, power consumption, or ongoing optimization expenses. Questions also remain about how R1 performs against leading models in terms of reasoning, reliability, and safety.
DeepSeek’s claims underscore China’s push to achieve breakthroughs in AI despite export restrictions on advanced U.S. chips. If validated, the company’s low-cost model could reshape competition in the global AI race, lowering barriers for smaller firms and intensifying scrutiny over the transparency of training methods.
As regulators, investors, and rivals assess the impact, DeepSeek’s announcement signals that the future of AI development may no longer be defined solely by scale and financial power, but increasingly by efficiency and innovation.