DeepSeek paper offers new details on how it used 2,048 Nvidia chips to take on OpenAI
In a paper co-authored by founder Liang Wenfeng, the start-up attributes its success to a hardware-software co-design approach

The paper, “Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures”, co-authored by DeepSeek founder Liang Wenfeng and released on Wednesday, attributes the start-up’s breakthrough in training high-performance, cost-efficient AI systems to a hardware-software co-design approach.
The paper details technical optimisations that boost memory efficiency, streamline inter-chip communication, and enhance overall AI infrastructure performance – key advancements for reducing operational costs while scaling capabilities. These techniques offer a “practical blueprint for innovation in next-generation AI systems”, the researchers said.
DeepSeek also highlighted its use of a mixture-of-experts (MoE) model architecture, a machine-learning approach that divides an AI model into separate sub-networks, or experts, each specialising in a subset of the input data. Because only the experts relevant to a given input are activated, the model can grow in total size without a proportional rise in computing cost.
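To illustrate the general idea behind MoE routing, the sketch below shows a minimal top-k routed expert layer in PyTorch. It is a generic teaching example, not DeepSeek-V3’s actual code: the layer sizes, the number of experts, the `top_k` value, and the `MoELayer` class name are all hypothetical choices made for this illustration.

```python
# Minimal, illustrative top-k mixture-of-experts layer (not DeepSeek's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=1024, n_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is a small feed-forward sub-network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        # The router scores how relevant each expert is to each token.
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (batch, seq_len, d_model)
        scores = self.router(x)                             # (batch, seq, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[..., slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: only top_k of the n_experts sub-networks run for each token,
# so compute per token grows far more slowly than the total parameter count.
layer = MoELayer()
tokens = torch.randn(2, 16, 512)
print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```

The efficiency argument rests on that routing step: parameters can be added by adding experts, while each token still only pays for the few experts it is sent to.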