Chinese startup DeepSeek released a preview version of its V4 artificial intelligence model, adapted to run on Huawei's new Ascend AI chips.
The open-source model arrives in two variants: Pro, a higher-end system targeting competitive performance against closed-source rivals, and Flash, a cheaper and faster alternative. Both versions support a 1-million-token context window.
DeepSeek said V4 is designed to work with agent frameworks, including Claude Code and OpenClaw, reflecting the industry's shift from prompt-based chatbots toward models that can execute multi-step tasks with less human input. In maximum reasoning mode, Pro outperforms all open-source models, according to a paper released alongside the preview, though it still trails frontier closed-source systems such as Google's Gemini 3.1 Pro and OpenAI's GPT-5.4 in some areas.
A key departure from earlier DeepSeek releases is the integration with Huawei hardware. According to Reuters, Huawei confirmed hours after the preview that V4 is fully supported on its Ascend 950-based supernode clusters and that its chips were used for part of Flash's training.
"Through close technical collaboration ... the entire Ascend supernode product line now supports the DeepSeek-V4 series models," Huawei said, according to the Reuters report. DeepSeek did not specify whether V4 was also trained on Nvidia chips; its earlier V3 and R1 models were built on Nvidia hardware.
DeepSeek said Pro can cost up to 12 times more than Flash because of "constraints in high-end compute capacity," which also limit its current availability. According to Reuters, the company expects Pro pricing to drop sharply once Huawei Ascend 950 supernodes are deployed at scale in the second half of the year. DeepSeek faces compute constraints under US export controls on Nvidia chips and chipmaking equipment.