Huawei’s newest Ascend 910D AI chip, designed to challenge Nvidia’s leading H100 processor, is entering its testing phase as Huawei seeks to close the performance gap and capitalize on U.S. export restrictions that have hampered its American rival. The Shenzhen-based company has begun distributing sample units of the Ascend 910D to select domestic technology partners, aiming to validate its architecture in real‐world AI workloads by late May. While the chip’s raw performance targets exceed those of Nvidia’s 2022‐vintage H100, Huawei is also advancing system‐level solutions—such as its CloudMatrix 384 computing array—to compensate for gaps in process technology. Despite these ambitions, U.S. sanctions on advanced foundry tools and memory exports continue to pose obstacles, even as Beijing intensifies support for domestic semiconductor self‐reliance.
Huawei’s Strategic AI Push
Huawei’s AI ambitions are part of a broader national drive toward technological self‐sufficiency amid escalating U.S. trade tensions and chip export curbs. At a Politburo meeting in late April, China’s leadership reiterated a “self‐reliance and self‐strengthening” strategy focused on building independent hardware and software systems for artificial intelligence. This policy backdrop has fueled massive investment in domestic chip design and production, with companies like Huawei and Cambricon vying to replace Nvidia’s entrenched position in AI training and inference markets.
While Huawei’s consumer electronics business has faced headwinds, its cloud and enterprise divisions have maintained robust growth, underpinned by demand for AI infrastructure. The Ascend chip family, introduced in 2019, has evolved through successive iterations—Ascend 310 for inference, Ascend 910 for training, and the interim 910B/C models—to today’s 910D, each step pushing performance and integration at scale.
Media reports out of Hong Kong indicate that Huawei completed initial tape-outs for the Ascend 910D in early April and has since approached leading Chinese cloud providers, AI startups and telecom operators to participate in its first test campaigns. These partners are slated to receive sample units of the 910D by late May, enabling benchmarks on popular AI frameworks and data-center workloads. Huawei engineers will collaborate closely with users to refine firmware, optimize power management and address any thermal constraints uncovered during early trials.
The 910D represents a mid‐generation upgrade over the 910C, which itself combined two SMIC‐manufactured 910B dies into a single package and achieved performance nearing that of the Nvidia H100 on certain inference tasks. By contrast, the 910D is reported to use a more advanced SMIC process node and architectural tweaks to increase on‐chip bandwidth and compute density, targeting over 350 TFLOPS in FP16 operations—surpassing the H100’s published figures.
System-Level Solutions: The CloudMatrix 384
Recognizing the challenges of matching Nvidia’s process technology, Huawei has doubled down on system engineering. Its CloudMatrix 384 supercomputing node, unveiled in April, interconnects up to 384 Ascend 910C or 910D chips using proprietary high‐speed fabric, delivering aggregate performance that rivals much larger Nvidia DGX clusters. This “brute-force” approach leverages scale to offset individual chip limitations, enabling training of large AI models with lower latency and higher throughput. Customers in telecommunications, finance and e-commerce sectors are already piloting CloudMatrix deployments to support recommendation engines, fraud detection and autonomous‐driving research.
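The trade-off behind this scale-first approach can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative: the per-chip throughput, node sizes and scaling-efficiency factors are assumptions chosen to show the arithmetic, not published specifications for the Ascend 910C/910D or Nvidia's DGX systems.

```python
# Back-of-the-envelope sketch of the "brute-force scale" idea behind a
# CloudMatrix-style node: many mid-tier accelerators vs. fewer faster ones.
# All numbers are illustrative assumptions, not vendor specifications.

def node_throughput(chips: int, tflops_per_chip: float, scaling_eff: float) -> float:
    """Aggregate FP16 throughput (TFLOPS) after interconnect/scaling losses."""
    return chips * tflops_per_chip * scaling_eff

# Hypothetical CloudMatrix-style node: 384 chips, modest per-chip throughput,
# some efficiency lost to the interconnect fabric and collective operations.
scale_node = node_throughput(chips=384, tflops_per_chip=350.0, scaling_eff=0.60)

# Hypothetical 8-GPU DGX-style box with faster individual accelerators.
small_node = node_throughput(chips=8, tflops_per_chip=1000.0, scaling_eff=0.85)

print(f"Illustrative 384-chip node: {scale_node:,.0f} TFLOPS")
print(f"Illustrative 8-GPU node:    {small_node:,.0f} TFLOPS")
# Even with weaker chips and lower scaling efficiency, sheer chip count can
# push aggregate throughput far higher -- at the cost of power draw, floor
# space and interconnect complexity.
```

The point of the sketch is simply that, at the system level, chip count and interconnect efficiency dominate the aggregate figure, which is why Huawei can compensate for a per-chip deficit with packaging and fabric engineering.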
Despite these advances, Huawei’s aspirations confront multiple headwinds. U.S. export controls have blocked access to extreme‐ultraviolet lithography tools and the latest high‐bandwidth memory components—critical for next-generation AI chips. The Trump administration’s addition of Nvidia’s H20 processor to the restricted list underscores the strategic race for AI hardware dominance, as it curbs American firms’ ability to supply China even as U.S. policymakers debate carve-outs for cloud providers.
Moreover, independent benchmarks and industry feedback highlight persistent software hurdles. Huawei’s CANN software stack, while improving, still trails Nvidia’s CUDA ecosystem in maturity and ease of use—slowing adoption among AI developers accustomed to Nvidia’s toolchains. To address this, Huawei has dispatched specialist support teams to major AI labs and partnered with universities to expand training on the Ascend platform.
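To make the toolchain gap concrete, the sketch below shows the kind of device-selection shim a PyTorch team might write when pointing CUDA-era code at Ascend hardware. It is a minimal sketch, not a porting recipe, and it assumes Huawei's torch_npu adapter is installed and registers an "npu" device type; the helper name pick_device is illustrative.

```python
import torch

def pick_device() -> torch.device:
    """Pick CUDA, Ascend NPU, or CPU, in that order of preference."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    try:
        # torch_npu is Huawei's PyTorch adapter for Ascend; importing it is
        # assumed to expose the "npu" device type (adapter must be installed).
        import torch_npu  # noqa: F401
        if torch.npu.is_available():
            return torch.device("npu")
    except (ImportError, AttributeError):
        pass
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(1024, 1024).to(device)
batch = torch.randn(8, 1024, device=device)
print(f"running on {device}: output shape {tuple(model(batch).shape)}")
```

In practice, the friction is less about the device string than about operator coverage, mixed-precision behavior and profiling tools, which is where the maturity gap between CANN and CUDA tends to surface for developers.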
Mass Shipments of Interim Models
Ahead of the 910D rollout, Huawei plans to fulfill orders for more than 800,000 units of the earlier 910B and 910C chips in 2025. These shipments will go to state-owned telecom operators, hyperscale cloud services and private AI developers—most notably ByteDance, whose data-center expansions hinge on cost-effective accelerators. This volume helps amortize R&D costs and build an ecosystem around Ascend hardware, creating a ready market for subsequent 910D upgrades.
Nvidia’s earnings reports for Q1 2025 revealed a multi‐billion‐dollar inventory charge linked to U.S. export restrictions, signaling both the potency of policy as a competitive lever and an opening for rivals like Huawei. China’s broader push for semiconductor self-reliance has prompted state planners to encourage AI firms to prioritize domestic chips, offering subsidies and procurement mandates for Ascend-based systems. Analysts predict that if Huawei successfully transitions from testing to volume production of the 910D, it could capture a meaningful share of China’s $20 billion AI chip market by 2026.
Globally, Huawei’s progress will be watched closely by cloud providers in Eurasia and the Middle East, many of which currently rely on Nvidia hardware. Early test results of the 910D could influence purchasing decisions in regions seeking alternatives amid U.S.-China decoupling and export control uncertainty.
Outlook and Next Steps
Huawei executives acknowledge that commercial deployment of the Ascend 910D remains contingent on rigorous validation across diverse AI workloads, from large language models to generative vision systems. The company aims to complete its testing cycle by Q3 2025, with volume shipments slated for late in the year, subject to adequate foundry capacity and memory supply. Concurrently, Huawei is rumored to be advancing plans for the Ascend 920, projected to match Nvidia's H20 chip performance and expected to be announced in late 2025.
As geopolitical tensions persist and technology sovereignty becomes a rallying cry, Huawei’s ability to deliver on the Ascend 910D’s promise will test the limits of China’s semiconductor ecosystem and reshape competitive dynamics in AI hardware globally. The coming months will reveal whether Huawei can transition from a defensive innovator to a credible challenger at the frontier of AI acceleration.
(Adapted from CoinTelegraph.com)