AsianFin -- Intellifusion, one of China’s earliest AI chip developers, is making a strategic pivot toward AI model inference—betting that the era of training dominance is giving way to inference-led growth in computing demand.
The Shenzhen-based firm, listed on Shanghai’s STAR Market, unveiled its latest suite of inference-focused products on July 25, ahead of the 2025 World Artificial Intelligence Conference. Among them: the DeepQiong X6000 Mesh inference accelerator card, boasting 256 TOPS of compute and optimized for high-throughput workloads such as decoding 256 video streams in real time and supporting large models with hundreds of billions of parameters.
Intellifusion’s new all-in-one servers—Shenmu 6203, Tianzhou 6408, and Tianzhou 680G—extend this performance into data centers and edge environments, delivering up to 4 PFLOPS of inference capacity. CEO Chen Ning says these products mark a turning point for the company, which is now “fully committed” to inference computing chips after 11 years of neural processing unit development.
“2025 will be a defining year for AI. Large models are maturing, costs are falling, and inference is about to outpace training in both growth and application,” Chen told TMTPost.
AI development is typically divided into two stages: training, which demands massive datasets and compute, and inference, where trained models are deployed to solve real-world problems. As AI adoption broadens—from chatbots to autonomous vehicles—cloud-based inference is quickly taking center stage.
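The two stages can be made concrete with a toy model. The sketch below (a generic illustration, not Intellifusion’s stack) fits a tiny linear model by gradient descent—the compute-heavy training stage—then reuses the frozen weights for cheap forward passes, which is all inference is:

```python
def train(xs, ys, lr=0.01, epochs=2000):
    """Training: fit y = w*x + b by gradient descent (the compute-heavy stage)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

def infer(w, b, x):
    """Inference: deploy the trained weights in a single cheap forward pass."""
    return w * x + b

# Train once on samples of y = 3x + 1, then serve many inference requests.
w, b = train([0, 1, 2, 3], [1, 4, 7, 10])
print(round(infer(w, b, 10)))  # → 31
```

Training runs once (or periodically) over the whole dataset; inference runs per request, which is why broad AI adoption shifts aggregate compute demand toward the latter.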
According to IDC, cloud-based inference accounted for 58.5% of AI computing power in 2022 and is projected to hit 62.2% by 2026. AMD CEO Lisa Su forecasts AI inference compute demand will grow over 80% annually—potentially surpassing training as the primary driver for data center expansion.
“The inference chip market remains a blue ocean,” Chen said. “While the training chip sector is worth hundreds of billions, inference is just beginning. We believe it will outpace training within five years.”
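The five-year claim rests on compounding. A quick sketch with hypothetical numbers (the 5× starting gap and 20% training growth rate are assumptions for illustration; only the 80% inference growth figure comes from the forecast above):

```python
# If inference compute demand grows 80%/yr and training grows at a slower
# assumed rate, inference overtakes even a much larger training base quickly.

def years_to_overtake(start_ratio, fast=1.80, slow=1.20):
    """Years until the fast-growing quantity exceeds one start_ratio times larger."""
    years, gap = 0, start_ratio
    while gap > 1.0:
        gap *= slow / fast  # the gap shrinks by this factor each year
        years += 1
    return years

# Assumed: training demand starts 5x larger and grows 20%/yr.
print(years_to_overtake(5.0))  # → 4
```

At 80% annual growth, demand itself multiplies roughly 18× over five years (1.8⁵ ≈ 18.9), which is the arithmetic behind treating inference as the next driver of data center expansion.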
At the heart of Intellifusion’s new offerings is the DeepQiong X6000 Mesh accelerator card, powered by the firm’s self-developed fourth-generation NPU optimized for Transformer-based models. The card uses a D2D Chiplet design and C2C mesh architecture—an innovation in China’s AI chip ecosystem. Intellifusion claims it is the first company to mass-produce such chips using fully domestic fabrication and packaging processes.
Complementing the chip, Intellifusion is rolling out inference servers and integrated machines for data centers and smart city deployments. Customers include municipal computing centers, telecom carriers, research institutes, and major Chinese internet firms.
“The DeepSeek all-in-one machines break the ‘last mile’ in closed-loop AI deployment,” Chen said, adding that the cooling of AI hype is not a retreat but a rational reshuffling toward real-world use cases.
Intellifusion’s shift is already showing results. The company reported 2024 revenue of more than 900 million yuan, up 81.3% year-on-year. Q1 2025 revenue surged 168.2% to 264 million yuan, a record for the period.
A deal with Deyuan Fanghui to provide 4,000 PFLOPS of inference compute over three years is expected to contribute 1.6 billion yuan in revenue. Payments began in early 2025, with roughly 200 million yuan booked in the first half.
On the consumer side, Intellifusion is seeing strong uptake of its Qiancheng AI technologies in wearables, supplying Huawei, Honor, and OPPO, while its “Dr. Luka” hardware line continues to gain traction. The company expects 50%+ growth in its consumer business in H1 2025.
Looking ahead, Intellifusion is preparing to launch its next-generation inference chip architecture—“Computing Power Building Blocks 2.0”—by late 2026, featuring:
- Native FP8/FP4 support and custom operators for large models, with 5× compute efficiency and 3× energy efficiency.
- 10× bandwidth and memory efficiency.
- Full-mesh interconnect with all-reduce and memory-semantic access.
- Heterogeneous die design using UCIe D2D Chiplets.
- PCIe interface with CPU–NPU shared memory access.
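“Native FP8” means the hardware computes directly in 8-bit floating point rather than emulating it. The sketch below simulates rounding a value to the common FP8 E4M3 format (4 exponent bits, 3 mantissa bits) to show how coarse that precision is; it is a generic illustration of low-precision quantization, not Intellifusion’s implementation, and it skips NaN and some edge cases:

```python
import math

def quantize_e4m3(x):
    """Round x to the nearest FP8 E4M3 value (4 exponent, 3 mantissa bits).
    Illustrative sketch: ignores NaN encoding and some edge cases."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    x = min(abs(x), 448.0)           # clamp to E4M3's max normal value, 448
    exp = max(math.floor(math.log2(x)), -6)  # min normal exponent is -6
    step = 2.0 ** (exp - 3)          # 3 mantissa bits → 8 steps per binade
    return sign * round(x / step) * step

print(quantize_e4m3(0.3))   # → 0.3125
print(quantize_e4m3(1000))  # → 448.0 (saturates at the format's maximum)
```

With only 8 representable steps between successive powers of two, FP8 trades accuracy for the memory bandwidth and throughput gains that matter most in inference, where weights are frozen and error tolerance is known in advance.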
CTO Li Aijun says the upgrades will support embedded, edge, and cloud inference for architectures such as mixture-of-experts (MoE) and edge-scale large models.
Founded in 2014, Intellifusion has invested heavily in edge computing chips and has already shipped five generations of NPUs. In 2023, it launched its DeepEdge10 platform, targeting scenarios from IoT to intelligent computing centers.
Now, the company is placing its biggest bet yet on inference.
“Most inventions in the U.S. stay in labs,” said Chen. “But in China, the value is in large-scale implementation. AI inference chips will become the core infrastructure enabling AI to reshape all hardware—from glasses to robots—over the next five years.”
Chen believes that by linking data, algorithms, and chip development through China’s vast application scenarios, Intellifusion can drive a “data flywheel” of continuous innovation. He sees AI inference chips as China’s opportunity to gain a foothold in the Fourth Industrial Revolution.
“Our biggest asset isn’t chips. It’s our team,” he said. “With the right DNA, we’ll overcome challenges—from supply chains to ecosystems—and continue building a globally competitive inference chip company.”