
China’s AI arms race sees sector brace for major flagship model launch week

A ‘stealth’ model has emerged, while upcoming flagships such as Alibaba’s Qwen-3.5 and Zhipu’s GLM-5 aim to spur domestic competition following releases by US heavyweights

China’s AI sector is bracing for a monumental week, with a flurry of anticipated model releases. Photo: Shutterstock
Vincent Chow

China’s AI sector is bracing for a monumental week, with a flurry of new models – including a potent “stealth” contender – emerging as domestic tech giants prepare to unveil their flagship products.

The race to release new models ahead of the Lunar New Year holiday underscores the intense global competition between frontier companies for users’ attention amid a rapid acceleration of AI progress at the start of 2026, following high-profile releases from US heavyweights Anthropic and OpenAI.

On Sunday, a member of Alibaba Cloud’s model-development team opened pull requests – a developer’s proposal to add new code to a shared software project – for the company’s next-generation family of models on the open-source developer platforms Hugging Face and GitHub. Such platforms are online repositories where programmers can share, collaborate on and manage software code, making it publicly accessible for use and modification.
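
For readers unfamiliar with the mechanics, the snippet below is a minimal sketch of how a contributor might open such a pull request on Hugging Face programmatically, using the huggingface_hub Python library. The repository ID and file names are hypothetical placeholders, not details of the actual Qwen-3.5 submission.

```python
# Minimal sketch: proposing a new file to a Hugging Face model repository as a
# pull request. The repo_id and file paths below are hypothetical placeholders.
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()  # uses the access token saved via `huggingface-cli login`

api.create_commit(
    repo_id="example-org/example-model",           # hypothetical target repository
    operations=[
        CommitOperationAdd(
            path_in_repo="config.json",             # file being proposed
            path_or_fileobj="local/config.json",    # local copy of that file
        ),
    ],
    commit_message="Add next-generation model config",
    create_pr=True,  # submit the change as a pull request, not a direct commit
)
```

On GitHub the workflow is analogous: the contributor pushes the proposed changes to a branch and asks the repository’s maintainers to review and merge them.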

The centrepiece of the new family is the much-anticipated Qwen-3.5, set to arrive almost a year after the release of the Hangzhou-based tech giant’s previous model generation, Qwen-3. That generation helped propel Qwen to become the most popular open-model family globally over the course of 2025, thanks to its strong performance, permissive licence and wide range of use cases.

Alibaba Cloud is the AI and cloud computing unit of Alibaba Group Holding, owner of the South China Morning Post.

Based on preliminary information disclosed in the pull requests, Qwen-3.5 will include two models – one with 9 billion parameters and the other with 35 billion – and will offer native multimodal support for the first time. Parameters are the variables, adjusted during training, that encode a model’s “intelligence”. Generally, a higher parameter count means a more powerful model, but also one that is more computationally demanding. Multimodal support means the AI can understand and process different types of data, such as text, images and audio.
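
To illustrate why a higher parameter count is more computationally demanding, the rough sketch below (not from the article) estimates the memory needed just to store the weights of models at the two reported sizes. The two-bytes-per-parameter figure assumes 16-bit precision and ignores activations and other overhead.

```python
# Back-of-the-envelope estimate (assumption: 16-bit weights, ~2 bytes per parameter).
# Counts only the memory to hold the weights, ignoring activations, the KV cache
# and framework overhead.
def weight_memory_gb(num_parameters: float, bytes_per_param: float = 2.0) -> float:
    """Approximate weight storage in gigabytes."""
    return num_parameters * bytes_per_param / 1e9

for label, params in [("9-billion-parameter model", 9e9),
                      ("35-billion-parameter model", 35e9)]:
    print(f"{label}: ~{weight_memory_gb(params):.0f} GB of weights at 16-bit precision")
```

Under those assumptions, the smaller model needs roughly 18 GB for its weights and the larger roughly 70 GB, which is why bigger models require correspondingly more hardware to train and run.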

The two models will also feature the company’s next-generation architecture, which was first previewed in September in an experimental model called Qwen3-Next.
