Alibaba is taking a step forward in developing open-source AI models with the introduction of Qwen3.5. The company claims that this new model can compete with, and in some cases outperform, established names such as GPT-5.2 and Claude 4.5 Opus in several areas.
SiliconANGLE reports that Qwen3.5 is available via Hugging Face and is released under an open-source license. With this, Alibaba is explicitly targeting developers and research institutions that want to work with the model themselves. The system can process very long prompts, up to 260,000 tokens, a window that can reportedly be extended further with additional optimizations. This makes it suitable for complex applications such as extensive document analysis and code generation. In addition, it supports more than 210 languages and dialects and can also process images, including graphs and other visual data.
The architecture of Qwen3.5 is based on the so-called mixture-of-experts principle. Instead of one large neural network, the model uses multiple specialized networks, only a limited number of which are active per task. This significantly reduces the required computing power without compromising performance. Although the total model has nearly 400 billion parameters, only a fraction of these are used per prompt.
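The routing idea behind mixture-of-experts can be illustrated with a toy sketch: a small router scores all experts per token, but only the top few actually run. This is a generic illustration of the principle, not Alibaba's implementation; all sizes and names below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 16, 2  # toy sizes; real models use far more experts

# Each "expert" is a small feed-forward network; here just one weight matrix.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w              # score every expert
    top = np.argsort(logits)[-TOP_K:]  # keep only the k highest-scoring ones
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the chosen experts
    # Only TOP_K of N_EXPERTS networks run, so most parameters stay idle
    # for any given token -- the source of the compute savings.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_forward(token)
```

With 2 of 16 experts active per token, only about an eighth of the expert parameters are exercised per forward pass, which is the same ratio logic behind running a fraction of a ~400-billion-parameter model per prompt.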
More efficient handling of context and memory
Alibaba implemented various technical refinements to further increase efficiency. An important part of this is the way the model handles attention. Whereas the memory footprint of classic attention grows quadratically with input length, Qwen3.5 combines the classic approach with a lighter variant that requires less memory. This makes the model more scalable for applications with large amounts of context.
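The trade-off can be sketched as follows. Sliding-window attention is one common "lighter variant"; the article does not specify which variant Qwen3.5 uses, so the code below is an assumption-laden illustration of the general memory argument, not the model's actual mechanism.

```python
import numpy as np

def full_attention(q, k, v):
    # Classic attention: the n x n score matrix dominates memory at long context.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ v

def windowed_attention(q, k, v, window=64):
    # Lighter variant (illustrative): each position attends only to a fixed
    # window of recent positions, so peak memory grows linearly with length
    # instead of quadratically.
    out = np.empty_like(q)
    for i in range(len(q)):
        lo = max(0, i - window + 1)
        out[i] = full_attention(q[i:i + 1], k[lo:i + 1], v[lo:i + 1])[0]
    return out

rng = np.random.default_rng(1)
n, d = 128, 16
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
dense = full_attention(q, k, v)
light = windowed_attention(q, k, v)
```

A hybrid design keeps a few full-attention layers for global recall while most layers use the cheaper variant, which is why such models can stretch to very long prompts.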
Qwen3.5 also uses a so-called gated delta network. This technique lets the model discard irrelevant information from its recurrent state, making training more efficient. Research by Nvidia Corp. and others has previously shown that this combination of techniques can reduce the hardware requirements for training large language models.
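At its core, a gated delta update maintains a key-to-value memory matrix, decays it with a forget gate, and then writes only the *error* between what the memory currently recalls and the new value. The sketch below is a heavily simplified single step in that spirit; the article gives no implementation details, so the parameter names and scalar gates are assumptions.

```python
import numpy as np

def gated_delta_step(S, k, v, alpha, beta):
    """One recurrent step of a simplified gated delta rule.

    S:     d x d memory state (key -> value associations)
    alpha: forget gate in [0, 1]; small values discard stale information
    beta:  write strength for the new key/value pair
    """
    S = alpha * S                          # gate: decay irrelevant memory
    pred = S @ k                           # what the memory recalls for key k
    S = S + beta * np.outer(v - pred, k)   # delta rule: correct only the error
    return S

d = 4
k = np.zeros(d); k[0] = 1.0                # unit-norm key
v = np.arange(d, dtype=float)
S = gated_delta_step(np.zeros((d, d)), k, v, alpha=1.0, beta=1.0)
recalled = S @ k                           # recalls v exactly for this key
```

Because the update rewrites only the mismatch rather than appending every new pair, the state stays compact, which is the kind of saving that translates into lower training hardware requirements.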
In internal tests, Alibaba compared Qwen3.5 with competing models on more than thirty benchmarks. These tests show that the model is particularly strong at following instructions and at complex reasoning tasks. The results do not show absolute dominance, but they do show that Qwen3.5 can compete with the top of the market and achieves better scores in specific scenarios.
Another striking feature is the focus on multimodality. According to Alibaba, Qwen3.5 outperforms its previous models, which were specifically designed for image analysis, in both visual reasoning and programming tasks that combine images and text. Alibaba is thus positioning Qwen3.5 as a versatile model for a wide range of AI applications.
With this release, Alibaba demonstrates that open models are not just an academic experiment, but also a serious alternative to commercial, closed AI systems. For developers and companies, Qwen3.5 offers a new option for integrating advanced AI functionality without being completely dependent on proprietary platforms.