Zhipu AI Releases GLM-5 Open-Source Model Trained on Huawei Chips

China's Zhipu AI released GLM-5, a 744-billion-parameter open-source model trained on Huawei Ascend chips, scoring 77.8% on SWE-bench under the MIT license.

Feb 27, 2026 - 16:46

China's Zhipu AI Drops 744-Billion-Parameter Open Model Trained on Domestic Chips

Zhipu AI, the Beijing-based artificial intelligence company backed by Tsinghua University, released GLM-5 on Thursday — a 744-billion-parameter mixture-of-experts model with 44 billion parameters active per inference pass, a 200,000-token context window, and a score of 77.8 percent on the SWE-bench Verified coding benchmark. The model is available under the MIT open-source license, meaning any developer or company worldwide can download, modify, and deploy it without licensing fees.

The release landed with considerable force in the AI community for two reasons beyond its raw capabilities. First, GLM-5 was trained entirely on Huawei Ascend chips — not NVIDIA H100s or A100s, which the US government has blocked from export to China since 2022. The release demonstrates that China can build competitive frontier AI training infrastructure without access to the world's most advanced semiconductor hardware.

Second, the MIT license means GLM-5 is immediately and freely available to any government, company, or developer anywhere in the world — including US competitors and adversaries — creating exactly the kind of proliferation scenario that US AI policymakers have been warning about for years.

What the Benchmarks Show

GLM-5's 77.8 percent score on SWE-bench Verified — a benchmark that tests AI systems on real-world software engineering tasks drawn from GitHub — edges out Google DeepMind's Gemini 3.1 Pro at 77.1 percent and performs comparably to Anthropic's Claude Opus 4.6 and OpenAI's GPT-5.3 Codex in code generation tasks. Performance on general language reasoning benchmarks like MMLU places it in the top tier of currently available models.

The mixture-of-experts architecture — where only a fraction of the model's parameters are activated for any given input — makes GLM-5 significantly cheaper to run than a dense model of comparable parameter count. With roughly 44 billion of 744 billion parameters active per token, inference compute is a small fraction of what the full parameter count suggests, making enterprise deployment financially viable even for organizations without access to large compute clusters.
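Back-of-the-envelope arithmetic illustrates the efficiency gap. The sketch below uses the parameter counts reported in the article and the standard rule of thumb that a forward pass costs roughly two FLOPs per active parameter per token; the exact savings depend on implementation details not covered here.

```python
# Rough per-token inference compute: MoE (active params only) vs. a
# hypothetical dense model with the same total parameter count.
# Parameter figures are from the article; the 2-FLOPs-per-parameter
# forward-pass estimate is a common approximation, not a measurement.

TOTAL_PARAMS = 744e9   # all experts combined
ACTIVE_PARAMS = 44e9   # parameters used per inference pass

def forward_flops_per_token(params: float) -> float:
    """Approximate forward-pass FLOPs per token (~2 FLOPs per parameter)."""
    return 2 * params

moe_flops = forward_flops_per_token(ACTIVE_PARAMS)
dense_flops = forward_flops_per_token(TOTAL_PARAMS)

print(f"Active fraction:  {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")       # ~5.9%
print(f"Savings vs dense: {dense_flops / moe_flops:.1f}x per token") # ~16.9x
```

On these numbers, only about six percent of the model is exercised per token, so serving cost tracks a ~44B-parameter model even though the full weights must still fit in memory.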

According to Dr. Percy Liang, Director of the Center for Research on Foundation Models at Stanford University, "GLM-5 confirms that the export control strategy for slowing Chinese AI development has, at best, bought time. China now has the training infrastructure and the talent to compete at the frontier with or without US hardware."

Policy Implications in Washington

The GLM-5 release arrived in the same week that the Trump administration's Commerce Department was reviewing a new set of proposed restrictions on advanced AI model exports — a regulatory framework that would require licenses for US companies to share frontier model weights with foreign entities.

GLM-5 immediately complicated that framework. A Chinese lab has now released a frontier-quality model openly and freely, meaning any US export restriction on model weights would primarily disadvantage American open-source developers while doing little to restrict access to capable AI globally.

Senate Intelligence Committee Chairman Tom Cotton called the release "a direct challenge to American technological leadership" and demanded an emergency briefing from the National Security Council. The administration's AI policy team faces the uncomfortable question of whether tighter restrictions or faster domestic open-source development better serves American interests in a world where GLM-5 exists and is free to download.