🚀 Just dropped: the GLM-4.5 tech report! This powerhouse Mixture-of-Experts (MoE) LLM packs 355 billion total parameters (32 billion active per token) and offers hybrid reasoning: a "thinking" mode for tricky tasks and a fast direct-response mode when speed counts! 🤯

Key features:

- Trained on a whopping 23 trillion tokens, then refined with supervised fine-tuning and reinforcement learning via expert-model iteration.

- Excels at agentic tasks, reasoning, and coding:

— TAU-Bench: 70.1% ✔️

— AIME 24: 91.0% 🎉

— SWE-bench Verified: 64.2% 💻

- Despite being smaller than many frontier models, it ranks #3 overall and #2 on agentic benchmarks among all evaluated models! 🥉🥈

- Two versions out: full-size GLM-4.5 (355B) and compact GLM-4.5-Air (106B)—both open to the community! 🌍

This is a game-changer for open LLMs—a hybrid champ that can reason, act, and code—all in one framework! 💪✨
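Since both weights are open, you can serve GLM-4.5-Air yourself and hit it like any chat API. Below is a minimal sketch assuming an OpenAI-compatible endpoint (e.g., a local vLLM or SGLang server); the URL, the model id `zai-org/GLM-4.5-Air`, and the `enable_thinking` flag for toggling the hybrid "thinking" mode are assumptions from common serving setups, not details given in the post.

```python
# Sketch: querying a locally served GLM-4.5-Air via an OpenAI-compatible
# /v1/chat/completions endpoint. Endpoint URL, model id, and the
# enable_thinking flag are assumptions, not confirmed by the post.
import json
import urllib.request


def build_request(question: str, thinking: bool = True) -> dict:
    """Assemble a chat-completions payload; `thinking` toggles the
    hybrid deep-reasoning mode (flag name is an assumption)."""
    return {
        "model": "zai-org/GLM-4.5-Air",
        "messages": [{"role": "user", "content": question}],
        "chat_template_kwargs": {"enable_thinking": thinking},
    }


if __name__ == "__main__":
    # POST the request to a hypothetical local server.
    payload = json.dumps(build_request("Why is the sky blue?")).encode()
    req = urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
        print(body["choices"][0]["message"]["content"])
```

Turning `thinking` off trades the slower deliberate mode for the quick direct-response mode, which is the practical upside of a hybrid-reasoning model: one deployment covers both workloads.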