MiniMax's new open M2.5 and M2.5 Lightning near state-of-the-art while costing 1/20th of Claude Opus 4.6


Chinese AI startup MiniMax, headquartered in Shanghai, has sent shockwaves through the AI industry today with the release of its new M2.5 language model in two variants, which promises to make high-end artificial intelligence so cheap you might stop worrying about the bill entirely.

It's also said to be "open source," though the weights (the model's trained parameters) and code haven't been posted yet, nor has MiniMax specified the license or its terms. But that's almost beside the point given how cheaply MiniMax is serving the model through its own API and those of partners.

For the last few years, using the world’s most powerful AI was like hiring an expensive consultant—it was brilliant, but you watched the clock (and the token count) constantly. M2.5 changes that math, dropping the cost of the frontier by as much as 95%.

M2.5 delivers performance that rivals top-tier models from Google and Anthropic at a fraction of the cost, particularly in agentic tool use for enterprise tasks such as creating Microsoft Word, Excel and PowerPoint files. MiniMax is betting that the future isn't just about how smart a model is, but how often you can afford to use it.

Indeed, to this end, MiniMax says it worked "with senior professionals in fields such as finance, law, and social sciences" to ensure the model could perform real work up to their specifications and standards.

This release matters because it signals a shift from AI as a "chatbot" to AI as a "worker". When intelligence becomes "too cheap to meter," developers stop building simple Q&A tools and start building "agents"—software that can spend hours autonomously coding, researching, and organizing complex projects without breaking the bank.

In fact, MiniMax has already deployed the model into its own operations. The company says 30% of all tasks at its headquarters are now completed by M2.5, and 80% of its newly committed code is generated by the model.

As the MiniMax team writes in their release blog post, "we believe that M2.5 provides virtually limitless possibilities for the development and operation of agents in the economy."

Technology: sparse power and the CISPO breakthrough

The secret to M2.5’s efficiency lies in its Mixture of Experts (MoE) architecture. Rather than running all of its 230 billion parameters for every single word it generates, the model only "activates" 10 billion. This allows it to maintain the reasoning depth of a massive model while moving with the agility of a much smaller one.
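As a rough illustration of how sparse activation works, here is a generic top-k Mixture-of-Experts routing sketch in PyTorch. The expert count, dimensions, and gating scheme are invented for illustration and are not MiniMax's actual architecture:

```python
# Generic top-k Mixture-of-Experts routing sketch (illustrative only;
# not MiniMax's implementation -- sizes and expert count are invented).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=512, n_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)      # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)         # routing probabilities
        weights, idx = gate.topk(self.top_k, dim=-1)     # keep only top-k experts per token
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique():                 # only the selected experts run,
                mask = idx[:, k] == e                    # so most parameters stay idle
                out[mask] += weights[mask, k].unsqueeze(-1) * self.experts[int(e)](x[mask])
        return out

x = torch.randn(8, 512)          # 8 tokens
print(TinyMoE()(x).shape)        # torch.Size([8, 512])
```

Only the selected experts run for each token, which is why a 230-billion-parameter model can decode with roughly the compute footprint of a 10-billion-parameter one.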

To train this complex system, MiniMax developed a proprietary Reinforcement Learning (RL) framework called Forge. MiniMax engineer Olive Song said on the ThursdAI podcast on YouTube that this technique was instrumental in scaling performance despite the relatively small number of active parameters, and that the model was trained over a period of two months.

Forge is designed to help the model learn from "real-world environments" — essentially letting the AI practice coding and using tools in thousands of simulated workspaces.

"What we realized is that there's a lot of potential with a small model like this if we train reinforcement learning on it with a large amount of environments and agents," Song said. "But it's not a very easy thing to do," adding that was what they spent "a lot of time" on.

To keep the model stable during this intense training, they used a mathematical approach called CISPO (Clipping Importance Sampling Policy Optimization) and shared the formula on their blog.

This formula ensures the model doesn't over-correct during training, allowing it to develop what MiniMax calls an "Architect Mindset". Instead of jumping straight into writing code, M2.5 has learned to proactively plan the structure, features, and interface of a project first.
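The exact equation isn't reproduced in this article, but methods in this family work by clipping the importance-sampling ratio between the updated policy and the policy that generated the data, so a single surprising sample can't drag the model too far in one step. Here is a minimal, assumed-form sketch of that idea, not MiniMax's published CISPO objective:

```python
# Generic clipped importance-sampling policy-gradient loss (an assumed, simplified
# form for illustration -- not MiniMax's exact CISPO objective).
import torch

def clipped_is_pg_loss(logp_new, logp_old, advantages, eps=0.2):
    """logp_new / logp_old: per-token log-probs under the current / sampling policy."""
    ratio = torch.exp(logp_new - logp_old)               # importance-sampling weight
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)   # keep the weight close to 1
    # Stop-gradient on the clipped weight: it scales the update but is not itself optimized.
    weight = clipped.detach()
    # REINFORCE-style term: weighted log-prob times advantage, averaged over tokens.
    return -(weight * advantages * logp_new).mean()

# Toy usage with random numbers standing in for real rollout statistics.
logp_new = torch.randn(8, requires_grad=True)
logp_old = logp_new.detach() + 0.1 * torch.randn(8)
adv = torch.randn(8)
loss = clipped_is_pg_loss(logp_new, logp_old, adv)
loss.backward()
print(float(loss))
```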

Benchmarks at or near the state of the art

The results of this architecture are reflected in the latest industry leaderboards. M2.5 hasn't just improved; it has vaulted into the top tier of coding models, approaching Anthropic's latest model, Claude Opus 4.6, released just a week ago, and suggesting that Chinese labs now trail far better-resourced (in GPU terms) U.S. rivals by days rather than months.

Here are some of the new MiniMax M2.5 benchmark highlights:

SWE-Bench Verified: 80.2% — approaching Claude Opus 4.6.

BrowseComp: 76.3% — industry-leading search and tool use.

Multi-SWE-Bench: 51.3% — state-of-the-art in multi-language coding.

BFCL (tool calling): 76.8% — high-precision agentic workflows.

On the ThursdAI podcast, host Alex Volkov pointed out that MiniMax M2.5 operates extremely quickly and uses fewer tokens to complete tasks, working out to roughly $0.15 per task compared with $3.00 for Claude Opus 4.6.

Breaking the cost barrier

MiniMax is offering two versions of the model through its API, both focused on high-volume production use:

M2.5-Lightning: Optimized for speed, delivering 100 tokens per second. It costs $0.30 per 1M input tokens and $2.40 per 1M output tokens.

Standard M2.5: Optimized for cost, running at 50 tokens per second. It costs half as much as the Lightning version ($0.15 per 1M input tokens / $1.20 per 1M output tokens).
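For developers, access to either variant looks like a standard chat-completions request. Below is a minimal sketch that assumes an OpenAI-compatible endpoint; the base URL, model identifier, and environment variable are placeholders rather than confirmed values, so check MiniMax's API documentation before using them:

```python
# Hypothetical call sketch -- the base URL and model name are placeholders,
# assuming MiniMax exposes an OpenAI-compatible chat-completions endpoint.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MINIMAX_API_KEY"],        # placeholder environment variable
    base_url="https://api.minimax.example/v1",    # replace with the real endpoint
)

response = client.chat.completions.create(
    model="MiniMax-M2.5",                         # or "MiniMax-M2.5-Lightning"
    messages=[
        {"role": "system", "content": "You are a coding agent. Plan before you write code."},
        {"role": "user", "content": "Draft a project plan for a quarterly-report generator."},
    ],
)
print(response.choices[0].message.content)
```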

In plain language: MiniMax claims you can run four "agents" (AI workers) continuously for an entire year for roughly $10,000.
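That figure is easy to sanity-check. The arithmetic below assumes an agent generating output nonstop at the standard M2.5 rate of 50 tokens per second and ignores input-token costs and idle time, so it is a rough estimate of my own, not MiniMax's math:

```python
# Back-of-the-envelope check of the "four agents for ~$10,000/year" claim.
# Assumptions (mine, not MiniMax's): continuous generation at 50 output tokens/sec,
# standard M2.5 output pricing, input-token costs ignored.
TOKENS_PER_SECOND = 50
SECONDS_PER_YEAR = 365 * 24 * 3600
OUTPUT_PRICE_PER_M = 1.20            # USD per 1M output tokens (standard M2.5)

tokens_per_agent = TOKENS_PER_SECOND * SECONDS_PER_YEAR          # ~1.58 billion tokens
cost_per_agent = tokens_per_agent / 1_000_000 * OUTPUT_PRICE_PER_M
print(f"one agent, output only:   ${cost_per_agent:,.0f}/year")      # ~$1,892
print(f"four agents, output only: ${4 * cost_per_agent:,.0f}/year")  # ~$7,569
# Adding input tokens and overhead lands in the same ballpark as the ~$10,000 figure.
```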

For enterprise users, this pricing is roughly 1/10th to 1/20th the cost of competing proprietary models like GPT-5 or Claude Opus 4.6, as the comparison below shows (all prices in US dollars per 1 million tokens):

| Model | Input | Output | Total (input + output) | Source |
| --- | --- | --- | --- | --- |
| Qwen 3 Turbo | $0.05 | $0.20 | $0.25 | Alibaba Cloud |
| deepseek-chat (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| deepseek-reasoner (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| Grok 4.1 Fast (reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| Grok 4.1 Fast (non-reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| MiniMax M2.5 | $0.15 | $1.20 | $1.35 | MiniMax |
| MiniMax M2.5-Lightning | $0.30 | $2.40 | $2.70 | MiniMax |
| Gemini 3 Flash Preview | $0.50 | $3.00 | $3.50 | Google |
| Kimi-k2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| ERNIE 5.0 | $0.85 | $3.40 | $4.25 | Baidu |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max (2026-01-23) | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro (≤200K) | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Gemini 3 Pro (>200K) | $4.00 | $18.00 | $22.00 | Google |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.2 Pro | $21.00 | $168.00 | $189.00 | OpenAI |
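A quick calculation against the table's combined totals shows where the 1/10th-to-1/20th figure comes from; real savings depend on each workload's mix of input and output tokens:

```python
# Ratio of combined (input + output) per-1M-token prices from the table above.
# Actual savings depend on each workload's input/output token mix.
minimax_m25 = 0.15 + 1.20         # $1.35
gpt_52      = 1.75 + 14.00        # $15.75
opus_46     = 5.00 + 25.00        # $30.00

print(f"GPT-5.2 / M2.5:         {gpt_52 / minimax_m25:.1f}x")   # ~11.7x
print(f"Claude Opus 4.6 / M2.5: {opus_46 / minimax_m25:.1f}x")  # ~22.2x
```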

Strategic implications for enterprises and leaders

For technical leaders, M2.5 represents more than just a cheaper API. It changes the operational playbook for enterprises right now.

The pressure to "optimize" prompts to save money is gone. You can now deploy high-context, high-reasoning models for routine tasks that were previously cost-prohibitive.

The 37% speed improvement in end-to-end task completion means the "agentic" pipelines valued by AI orchestrators — where models talk to other models — finally move fast enough for real-time user applications.

In addition, M2.5’s high scores in financial modeling (74.4% on MEWC) suggest it can handle the "tacit knowledge" of specialized industries like law and finance with minimal oversight.

Because M2.5 is positioned as an open-source model, organizations could eventually run intensive, automated code audits at a scale that was previously impossible without massive human intervention, all while keeping tighter control over data privacy. Until the license terms and weights are actually posted, however, "open source" remains a label rather than a guarantee.

MiniMax M2.5 is a signal that the frontier of AI is no longer just about who can build the biggest brain, but who can make that brain the most useful—and affordable—worker in the room.


