MiroThinker 1.5 Delivers Trillion-Scale Performance with 30B Model
TL;DR
MiroMind has launched MiroThinker 1.5, a model with just 30 billion parameters that promises advanced research capabilities comparable to trillion-parameter models at a significantly lower cost.
What is MiroThinker 1.5?
MiroThinker 1.5 is MiroMind's 30-billion-parameter research model, built to deliver agentic research capabilities comparable to systems with trillions of parameters at a fraction of the cost. The launch targets growing demand for economical and efficient AI agents.
How Does MiroThinker 1.5 Stand Out?
The model stands out for **reasoning capabilities** that are claimed to exceed those of much larger models such as the 1-trillion-parameter Kimi K2, while sharply reducing inference costs: where Kimi K2 incurs considerably higher API costs, MiroThinker 1.5 runs at roughly $0.07 per call.
Reduced Risk of Hallucinations
One of the major challenges with AI models is the phenomenon of **hallucinations**, where AI provides incorrect answers with confidence. MiroThinker 1.5 mitigates this risk through what is called **scientist mode**, which involves a verification and investigation cycle. This allows the model to propose hypotheses and seek external evidence before reaching conclusions.
Performance on Benchmarks
MiroThinker 1.5 performed well on the **BrowseComp-ZH** benchmark, outperforming its 1-trillion-parameter competitor with a score of 69.8. This result reflects the model's efficiency in web research tasks.
Extended Tool Usage
With support for up to **400 tool calls** per session and a **256,000-token** context window, MiroThinker 1.5 is equipped for complex, multi-step tasks. This enables long, detailed workflows such as gathering and synthesizing information across many sources.
Innovation in Training
Another innovative detail of MiroThinker 1.5 is its **Time-Sensitive Training Sandbox**, which creates more realistic training conditions. The model only interacts with available data up to a certain point in time, preventing future information from affecting its reasoning.
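The core idea of a time-cutoff sandbox can be illustrated with a minimal filter: during a training episode, a simulated search tool returns only documents published on or before that episode's cutoff date. The record layout and field names below are assumptions for illustration, not MiroMind's actual data format.

```python
# Minimal sketch of the time-cutoff idea behind a "Time-Sensitive
# Training Sandbox": a simulated search tool may only surface documents
# the agent could have seen at the episode's point in time.
from datetime import date

CORPUS = [
    {"title": "Model A released", "published": date(2023, 5, 1)},
    {"title": "Model B released", "published": date(2024, 8, 15)},
    {"title": "Model C released", "published": date(2025, 2, 1)},
]

def search(cutoff: date) -> list[dict]:
    """Return only documents published on or before the cutoff date."""
    return [doc for doc in CORPUS if doc["published"] <= cutoff]

visible = search(date(2024, 12, 31))
print([d["title"] for d in visible])
# -> ['Model A released', 'Model B released']
```

Filtering the retrievable corpus this way prevents "future" documents from leaking into an episode, so the model cannot shortcut its reasoning with information it should not yet have.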
Practical Considerations for Implementation
For IT teams, MiroThinker still demands a substantial amount of GPU memory, but deployment is straightforward: the model can be served with vLLM, which exposes OpenAI-compatible API endpoints.
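Because a vLLM server speaks the OpenAI API, any OpenAI-compatible client can talk to it by pointing at the server's base URL. The sketch below just builds the request payload an OpenAI-style `/chat/completions` endpoint expects; the URL and model name are placeholders, to be replaced with whatever `vllm serve` was launched with.

```python
# Sketch of calling a MiroThinker model behind vLLM's OpenAI-compatible
# API. Base URL and model name are placeholders (assumptions), not
# MiroMind's published identifiers.
VLLM_BASE_URL = "http://localhost:8000/v1"   # default vLLM address
MODEL_NAME = "mirothinker-1.5"               # placeholder model id

def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style /chat/completions payload for a vLLM server."""
    return {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize today's AI news.")
# POST this payload as JSON to f"{VLLM_BASE_URL}/chat/completions",
# e.g. with the official `openai` client by setting base_url=VLLM_BASE_URL.
```

Since the endpoint mirrors the OpenAI API, existing tooling (SDKs, agent frameworks, gateways) can be reused without code changes beyond the base URL and model name.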
Future Perspectives for AI
The arrival of MiroThinker 1.5 highlights a shift in industry focus toward more interactive, tool-using AI rather than simply increasing parameter counts. MiroMind is betting that efficient tool interaction matters more for practical applications than raw model scale. This approach could change how companies deploy AI, moving away from the high costs and complexity of ever-larger models.
Content selected and edited with AI assistance. Original sources referenced above.


