Mistral's Latest Update to Its Open Source Small Model Delivers Marked Improvements in AI Instruction Following

Published: 21 Jun 2025
AI frontrunner Mistral has raised its game, introducing targeted refinements to its Small model in the transition from version 3.1 to 3.2.

At the forefront of AI innovation, French AI company Mistral continues to enhance its technology, this time with a significant upgrade to its 24B-parameter open source model, Mistral Small 3.2.

Announced hot on the heels of its own domestic AI-optimized cloud service, Mistral Compute, the latest release shows Mistral’s commitment to carving out a unique spot in the AI field with strategic improvements to the existing model. The company’s promptness in shipping an update barely three months after Small 3.1 debuted is a testament to its dedication to maintaining a seat at the high table of AI innovation.

The Small 3.2 release aims to deliver concentrated enhancements to specific behaviours: instruction following, output stability, and function-calling robustness. By keeping the underlying architecture consistent with its predecessor, Small 3.2 balances stability with continued refinement.
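To illustrate the kind of instruction-following behaviour these refinements target, the sketch below sends a tightly constrained prompt to the model through Mistral's Python client. The model alias, prompt, and parameters here are assumptions chosen for illustration; consult Mistral's documentation for the identifier that actually maps to Small 3.2.

```python
import os
from mistralai import Mistral

# Assumed model alias for illustration; check Mistral's docs for the
# identifier that currently points to Small 3.2.
MODEL = "mistral-small-latest"

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# A deliberately strict instruction: the 3.2 refinements are aimed at
# making the model honour constraints like this more consistently.
response = client.chat.complete(
    model=MODEL,
    messages=[
        {"role": "system", "content": "Answer in exactly three bullet points, no preamble."},
        {"role": "user", "content": "Summarise the trade-offs between SQL and NoSQL databases."},
    ],
    max_tokens=256,
    temperature=0.2,
)

print(response.choices[0].message.content)
```

Whether the reply respects the three-bullet constraint on repeated runs is exactly the sort of consistency the 3.2 update is meant to improve.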

A stalwart in the AI world, Small 3.1 had already made its mark with full multimodal capabilities, broad multilingual understanding, and impressive long-context processing. It quickly set itself apart from other models in the same parameter range, particularly in efficient deployment and performance across diverse domains such as legal, medical, and technical fields.

What sets Small 3.2 apart is a refined focus. Rather than introducing new capabilities or architectural changes, this version concentrates on behaviour and reliability, delivering a more polished and efficient take on the existing product, in keeping with Mistral's strategy of refinement over reinvention.

Overall, the improvements are not just substantial but quantifiable. The company’s internal instruction-following accuracy made a noteworthy leap, improving from 82.75% in Small 3.1 to 84.78% in Small 3.2. Gains also showed up on external benchmarks, where the Arena Hard score jumped from 19.56% to 43.10%, more than doubling. In addition, the model is less prone to repetitive or runaway generations, making it more dependable for developers who need bounded, consistent responses.

With its enhanced iteration of the Small Model, Mistral has proven yet again that continuous innovation paired with strategic refinement can shape the future of AI technology.