Mistral AI has launched Devstral 2, a new family of open-source coding AI models built to advance software development through agentic coding capabilities and long-context understanding. The release includes two primary variants: a flagship 123-billion-parameter model and a smaller 24-billion-parameter version, Devstral Small 2. Both post state-of-the-art results on the SWE-Bench Verified benchmark, with the larger model scoring 72.2% and the smaller reaching 68%, making the smaller version the most powerful lightweight open-source coding model currently available.
The 123B parameter model is released under a modified MIT license that includes certain usage restrictions, notably preventing commercial use for companies with monthly revenues exceeding $20 million, while the 24B parameter Devstral Small 2 is offered under a more permissive Apache 2.0 license suitable for local deployment and commercial use. Both models offer exceptionally large context windows of 256,000 tokens, enabling them to handle complex codebases and long development sessions efficiently. The smaller model’s modest size allows it to run locally on typical consumer hardware, such as Nvidia RTX 4090 GPUs and MacBook Pro M3 Max machines with 48GB memory, delivering strong coding assistance for Rust, JavaScript, and other languages.
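To see why the 24B model fits on the consumer hardware mentioned above, a back-of-envelope VRAM estimate helps: weights take roughly (parameter count × bits per parameter ÷ 8) bytes, plus runtime overhead for activations and KV cache. The sketch below uses a crude 1.2× overhead factor, which is an assumption for illustration, not a measured figure:

```python
def model_memory_gb(params_billion: float, bits_per_param: int, overhead: float = 1.2) -> float:
    """Rough memory estimate for model weights, scaled by an assumed
    overhead factor covering activations and KV cache."""
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return round(weight_bytes * overhead / 1e9, 1)

# Devstral Small 2 (24B) at common precision levels:
print(model_memory_gb(24, 16))  # fp16:  57.6 GB -- too large for a single 4090
print(model_memory_gb(24, 8))   # 8-bit: 28.8 GB
print(model_memory_gb(24, 4))   # 4-bit: 14.4 GB -- fits a 24 GB RTX 4090 or a 48 GB Mac
```

By this estimate, quantized variants are what make single-GPU and 48GB-laptop deployment plausible, consistent with the hardware the release notes call out.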
Accompanying the Devstral 2 models is Mistral Vibe, a new open-source command-line interface (CLI) tool that integrates deeply with the models to automate codebase management. Mistral Vibe lets developers interact with code through natural-language prompts directly from the terminal, handling refactoring, branch management, and codebase exploration much as a senior developer would. It works with various popular environments and tools including the Zed editor, Kilo Code, Cline, and AI coding agents such as Claude Code and SWE-agent. Installation uses standard tooling commands, and the CLI is also released under the Apache 2.0 license.
Performance evaluations suggest that Devstral 2 competes effectively with much larger models in the field, offering nearly comparable results to prominent AI code agents at significantly reduced computational cost (reportedly 7x cheaper than Claude Sonnet). Early testing shows the smaller model sustaining around 35 tokens per second on dual Nvidia RTX 3090 GPUs with context windows up to 128K tokens, underscoring its efficiency and practicality for local development environments.
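To put that decode rate in practical terms, a quick calculation shows what 35 tokens per second means for interactive latency. The sketch below ignores prompt prefill time, which adds further latency for very long contexts:

```python
def generation_time_s(tokens: int, tokens_per_second: float) -> float:
    """Wall-clock time to stream a response at a steady decode rate
    (prompt prefill excluded, so long-context requests will take longer)."""
    return tokens / tokens_per_second

# At the ~35 tok/s reported for dual RTX 3090s, a 1,000-token reply takes:
print(round(generation_time_s(1000, 35), 1))  # ~28.6 seconds
```

Roughly half a minute for a substantial code answer is workable for local agentic workflows, though noticeably slower than hosted inference.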
For the European AI startup Mistral, Devstral 2 marks a significant milestone for open AI development in Europe, positioning the company competitively against global AI giants by delivering powerful, open, and accessible coding tools. Its approach to licensing and model accessibility balances high performance with broad availability, from decentralized GPU cloud deployment to offline edge inference on personal devices. The ecosystem formed by Devstral 2’s API access, local deployment options, and the Vibe CLI promises to reshape AI-driven software engineering productivity in 2025 and beyond.
In summary, Devstral 2 and its accompanying Vibe CLI tool offer:
– A flagship 123B parameter model scoring 72.2% on SWE-Bench Verified benchmarks with a 256K-token context window.
– A smaller 24B parameter model delivering state-of-the-art open-source performance for local use on normal consumer GPUs.
– Dual licensing strategies balancing open use and commercial restrictions.
– Mistral Vibe CLI enabling natural-language command-driven coding automation.
– Compatibility with numerous existing AI coding frameworks and tools.
– Availability via API and open-source repositories, with a focus on accessibility, cost-efficiency, and scalability.
This release is generating considerable excitement among developers for its potential to accelerate coding tasks, automate complex workflows, and democratize high-performance AI coding assistance.
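As a concrete illustration of the API access mentioned above, the sketch below builds an OpenAI-style chat-completions payload for Mistral's hosted endpoint. The endpoint URL follows Mistral's standard chat-completions API; the model id `devstral-2` is an assumption for illustration and should be checked against Mistral's published model list:

```python
import json

API_URL = "https://api.mistral.ai/v1/chat/completions"  # Mistral's chat-completions endpoint
MODEL_ID = "devstral-2"  # assumed id -- verify against Mistral's model list

def build_request(prompt: str, model: str = MODEL_ID) -> dict:
    """Assemble an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Explain what this Rust function does and suggest a refactor.")
print(json.dumps(payload, indent=2))
# To send: POST this payload to API_URL with an "Authorization: Bearer <key>" header.
```

Because the request shape is OpenAI-compatible, existing client libraries and agent frameworks can typically target the endpoint by swapping the base URL and model id.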