Which LLM in your box?
All supported models are open-weights or ship under a clear commercial licence. You can run several in parallel and switch between them at any time.
These models are all open-weights, but they were trained by teams in different jurisdictions. For a CISO, provenance matters as much as the licence: sanctions exposure, potential audits, and alignment compatible with your legal framework.
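As a minimal sketch of what "switch at any time" means in practice: most local inference servers expose an OpenAI-compatible chat endpoint, and we assume the same here. The URL, API key, and model names below are illustrative placeholders, not the product's actual API.

```python
# Sketch only: assumes an OpenAI-compatible endpoint on the box.
# URL, key, and model names are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://lmbox.internal:8000/v1",  # hypothetical on-prem endpoint
    api_key="not-needed-on-prem",              # placeholder; auth is deployment-specific
)

def ask(model: str, prompt: str) -> str:
    """Send the same prompt to whichever model is selected."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Two models loaded side by side: switching is a one-parameter change.
print(ask("mistral-large", "Summarise our data-retention policy."))
print(ask("llama-3-70b", "Summarise our data-retention policy."))
```

Because the endpoint is model-agnostic, comparing two candidates on the same workload is a loop over model names rather than a re-integration.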
High-end French-trained model. The best option for EU sovereignty, with solid quality on agentic coding.
A good perf/cost trade-off when you don't need the Large. Apache 2.0 means no usage constraints.
Code-specialised, made by Mistral. Powers the Code Reviewer module for tech teams.
Excellent quality-to-size ratio on the LMbox M and L. Solid reasoning and writing across many languages.
The smaller sibling: fits on an LMbox S, with solid performance for everyday tasks.
Meta's reference model: massive global adoption, mature ecosystem, but US jurisdiction.
Compact Llama 3: fits on an LMbox S, great for high-volume usage.
Multilingual audio transcription. Powers the Meeting Summarizer module.
2026 open-weights champion on LiveCodeBench. MoE architecture with sub-agent parallelism; requires the LMbox XL.
The most serious rival to Claude Sonnet on SWE-bench. Permissive MIT licence; MoE with 671B total parameters and 37B active per token, so inference compute is close to a 37B dense model while memory must still hold all 671B weights.
Alibaba's 2026 flagship. Solid multi-file refactoring and a 256k-token context.
The Qwen 3.6 sweet spot that fits on an LMbox L. An excellent alternative to Llama 3 70B with less RAM.
Ultra-long-context specialist: 1 million tokens of input, enough to ingest a whole codebase in a single query (see the sketch after this list).
Excellent option for fine-tuning on your own data: MIT licence, no commercial restrictions.
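To make the 1-million-token claim concrete, here is a rough sketch of a whole-codebase query, under the same assumptions as the sketch above (OpenAI-compatible endpoint; the model name is hypothetical). The four-characters-per-token check is a crude heuristic, not a real token count.

```python
# Sketch only: pack a repository into one long-context prompt.
# Endpoint and model name are hypothetical; ~4 chars/token is a rough heuristic.
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://lmbox.internal:8000/v1", api_key="not-needed-on-prem")

MAX_INPUT_TOKENS = 1_000_000  # the 1M-token input window

def pack_repo(root: str, suffixes: tuple = (".py", ".md", ".toml")) -> str:
    """Concatenate source files, each prefixed with its path for attribution."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            parts.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

code = pack_repo("./my-project")
assert len(code) / 4 < MAX_INPUT_TOKENS, "codebase likely exceeds the context window"

resp = client.chat.completions.create(
    model="long-context-model",  # hypothetical name for the 1M-context model
    messages=[{"role": "user", "content": code + "\n\nWhere is authentication handled?"}],
)
print(resp.choices[0].message.content)
```

In production you would count tokens with the model's own tokenizer and filter out vendored or generated files, but the shape of the query stays this simple.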
Which model family for which customer?
We built a quality × sovereignty × cost matrix to make the decision explicit. From law firms to critical-infrastructure datacenters, every profile has a clear answer.
See the decision matrix
You stay in control of what runs
A proprietary model to integrate?
You've fine-tuned an internal model, or want to integrate one that isn't on the list (Falcon, Phi, DeepSeek, etc.)? We add it to your box. Typical lead time: 2 weeks.
Discuss your model