Mistral AI
Efficiency-focused models and pioneers of open sparse Mixture-of-Experts (MoE).
2024-07
Mistral Large 2
Proprietary · Specs: 123B params
Flagship model designed for code generation, math, and reasoning.
2023-12
Mixtral 8x7B
Sparse MoE · Specs: 46.7B total params (~12.9B active per token)
First high-performance open-weight sparse MoE model; each token is routed to 2 of 8 experts per layer.
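The sparse MoE idea behind Mixtral can be sketched as follows: a small router scores every expert for each token, but only the top-k experts (k=2 in Mixtral) are actually evaluated, so compute per token stays far below the total parameter count. This is a minimal illustrative sketch with assumed names and shapes, not Mixtral's actual implementation.

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Sparse MoE feed-forward: route one token to its top-k experts.

    x:       (d,) token hidden state
    gate_w:  (n_experts, d) router weight matrix (illustrative)
    experts: list of n_experts callables mapping (d,) -> (d,)
    """
    logits = gate_w @ x                     # router score per expert
    top = np.argsort(logits)[-k:]           # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                            # softmax over selected experts only
    # Only the k selected experts run; the rest are skipped entirely.
    return sum(wi * experts[i](x) for wi, i in zip(w, top))
```

With 8 experts and k=2, only a quarter of the expert parameters are touched per token, which is why Mixtral's active parameter count is much smaller than its total.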
2023-09
Mistral 7B
Dense decoder · Specs: 7.3B params
Set a new standard for small production-grade models.