Google DeepMind

Inventors of the Transformer, pioneering multimodal Gemini systems.

2025-11

Gemini 3 Pro / Flash

Omni
Specs: 1M token context

Google's latest flagship multimodal LLM, with efficient low-latency Flash variants.

2025-09

VaultGemma / Nested Learning

Privacy & Continual Learning
Specs: Research Breakthrough

Launch of the largest open model trained with differential privacy, alongside the Nested Learning paradigm for continual learning.

2025-03

Gemini 2.5 Pro

Reasoning
Specs: Thinking / 1M Context

Reasoning-focused Gemini model with long context and improved coding performance.

2025

Imagen 4

Image
Specs: Photorealistic

The latest text-to-image model series, with improved photorealism and prompt adherence.

2024-12

Gemini 2.0 Flash

Multimodal
Specs: Low Latency / Native Tool Use

Experimental release of the first model in the Gemini 2.0 family, optimized for speed and native tool calling.

2024-05

Veo

Video
Specs: Text-to-Long-Video

Generative video model for long-form content, later integrating native audio.

2024-02

Gemini 1.5 Pro

Long Context
Specs: 1M+ Context

Million-token context window breakthrough with near-perfect needle-in-a-haystack retrieval.

2023-12

Gemini Pro / Ultra

Multimodal
Specs: v1.0

First major Google multimodal LLMs, built from the ground up for text, image, audio, and video inputs.

2018-10

BERT

Encoder
Specs: 340M Params

Bidirectional Encoder Representations from Transformers; a foundational encoder model for NLP.

2017-06

Transformer (Paper)

Architecture
Specs: Attention Is All You Need

The attention-based architecture underlying virtually all modern LLMs.