Google: Gemini 2.5 Pro Preview 06-05 vs OpenAI: GPT-4.1

Side-by-side specs, pricing, and benchmarks. Pick a winner for your team's use case.

Use it in a Space

Spin up a Switchy Space with either model — your whole team @-mentions it with shared context, pooled credits, one memory.

Pricing
| $/Mtok | Gemini 2.5 Pro Preview 06-05 | GPT-4.1 |
| ------ | ---------------------------- | ------- |
| Input  | $1.25                        | $2.00   |
| Output | $10.00                       | $8.00   |
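Total cost follows directly from these per-Mtok rates. A minimal sketch (the workload numbers are hypothetical; only the rates come from the table above):

```python
def monthly_cost(input_mtok: float, output_mtok: float,
                 in_rate: float, out_rate: float) -> float:
    """Cost in USD for a workload measured in millions of tokens."""
    return input_mtok * in_rate + output_mtok * out_rate

# Example workload: 100 Mtok in, 10 Mtok out per month.
gemini = monthly_cost(100, 10, in_rate=1.25, out_rate=10.00)  # 225.0
gpt41  = monthly_cost(100, 10, in_rate=2.00, out_rate=8.00)   # 280.0
```

Input-heavy workloads favor Gemini's rates; output-heavy ones tilt toward GPT-4.1.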
Context window
- Google: Gemini 2.5 Pro Preview 06-05: 1,049K tokens
- OpenAI: GPT-4.1: 1,048K tokens

Bars use square-root scaling so a 1M-token window doesn't crush a 200K one.
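The square-root scaling mentioned above can be sketched in a few lines (the pixel width and function name are illustrative, not from the site's source):

```python
import math

def bar_width(context_tokens: int, max_tokens: int, max_px: int = 300) -> int:
    """Bar width proportional to sqrt(context), so large windows don't
    visually crush small ones."""
    return round(max_px * math.sqrt(context_tokens / max_tokens))

# Linear scaling would draw a 200K bar at ~19% of a 1M bar;
# square-root scaling draws it at ~44%.
bar_width(1_048_576, 1_048_576)  # 300
bar_width(200_000, 1_048_576)    # 131
```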

Release timeline
- Google: Gemini 2.5 Pro Preview 06-05: released 2025-06-05
- OpenAI: GPT-4.1: released 2025-04-14

Google: Gemini 2.5 Pro Preview 06-05

- Provider: google
- Context: 1,049K tokens
- Input $/Mtok: $1.25
- Output $/Mtok: $10.00
- Max output: 65,536 tokens
- Modalities: file, image, text, audio

OpenAI: GPT-4.1

- Provider: openai
- Context: 1,048K tokens
- Input $/Mtok: $2.00
- Output $/Mtok: $8.00
- Max output: not listed
- Modalities: image, text, file

Price delta

On input, Gemini 2.5 Pro Preview 06-05 is $0.75/Mtok cheaper than GPT-4.1. On output, it is $2.00/Mtok more expensive.
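Which delta dominates depends on your input-to-output token ratio: total cost ties when $0.75 × input Mtok = $2.00 × output Mtok, i.e. at roughly 2.7 input tokens per output token. A quick check using the listed rates (function name is illustrative):

```python
def cheaper_model(input_mtok: float, output_mtok: float) -> str:
    """Compare total cost at the listed rates; break-even is at
    input/output ~= 2.67 (0.75 * input == 2.00 * output)."""
    gemini = input_mtok * 1.25 + output_mtok * 10.00
    gpt41  = input_mtok * 2.00 + output_mtok * 8.00
    return "gemini-2.5-pro" if gemini < gpt41 else "gpt-4.1"

cheaper_model(10, 1)  # input-heavy: 22.5 vs 28.0 -> "gemini-2.5-pro"
cheaper_model(2, 1)   # output-heavy: 12.5 vs 12.0 -> "gpt-4.1"
```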

Which to pick

Pick **Gemini 2.5 Pro** when long context combined with low input price is what you're optimising for. Both ship 1M-token windows, but Gemini undercuts GPT-4.1 on input ($1.25 vs $2 per Mtok) and is closer on output ($10 vs $8). For bulk ingestion of long PDFs, multi-quarter transcripts, or large code dumps, Gemini's pricing scales better. Pick **GPT-4.1** when you specifically want OpenAI's tool-calling shape, function-calling consistency, or your downstream pipeline already speaks the OpenAI protocol. Output quality is broadly similar on general-purpose tasks; the deciding factor is usually ecosystem fit rather than benchmark deltas.
Data last verified 22 hours ago. Sources aggregated hourly to weekly. See docs/architecture/model-directory.md.