Claude Haiku 4.5 (Anthropic) vs GPT-4o (OpenAI)

Side-by-side specs, pricing, and benchmarks. Pick a winner for your team's use case.

Use it in a Space

Spin up a Switchy Space with either model — your whole team can @-mention it with shared context, pooled credits, and one memory.

Pricing
               Claude Haiku 4.5 · GPT-4o
Input $/Mtok   $1.00 · $2.50
Output $/Mtok  $5.00 · $10.00
Context window
Claude Haiku 4.5: 200K tokens
GPT-4o: 128K tokens

Bars use square-root scaling so a 1M-token window doesn't crush a 200K one.
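The square-root scaling mentioned above can be sketched as follows; the 300px track width and the `bar_width` helper are illustrative assumptions, not the page's actual rendering code.

```python
import math

def bar_width(tokens: int, max_tokens: int, track_px: int = 300) -> int:
    """Scale a context-window bar by the square root of its share of the
    largest window, so a 1M-token window doesn't crush a 200K one."""
    return round(track_px * math.sqrt(tokens / max_tokens))

# On a hypothetical 300px track with a 1M-token maximum, a 200K window
# renders at sqrt(0.2) ≈ 45% of the track rather than a linear 20%.
```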

Release timeline
Claude Haiku 4.5: released 2025-10-15
GPT-4o: released 2024-05-13

Claude Haiku 4.5

Provider: Anthropic
Context: 200K tokens
Input $/Mtok: $1.00
Output $/Mtok: $5.00
Max output: 64,000 tokens
Modalities: text, image

GPT-4o

Provider: OpenAI
Context: 128K tokens
Input $/Mtok: $2.50
Output $/Mtok: $10.00
Max output: 16,384 tokens
Modalities: text, image, file

Price delta

Claude Haiku 4.5 is $1.50/Mtok cheaper than GPT-4o on input and $5.00/Mtok cheaper on output.
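To see what the delta means per request, here is a minimal cost sketch using the $/Mtok prices listed above; the token counts and model keys are illustrative assumptions.

```python
# $/Mtok prices from the comparison above.
PRICES = {
    "claude-haiku-4.5": {"input": 1.00, "output": 5.00},
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request: tokens x price, scaled from Mtok."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
haiku = request_cost("claude-haiku-4.5", 2000, 500)  # $0.0045
gpt4o = request_cost("gpt-4o", 2000, 500)            # $0.0100
```

At chat-scale volumes the gap compounds: the same million requests would cost roughly $4,500 on Haiku versus $10,000 on GPT-4o under these assumptions.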

Which to pick

Pick **Claude Haiku 4.5** for short, high-volume team chat. Its $1.00 in / $5.00 out per Mtok pricing undercuts GPT-4o ($2.50 / $10.00) by 2–2.5x on both sides, and its 200K context exceeds GPT-4o's 128K. For triage, classification, and quick transforms it's the obvious cost play. Pick **GPT-4o** when requests involve vision (charts, photos, document layouts) or you want OpenAI's voice/audio modalities downstream. GPT-4o's multimodal handling is still ahead in mixed-media turns; if the chat is text-only and high-frequency, the price difference favours Haiku.
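The guidance above can be condensed into a toy routing heuristic; the function name and model keys are illustrative assumptions, not a real API.

```python
def pick_model(has_media: bool, prompt_tokens: int) -> str:
    """Toy router for the 'which to pick' guidance; illustrative only."""
    # Mixed-media turns (vision now, voice/audio downstream): GPT-4o.
    if has_media:
        return "gpt-4o"
    # Prompts past GPT-4o's 128K window need Haiku's 200K context anyway;
    # for plain high-volume text, Haiku also wins on price.
    return "claude-haiku-4.5"
```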
Data last verified 1 hour ago. Sources aggregated hourly to weekly. See docs/architecture/model-directory.md.