Anthropic: Claude Haiku 4.5 vs OpenAI: GPT-4.1

Side-by-side specs, pricing, and benchmarks. Pick a winner for your team's use case.

Use it in a Space

Spin up a Switchy Space with either model — your whole team @-mentions it with shared context, pooled credits, one memory.

Pricing
| Price | Claude Haiku 4.5 | GPT-4.1 |
|---|---|---|
| Input $/Mtok | $1.00 | $2.00 |
| Output $/Mtok | $5.00 | $8.00 |
Context window
| Model | Context window |
|---|---|
| Claude Haiku 4.5 | 200K tokens |
| GPT-4.1 | 1,048K tokens |

Bars use square-root scaling so a 1M-token window doesn't crush a 200K one.
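The square-root scaling mentioned above can be sketched as follows. This is an illustrative reconstruction, not the page's actual chart code; the function name and pixel width are assumptions.

```python
import math

def bar_width(tokens: int, max_tokens: int, max_px: int = 300) -> int:
    """Scale a context-window bar by the square root of its share of the
    largest window, so large windows don't visually crush small ones."""
    return round(max_px * math.sqrt(tokens / max_tokens))

# With linear scaling, 200K would be ~19% as wide as 1,048K;
# with square-root scaling it is ~44% — smaller, but still readable.
print(bar_width(200_000, 1_048_000))    # Haiku 4.5 bar
print(bar_width(1_048_000, 1_048_000))  # GPT-4.1 bar (full width)
```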

Release timeline
| Model | Release date |
|---|---|
| Claude Haiku 4.5 | 2025-10-15 |
| GPT-4.1 | 2025-04-14 |

Anthropic: Claude Haiku 4.5

Provider: anthropic
Context: 200k
Input $/Mtok: $1.00
Output $/Mtok: $5.00
Max output: 64,000 tokens
Modalities: image, text

OpenAI: GPT-4.1

Provider: openai
Context: 1,048k
Input $/Mtok: $2.00
Output $/Mtok: $8.00
Max output: —
Modalities: image, text, file

Price delta

On input, Claude Haiku 4.5 is $1.00/Mtok cheaper than GPT-4.1 ($1.00 vs $2.00). On output, it is $3.00/Mtok cheaper ($5.00 vs $8.00).
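To see what these deltas mean for a real bill, here is a minimal cost estimator using the rates listed above. The model keys and traffic numbers are illustrative assumptions, not part of the page.

```python
# $ per million tokens (input, output), from the pricing table above.
PRICES = {
    "claude-haiku-4.5": (1.00, 5.00),
    "gpt-4.1": (2.00, 8.00),
}

def monthly_cost(model: str, in_tok: int, out_tok: int) -> float:
    """Dollars for a month's traffic, given input/output token totals."""
    pin, pout = PRICES[model]
    return (in_tok * pin + out_tok * pout) / 1_000_000

# Example: 50M input + 10M output tokens per month.
print(monthly_cost("claude-haiku-4.5", 50_000_000, 10_000_000))  # 100.0
print(monthly_cost("gpt-4.1", 50_000_000, 10_000_000))           # 180.0
```

At this traffic mix the gap is $80/month; because output is priced higher on both models, output-heavy workloads widen it further.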

Which to pick

Pick **Claude Haiku 4.5** for cost-sensitive workloads where Anthropic's tone and refusal behaviour are familiar to the team. At $1.00 per Mtok of input it is half the price of GPT-4.1, and its 200k context is plenty for anything short of full-codebase ingestion. Pick **GPT-4.1** when you need its 1M-token context window — pasting an entire repo, a long board pack, or hours of meeting transcripts in one turn. Output quality is competitive on most short tasks; the deciding factor is usually "do I need more than 200k tokens of context here?" If the answer is no, Haiku wins on price.
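The "does it fit in 200k?" question can be ballparked before choosing a model. The sketch below uses the common ~4 characters-per-token heuristic for English text and code; real tokenizer counts vary, so treat the function and its headroom parameter as assumptions for illustration.

```python
def fits_in_context(total_chars: int, window_tokens: int,
                    chars_per_token: float = 4.0,
                    headroom: float = 0.8) -> bool:
    """True if the text likely fits the window, reserving 20% of the
    window as headroom for instructions and the model's reply."""
    est_tokens = total_chars / chars_per_token
    return est_tokens <= window_tokens * headroom

small_repo_chars = 500_000  # ~500 KB of source
print(fits_in_context(small_repo_chars, 200_000))  # True: ~125K tokens
print(fits_in_context(5_000_000, 200_000))         # False: ~1.25M tokens
```

If the estimate lands near the limit, count real tokens with the provider's tokenizer before committing.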
Data last verified 1 hour ago. Sources aggregated hourly to weekly. See docs/architecture/model-directory.md.