Anthropic: Claude Sonnet 4.5 vs OpenAI: GPT-4.1

Side-by-side specs, pricing, and benchmarks. Pick a winner for your team's use case.

Use it in a Space

Spin up a Switchy Space with either model — your whole team @-mentions it with shared context, pooled credits, one memory.

| Spec | Anthropic: Claude Sonnet 4.5 | OpenAI: GPT-4.1 |
| --- | --- | --- |
| Provider | anthropic | openai |
| Context | 200k | 1048k |
| Input $/Mtok | $3.00 | $2.00 |
| Output $/Mtok | $15.00 | $8.00 |
| Max output | 64000 | 32768 |
| Modalities | text, image, file | text, image, file |

Price delta

On input, Claude Sonnet 4.5 is $1.00/Mtok more expensive than GPT-4.1 ($3.00 vs $2.00). On output, it is $7.00/Mtok more expensive ($15.00 vs $8.00).
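Per-Mtok deltas only matter in proportion to how many tokens a request actually uses. A minimal sketch of per-request cost at the listed rates (the token counts in the example are illustrative assumptions, not measured usage):

```python
# Listed per-Mtok prices from the spec table above.
PRICES = {
    "claude-sonnet-4.5": {"in": 3.00, "out": 15.00},
    "gpt-4.1":           {"in": 2.00, "out": 8.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["in"] + output_tokens * p["out"]) / 1_000_000

# Illustrative: a 4k-token prompt with a 1k-token reply.
for model in PRICES:
    print(model, request_cost(model, 4_000, 1_000))
```

At that shape the gap is $0.027 vs $0.016 per request; the output-price difference dominates, which is why verbosity matters more than the $1 input delta.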

Which to pick

Pick **Claude Sonnet 4.5** for everyday team chat where reasoning quality and tone consistency matter — drafting longer-form output, reviewing code in conversation, debating decisions with a teammate. Anthropic's mid-tier still edges GPT-4.1 on most general-purpose evals and refusal behaviour, and the price ($3 in / $15 out per Mtok) is in the same neighbourhood as GPT-4.1 ($2 / $8) once you account for output verbosity.

Pick **GPT-4.1** when you need its 1M-token context — full-repo ingestion, long board packs, hours of transcripts in one turn — or when you specifically want OpenAI's tool-calling shape for an existing pipeline. The 5x context advantage is the deciding factor; on shorter turns the per-Mtok savings are real but small.
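A quick way to act on the context difference is to estimate whether a corpus fits in one turn. A minimal sketch, assuming a 200k window for Sonnet 4.5 and 1M for GPT-4.1, and a crude ~4 characters/token heuristic rather than a real tokenizer:

```python
# Assumed context windows (tokens) for the two models compared on this page.
WINDOWS = {"claude-sonnet-4.5": 200_000, "gpt-4.1": 1_000_000}

def approx_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English prose."""
    return len(text) // 4

def fits(model: str, text: str, reserve: int = 8_000) -> bool:
    """True if the text fits the window with `reserve` tokens left for the reply."""
    return approx_tokens(text) + reserve <= WINDOWS[model]

corpus = "x" * 2_000_000  # stand-in for ~500k tokens of transcripts
print({model: fits(model, corpus) for model in WINDOWS})
```

A ~500k-token corpus overflows the 200k window but fits GPT-4.1 with room to spare — the "full-repo ingestion" case in a single check.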
Data last verified 15 hours ago. Sources aggregated hourly to weekly. See docs/architecture/model-directory.md.