
Anthropic: Claude Sonnet 4.5

Claude Sonnet 4.5 is Anthropic’s most advanced Sonnet model to date, optimized for real-world agents and coding workflows. It delivers state-of-the-art performance on coding benchmarks such as SWE-bench Verified.

Specifications

Provider
anthropic
Category
llm
Context length
1,000,000 tokens
Max output
64,000 tokens
Modalities
text, image, file
License
LicenseRef-Anthropic-Commercial
Released
2025-09-29

Pricing

Input
$3.00/Mtok
Output
$15.00/Mtok
Model ID
anthropic/claude-sonnet-4.5

Team cost calculator

Estimated monthly spend
$116.16
17.6M tokens / month
5 seats · 80 msgs/day
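The estimate above can be reproduced directly from the listed prices. This is a minimal sketch, assuming the calculator applies the 30% output ratio mentioned in the verdict below (the split is not stated in the calculator itself):

```python
# Reproduce the calculator's estimate: 17.6M tokens/month at the
# listed prices, assuming 30% of tokens are output (an assumption).
INPUT_PRICE = 3.00    # $ per million input tokens
OUTPUT_PRICE = 15.00  # $ per million output tokens

def monthly_spend(total_mtok: float, output_ratio: float = 0.30) -> float:
    """Blended monthly cost in dollars for a given token volume (in Mtok)."""
    input_mtok = total_mtok * (1 - output_ratio)
    output_mtok = total_mtok * output_ratio
    return input_mtok * INPUT_PRICE + output_mtok * OUTPUT_PRICE

print(round(monthly_spend(17.6), 2))  # 116.16
```

With a 70/30 split, 17.6M tokens breaks down as 12.32 Mtok in ($36.96) plus 5.28 Mtok out ($79.20), which matches the $116.16 shown.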

Verdict

Claude Sonnet 4.5 is the default model on Switchy because it wins the calm middle of the curve: strong coding, strong reasoning, not the cheapest token but never the expensive mistake. What we actually notice in Spaces: Sonnet doesn't need spoon-feeding. Hand it a three-file diff and it finds the failure mode; hand it a messy PRD and it turns it into a plan you can ship. It holds 200k tokens of context without getting wobbly past 100k the way some competitors do.

Best for: code review and bug investigation; long PRD synthesis; legal and compliance drafting where "close enough" isn't; architecture discussions where being wrong has a blast radius; any task where you'd rather pay a bit more and not re-ask the question.

Avoid for: throwaway one-liners (Haiku is cheaper and fast enough); high-volume classification where you're running millions of calls (use the cheapest thing that clears the bar).

Pricing frame: at $3/Mtok in and $15/Mtok out, a 5-person team running 200 messages a day with a 30% output ratio lands around $95-110/month. That's roughly the cost of a single Pro seat, but for the whole team, not per seat.
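The $95-110 range in the pricing frame can be sanity-checked by backing out what it implies per message. This is a sketch under stated assumptions: 200 messages/day is team-wide (not per seat), a 30-day month, and a 30% output ratio:

```python
# Back out the implied blended tokens-per-message from the verdict's
# pricing frame. Assumptions: 200 msgs/day team-wide, 30-day month,
# 30% of tokens are output.
INPUT_PRICE, OUTPUT_PRICE = 3.00, 15.00  # $ per million tokens
OUTPUT_RATIO = 0.30
MSGS_PER_MONTH = 200 * 30

# Blended price per million tokens at a 70/30 input/output split: $6.60.
blended = (1 - OUTPUT_RATIO) * INPUT_PRICE + OUTPUT_RATIO * OUTPUT_PRICE

def tokens_per_msg(monthly_dollars: float) -> float:
    """Total tokens per message implied by a monthly spend target."""
    total_mtok = monthly_dollars / blended
    return total_mtok * 1_000_000 / MSGS_PER_MONTH

low, high = tokens_per_msg(95), tokens_per_msg(110)
print(f"{low:.0f}-{high:.0f} tokens per message")
```

Under these assumptions the quoted range corresponds to roughly 2,400-2,800 blended tokens per message, which is plausible for code-review-sized exchanges.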

Providers

| Provider | Context | Input | Output | P50 latency | Throughput | 30d uptime |
|---|---|---|---|---|---|---|
| anthropic | 1000k | $3.00/Mtok | $15.00/Mtok | n/a | n/a | n/a |

Performance

Performance snapshots are collected daily. Check back after the next ingestion run.

Benchmarks

Public benchmark scores are not available yet for this model. Check back after the next ingestion run.

Works well with

Top MCPs

How Switchy teams use it

Not enough Spaces have used this model yet to share anonymised team stats. We wait for at least 50 distinct Spaces per week before publishing any aggregate.

Starter prompts

Review a diff

Paste a unified diff and ask:

> Review this diff the way a senior engineer would. Flag bugs, risky
> logic, and anything that's going to confuse a reader six months from
> now. Don't comment on style — I have a linter for that.

Turn a PRD into a plan

> Here's a PRD. Break it into implementation phases. For each phase,
> list the files that will change, the database migrations, and the
> first test I should write. Call out anything in the PRD that's
> underspecified.

Write the commit message I should have written

> Here's the diff for my next commit. Write a commit message that
> follows the format I see in the last 5 commits in this repo. Lead
> with the why, not the what.

Reverse-engineer a schema

> I'm looking at this Prisma schema for a codebase I'm new to. Explain
> what each model is for, the invariants, and the 3-4 queries you'd
> expect the app to run most often.

Plan a refactor

> This file has grown to 800 lines and I hate touching it. Propose 3
> refactor options: surgical, middle-ground, and ambitious. For each,
> tell me what breaks and what it buys me.
Data last verified just now. Sources aggregated hourly to weekly. See docs/architecture/model-directory.md.