# Inception: Mercury 2

Provider: inception  
Category: llm  
Model ID: `inception/mercury-2`

Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving much higher throughput than token-by-token autoregressive decoding allows.
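A minimal request sketch, assuming the model is served behind an OpenAI-compatible chat-completions endpoint; the field names and endpoint shape are assumptions, not confirmed by this page. Only the model ID and the output-token cap come from the spec below.

```python
import json

# Hypothetical request body for an OpenAI-compatible chat endpoint.
# The "messages" / "max_tokens" field names are assumptions; only the
# model ID "inception/mercury-2" is taken from this page.
payload = {
    "model": "inception/mercury-2",
    "messages": [
        {"role": "user",
         "content": "Summarize diffusion LLM decoding in one sentence."}
    ],
    # Stay well under the documented 50,000-token output cap.
    "max_tokens": 1024,
}

print(json.dumps(payload, indent=2))
```

Send this body to whatever gateway you use (plus your auth header); the page itself does not document a base URL.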

## Specs

- Context length: 128,000 tokens
- Max output: 50,000 tokens
- Modalities: text
- Released: 2026-03-04

## Pricing

- Input: $0.25 per million tokens
- Output: $0.75 per million tokens
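The per-request cost follows directly from these rates. A small sketch of the arithmetic (the example token counts are illustrative, not from this page):

```python
# Rates listed above: $0.25 per million input tokens,
# $0.75 per million output tokens.
INPUT_RATE = 0.25 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.75 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10,000-token prompt with a 2,000-token completion:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0040
```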

## Providers

- **inception** — ctx 128000, input $0.25/M, output $0.75/M

---
Last verified: 2026-04-23T23:46:29.618Z  
Canonical URL: https://switchy.build/models/mercury-2