# Qwen: Qwen-Max 

Provider: qwen  
Category: llm  
Model ID: `qwen/qwen-max`

Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially on complex multi-step tasks. It is a large-scale MoE model pretrained on over 20 trillion tokens.

## Specs

- Context length: 32768 tokens
- Max output: 8192 tokens
- Modalities: text
- Released: 2025-02-01
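As a sketch of how the model ID and output limit above fit together, the snippet below builds a chat-completions request payload. It assumes an OpenAI-compatible endpoint; the `build_request` helper and its field names are illustrative, not part of this page's spec.

```python
# Maximum output tokens for qwen/qwen-max, per the Specs section above.
MAX_OUTPUT_TOKENS = 8192

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat-completions payload (hypothetical
    helper), capping the requested output at the model's limit."""
    return {
        "model": "qwen/qwen-max",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, MAX_OUTPUT_TOKENS),
    }
```

Capping `max_tokens` client-side avoids a rejected request when a caller asks for more output than the model supports.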

## Pricing

- Input: $1.04 per million tokens
- Output: $4.16 per million tokens
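The per-million rates above translate to per-request cost as follows; `estimate_cost` is an illustrative helper using the listed prices.

```python
# Listed rates for qwen/qwen-max (USD per million tokens).
INPUT_PRICE_PER_M = 1.04
OUTPUT_PRICE_PER_M = 4.16

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-million rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
```

For example, a request with 10,000 input tokens and 2,000 output tokens costs about $0.019 at these rates.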

## Providers

- **qwen** — ctx 32768, input $1.04/M, output $4.16/M

---
Last verified: 2026-04-23T23:46:29.618Z  
Canonical URL: https://switchy.build/models/qwen-max