Qwen: Qwen VL Max
Qwen VL Max is a visual understanding model with a 131,072-token context window. It delivers strong performance across a broad range of complex vision-language tasks.
Anyone in the Space can @-mention Qwen: Qwen VL Max with the team's shared context — pooled credits, one chat, one memory.
Starter is free forever — 1 Space, 100 credits/month, 1 MCP. No card.
Specifications
- Provider: qwen
- Category: llm
- Context length: 131,072 tokens
- Max output: 32,768 tokens
- Modalities: text, image
- License: proprietary
- Released: 2025-02-01
Pricing
- Input: $0.52/Mtok
- Output: $2.08/Mtok
- Model ID: qwen/qwen-vl-max
Per-token prices show what the model costs upstream. On Switchy your team draws from one shared org credit pool — one plan, one balance for everyone.
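The per-Mtok rates above translate to request-level costs in a straightforward way. A minimal sketch (the token counts in the example are illustrative, not from the page):

```python
# Per-request upstream cost at Qwen VL Max's listed rates.
INPUT_PER_MTOK = 0.52   # USD per million input tokens
OUTPUT_PER_MTOK = 2.08  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Upstream cost in USD for a single request."""
    return (input_tokens * INPUT_PER_MTOK
            + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

# e.g. a vision prompt with 10,000 input tokens and a 1,000-token reply:
print(f"${request_cost(10_000, 1_000):.5f}")  # → $0.00728
```

Output tokens cost exactly 4x input tokens at these rates, so reply length dominates the bill for long generations.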
Team cost calculator
Estimated monthly spend
$17.39
17.6M tokens / month
5 seats · 80 msgs/day
Switchy meters this usage against your org's shared credit pool.
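The calculator's estimate follows from seats, message rate, and per-message token volume. The sketch below uses the 5-seat, 80-msgs/day inputs shown above; the per-message input/output split is a hypothetical back-fit chosen to land near the page's figures, since the calculator does not publish it:

```python
# Rough team-spend estimator at Qwen VL Max's listed rates.
INPUT_PER_MTOK = 0.52   # USD per million input tokens
OUTPUT_PER_MTOK = 2.08  # USD per million output tokens

def monthly_spend(seats: int, msgs_per_day: int,
                  in_tokens_per_msg: int, out_tokens_per_msg: int,
                  days: int = 30) -> tuple[float, float]:
    """Return (million tokens, USD) for a month of team usage."""
    msgs = seats * msgs_per_day * days
    in_tok = msgs * in_tokens_per_msg
    out_tok = msgs * out_tokens_per_msg
    cost = (in_tok * INPUT_PER_MTOK + out_tok * OUTPUT_PER_MTOK) / 1_000_000
    return (in_tok + out_tok) / 1_000_000, cost

# 5 seats × 80 msgs/day; ~1,027 input / 440 output tokens per message
# is an assumed split, not a published figure.
mtok, usd = monthly_spend(5, 80, 1_027, 440)
print(f"{mtok:.1f}M tokens ≈ ${usd:.2f}/month")  # → 17.6M tokens ≈ $17.39/month
```

Because output tokens are 4x the price of input tokens, the assumed split matters: the same 17.6M total tokens can cost anywhere from about $9.15 (all input) to $36.61 (all output).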
Providers
| Provider | Context | Input | Output | P50 latency | Throughput | 30d uptime |
|---|---|---|---|---|---|---|
| qwen | 131k | $0.52/Mtok | $2.08/Mtok | — | — | — |
Performance
Performance snapshots are collected daily. Check back after the next ingestion run.
Works well with
Top MCPs
Compatibility data comes from first-party telemetry; once we have enough co-usage signal, top MCPs for this model will appear here.
How Switchy teams use it
Not enough Spaces have used this model yet to share anonymised team stats. We wait for at least 50 distinct Spaces per week before publishing any aggregate.