DeepSeek: R1 Distill Qwen 32B
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks.
Specifications
- Provider: deepseek
- Category: llm
- Context length: 32,768 tokens
- Max output: 32,768 tokens
- Modalities: text
- License: proprietary
- Released: 2025-01-29
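The context length above bounds how long a single request can be. As a minimal sketch, assuming the prompt and the completion share the 32,768-token window (whether output tokens count against the same window varies by provider), the remaining completion budget can be computed like this:

```python
# Token-budget check for this model's 32,768-token context window.
# Assumption: prompt tokens and completion tokens share one window.
CONTEXT_LENGTH = 32_768

def max_completion_tokens(prompt_tokens: int) -> int:
    """Tokens left for the completion after the prompt, floored at 0."""
    return max(0, CONTEXT_LENGTH - prompt_tokens)

print(max_completion_tokens(30_000))  # → 2768
```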
Pricing
- Input: $0.29/Mtok
- Output: $0.29/Mtok
- Model ID: deepseek/deepseek-r1-distill-qwen-32b
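The model ID is what you pass in an API request. As a hedged sketch, assuming an OpenAI-compatible chat-completions endpoint (the endpoint and auth scheme are not documented on this page; only the model ID comes from it), a request body would look like:

```python
import json

# Request payload for an OpenAI-compatible chat-completions call.
# Only the "model" value is taken from this page; the message and
# max_tokens are illustrative.
payload = {
    "model": "deepseek/deepseek-r1-distill-qwen-32b",
    "messages": [{"role": "user", "content": "Summarise this paragraph."}],
    "max_tokens": 1024,  # well under the 32,768-token output cap
}
print(json.dumps(payload, indent=2))
```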
Team cost calculator
Estimated monthly spend: $5.10 (17.6M tokens/month · 5 seats · 80 msgs/day)
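Because input and output are both priced at $0.29/Mtok, the calculator's estimate reduces to a flat per-token rate. A minimal sketch reproducing the figure above (the token/seat assumptions behind the widget's 17.6M-token estimate are its own; only the price and total come from this page):

```python
# Flat-rate monthly cost: input and output both cost $0.29/Mtok,
# so the input/output split does not matter for the total.
PRICE_PER_MTOK = 0.29  # USD per million tokens

def monthly_cost(total_tokens: int) -> float:
    """Cost in USD for a month's total token usage."""
    return total_tokens / 1_000_000 * PRICE_PER_MTOK

# The page's example: 17.6M tokens/month
print(round(monthly_cost(17_600_000), 2))  # → 5.1
```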
Providers
| Provider | Context | Input | Output | P50 latency | Throughput | 30d uptime |
|---|---|---|---|---|---|---|
| deepseek | 33k | $0.29/Mtok | $0.29/Mtok | — | — | — |
Performance
Performance snapshots are collected daily. Check back after the next ingestion run.
Benchmarks
Works well with
Top MCPs
Compatibility data comes from first-party telemetry; once we have enough co-usage signal, top MCPs for this model will appear here.
How Switchy teams use it
Not enough Spaces have used this model yet to share anonymised team stats. We wait for at least 50 distinct Spaces per week before publishing any aggregate.