Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, tuned for chat and instruction following. It incorporates 8 experts (feed-forward networks) for roughly 47 billion total parameters, of which only about 13 billion are active for any given token.
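The sparse mixture-of-experts design is why the total and active parameter counts differ: a learned router sends each token through only 2 of the 8 expert feed-forward networks, so most weights sit idle on any given forward pass. A minimal sketch of that top-2 routing step, with toy shapes and plain functions standing in for the real experts (illustrative only, not Mistral's implementation):

```python
import numpy as np

def top2_moe_layer(x, experts, router_w):
    """Route one token through the 2 highest-scoring of 8 expert FFNs.

    x:        (d_model,) token activation
    experts:  list of 8 callables, each standing in for a feed-forward net
    router_w: (d_model, 8) router/gating weights
    """
    logits = x @ router_w           # score each expert for this token
    top2 = np.argsort(logits)[-2:]  # keep only the best 2 of 8
    gates = np.exp(logits[top2])
    gates /= gates.sum()            # softmax over the chosen experts
    # Only 2 expert FFNs actually run: ~13B of ~47B params "active" per token.
    return sum(g * experts[i](x) for g, i in zip(gates, top2))

rng = np.random.default_rng(0)
d = 16
experts = [(lambda W: (lambda x: np.tanh(x @ W)))(rng.normal(size=(d, d)))
           for _ in range(8)]
router_w = rng.normal(size=(d, 8))
y = top2_moe_layer(rng.normal(size=d), experts, router_w)
print(y.shape)  # (16,)
```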
Anyone in the Space can @-mention Mistral: Mixtral 8x7B Instruct using the team's shared context: pooled credits, one chat, one memory.
Starter is free forever: 1 Space, 100 credits/month, 1 MCP. No card required.
Specifications
- Provider: mistralai
- Category: llm
- Context length: 32,768 tokens (see the budgeting sketch after this list)
- Max output: 16,384 tokens
- Modalities: text
- License: proprietary
- Released: 2023-12-10
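The two token limits interact: prompt tokens plus requested output must fit inside the 32,768-token window, and generation is capped at 16,384 tokens regardless of remaining room. A minimal sketch of that budgeting arithmetic; the 4-characters-per-token estimate is a rough heuristic, not Mistral's tokenizer:

```python
CONTEXT_WINDOW = 32_768   # total tokens the model can attend to
MAX_OUTPUT = 16_384       # hard cap on generated tokens

def max_completion_tokens(prompt: str) -> int:
    """Largest max_tokens value that still fits the context window.

    Uses a crude ~4 characters/token estimate; swap in a real
    tokenizer for accurate counts.
    """
    est_prompt_tokens = len(prompt) // 4 + 1
    remaining = CONTEXT_WINDOW - est_prompt_tokens
    return max(0, min(remaining, MAX_OUTPUT))

print(max_completion_tokens("Summarize this report: ..."))
```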
Pricing
- Input: $0.54/Mtok
- Output: $0.54/Mtok
- Model ID: mistralai/mixtral-8x7b-instruct (request sketch below)
Per-token prices show what the model costs upstream. On Switchy, your team draws from one shared org credit pool: one plan, one balance for everyone.
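Model IDs in this provider/model form are commonly passed straight into an OpenAI-compatible chat completions call. A sketch under that assumption; the base URL is a placeholder and Switchy's actual endpoint and auth may differ, so treat this as illustrative:

```python
from openai import OpenAI

# Hypothetical endpoint: assumes an OpenAI-compatible API surface.
client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder, not a real Switchy URL
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",  # the Model ID above
    max_tokens=512,  # must respect the 16,384-token output cap
    messages=[{"role": "user",
               "content": "Explain mixture-of-experts in one paragraph."}],
)
print(resp.choices[0].message.content)
```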
Team cost calculator
Example inputs: 5 seats · 80 msgs/day. Switchy meters this usage against your org's shared credit pool.
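Reproducing the calculator's arithmetic makes the per-Mtok prices concrete. A sketch using the example inputs above; the average prompt and reply sizes are illustrative assumptions, not Switchy defaults:

```python
INPUT_PER_MTOK = 0.54    # USD per million input tokens
OUTPUT_PER_MTOK = 0.54   # USD per million output tokens

def monthly_cost(seats=5, msgs_per_day=80, days=30,
                 avg_in_tokens=200, avg_out_tokens=300):
    """Estimate upstream token cost for a team on this model.

    avg_in_tokens / avg_out_tokens are illustrative assumptions.
    """
    msgs = seats * msgs_per_day * days
    usd = (msgs * avg_in_tokens / 1e6) * INPUT_PER_MTOK \
        + (msgs * avg_out_tokens / 1e6) * OUTPUT_PER_MTOK
    return usd

print(f"${monthly_cost():.2f}/month")  # 5 seats x 80 msgs/day ≈ $3.24/month
```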
Providers
| Provider | Context | Input | Output | P50 latency | Throughput | 30d uptime |
|---|---|---|---|---|---|---|
| mistralai | 33k | $0.54/Mtok | $0.54/Mtok | — | — | — |
Performance
Benchmarks
Works well with
Top MCPs
Compatibility data comes from first-party telemetry; once we have enough co-usage signal, top MCPs for this model will appear here.