
Meta: Llama 4 Scout

Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17B of its 109B total parameters per token. It supports native multimodal input across text and images.

Specifications

Provider
meta-llama
Category
llm
Context length
327,680 tokens
Max output
16,384 tokens
Modalities
text, image
License
Llama 4 Community License (proprietary)
Released
2025-04-05

Pricing

Input
$0.08/Mtok
Output
$0.30/Mtok
Model ID
meta-llama/llama-4-scout
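
A minimal sketch of calling the model by this ID, assuming an OpenAI-compatible chat completions endpoint. The base URL and the SWITCHY_API_KEY variable are illustrative placeholders, not documented values.

```typescript
// Hypothetical request against an OpenAI-compatible endpoint; only the model
// ID and the max-output figure come from this page.
const response = await fetch("https://api.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.SWITCHY_API_KEY}`, // placeholder env var
  },
  body: JSON.stringify({
    model: "meta-llama/llama-4-scout",
    max_tokens: 16_384, // the model's max output, per the Specifications above
    messages: [{ role: "user", content: "Summarize this changelog in 3 bullets." }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```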

Team cost calculator

Estimated monthly spend
$2.57
17.6M tokens / month
5 seats · 80 msgs/day
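
The displayed figures can be reproduced under one plausible set of assumptions. The seat count, message volume, and per-Mtok prices come from this page; the 22 working days per month, 2,000 tokens per message, and 70/30 input/output split are guesses that happen to match the $2.57 and 17.6M numbers, not documented calculator parameters.

```typescript
// Reconstruction of the calculator's arithmetic under assumed parameters.
const seats = 5;                  // from this page
const msgsPerSeatPerDay = 80;     // from this page
const workingDaysPerMonth = 22;   // assumption
const tokensPerMsg = 2_000;       // assumption
const inputShare = 0.7;           // assumption: 70% input / 30% output

const inputPricePerMtok = 0.08;   // from the Pricing section
const outputPricePerMtok = 0.30;  // from the Pricing section

const monthlyTokens = seats * msgsPerSeatPerDay * workingDaysPerMonth * tokensPerMsg;
const inputMtok = (monthlyTokens * inputShare) / 1_000_000;
const outputMtok = (monthlyTokens * (1 - inputShare)) / 1_000_000;
const monthlyCost = inputMtok * inputPricePerMtok + outputMtok * outputPricePerMtok;

console.log(`${(monthlyTokens / 1_000_000).toFixed(1)}M tokens / month`); // 17.6M
console.log(`$${monthlyCost.toFixed(2)} / month`);                       // $2.57
```

With these inputs: 5 × 80 × 22 × 2,000 = 17.6M tokens, split into 12.32M input ($0.99) and 5.28M output ($1.58), totaling $2.57.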

Providers

Provider     Context   Input        Output       P50 latency   Throughput   30d uptime
meta-llama   328k      $0.08/Mtok   $0.30/Mtok   n/a           n/a          n/a

Performance

Performance snapshots are collected daily. Check back after the next ingestion run.

Benchmarks

Public benchmark scores are not available yet for this model. Check back after the next ingestion run.

Works well with

Top MCPs

Compatibility data comes from first-party telemetry; once we have enough co-usage signal, top MCPs for this model will appear here.

How Switchy teams use it

Not enough Spaces have used this model yet to share anonymised team stats. We wait for at least 50 distinct Spaces per week before publishing any aggregate.

Starter prompts

Starter prompts for this model will land here soon.
Data last verified just now. Sources are aggregated on schedules ranging from hourly to weekly. See docs/architecture/model-directory.md.