
Google: Gemma 4 26B A4B

Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Of its 25.2B total parameters, only 3.8B are active per token during inference, delivering near-31B quality at...
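
The "only a few billion parameters active per token" behaviour comes from top-k expert routing: a router scores every expert for each token and only the best-scoring few actually run. A generic sketch of that selection step (not Google's implementation; the scores and `k` here are illustrative):

```python
# Generic top-k MoE routing sketch: pick the k highest-scoring experts
# for one token, so only those experts' parameters are used.
def route(token_scores: list[float], k: int = 2) -> list[int]:
    """Return indices of the k highest-scoring experts for one token."""
    return sorted(range(len(token_scores)), key=lambda i: -token_scores[i])[:k]

scores = [0.1, 0.9, 0.3, 0.7]  # hypothetical router logits for 4 experts
print(route(scores))           # -> [1, 3]
```

With k fixed, compute per token stays roughly constant regardless of how many experts (and hence total parameters) the model has.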

Specifications

Provider: google
Category: llm
Context length: 262,144 tokens
Max output:
Modalities: image, text, video
License: proprietary
Released: 2026-04-03

Pricing

Input: $0.06/Mtok
Output: $0.33/Mtok
Model ID: google/gemma-4-26b-a4b-it

Team cost calculator

Estimated monthly spend: $2.48 (17.6M tokens / month; 5 seats · 80 msgs/day)
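
The estimate above follows from the listed prices and token volume. The input/output token split is not shown on the page; assuming a 70/30 split (an assumption, not a published figure), the arithmetic works out to the displayed $2.48:

```python
# Hypothetical re-derivation of the team cost calculator.
# ASSUMPTION: a 70/30 input/output token split (not stated on the page).
INPUT_PRICE = 0.06   # USD per million input tokens
OUTPUT_PRICE = 0.33  # USD per million output tokens

def monthly_spend(total_mtok: float, input_share: float = 0.7) -> float:
    """Estimate monthly spend in USD from total monthly tokens (in millions)."""
    input_mtok = total_mtok * input_share
    output_mtok = total_mtok - input_mtok
    return input_mtok * INPUT_PRICE + output_mtok * OUTPUT_PRICE

# 5 seats x 80 msgs/day x 30 days = 12,000 messages; the page lists
# 17.6M tokens/month for this workload.
print(round(monthly_spend(17.6), 2))  # -> 2.48
```

A heavier output share raises the estimate quickly, since output tokens cost 5.5x input tokens here.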

Providers

| Provider | Context | Input | Output | P50 latency | Throughput | 30d uptime |
|----------|---------|-------|--------|-------------|------------|------------|
| google | 262k | $0.06/Mtok | $0.33/Mtok | | | |

Performance

Performance snapshots are collected daily. Check back after the next ingestion run.

Benchmarks

Public benchmark scores are not available yet for this model. Check back after the next ingestion run.

Works well with

Top MCPs

Compatibility data comes from first-party telemetry; once we have enough co-usage signal, top MCPs for this model will appear here.

How Switchy teams use it

Not enough Spaces have used this model yet to share anonymised team stats. We wait for at least 50 distinct Spaces per week before publishing any aggregate.

Starter prompts

Starter prompts for this model will land here soon.
Data last verified just now. Sources are aggregated on schedules ranging from hourly to weekly. See docs/architecture/model-directory.md.