
LiquidAI: LFM2-24B-A2B

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. It is built as a 24B-parameter Mixture-of-Experts model with only 2B active parameters per token.

Anyone in the Space can @-mention LiquidAI: LFM2-24B-A2B with the team's shared context — pooled credits, one chat, one memory.



Specifications

Provider: liquid
Category: llm
Context length: 32,768 tokens
Max output:
Modalities: text
License: proprietary
Released: 2026-02-25

Pricing

Input: $0.03/Mtok
Output: $0.12/Mtok
Model ID: liquid/lfm-2-24b-a2b
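The specifications above pin down two integration-relevant numbers: the model ID and the 32,768-token context window. A minimal sketch of budgeting a prompt against that window, assuming an OpenAI-style chat payload and a rough 4-characters-per-token heuristic (both are assumptions; this page does not document Switchy's actual API shape):

```python
# Sketch: fit a prompt into LFM2-24B-A2B's 32,768-token context window.
# Assumptions (not from this page): OpenAI-style payload shape, and a
# crude ~4 characters-per-token estimate; use a real tokenizer in practice.
CONTEXT_LENGTH = 32_768
MODEL_ID = "liquid/lfm-2-24b-a2b"
CHARS_PER_TOKEN = 4

def build_payload(prompt: str, max_output_tokens: int = 1_024) -> dict:
    # Reserve room for the reply, then truncate the prompt to what's left.
    budget_tokens = CONTEXT_LENGTH - max_output_tokens
    prompt = prompt[: budget_tokens * CHARS_PER_TOKEN]
    return {
        "model": MODEL_ID,
        "max_tokens": max_output_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_payload("Summarise this document: " + "x" * 200_000)
print(len(payload["messages"][0]["content"]))  # 126976 (= (32768 - 1024) * 4)
```

The truncation step is the part that matters: anything past the reserved output budget is silently dropped here, whereas a production client would more likely chunk or summarise the overflow.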

Per-token prices show what the model costs upstream. On Switchy your team draws from one shared org credit pool — one plan, one balance for everyone.

Team cost calculator

Estimated monthly spend: $1.00
Usage: 17.6M tokens/month
Team: 5 seats · 80 msgs/day

Switchy meters this usage against your org's shared credit pool.
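The estimate above follows from simple arithmetic on the listed per-token prices. A sketch, assuming a roughly 70/30 input/output token split (an assumption chosen so the result matches the $1.00 figure; the calculator's real split is not shown on this page):

```python
# Sketch of the cost estimate above, using the page's per-token prices:
# $0.03/Mtok input, $0.12/Mtok output, 17.6M tokens/month
# (5 seats at 80 msgs/day). The 70/30 input/output split is an
# assumption picked to reproduce the $1.00 estimate.
INPUT_PER_MTOK = 0.03
OUTPUT_PER_MTOK = 0.12

def monthly_spend(tokens_per_month: float, input_share: float = 0.70) -> float:
    mtok = tokens_per_month / 1e6
    return (mtok * input_share * INPUT_PER_MTOK
            + mtok * (1 - input_share) * OUTPUT_PER_MTOK)

print(f"${monthly_spend(17.6e6):.2f}")  # $1.00
```

Because the blended rate ($0.057/Mtok at a 70/30 split) sits much closer to the input price than the output price, estimates like this are dominated by how chat-heavy (long prompts, short replies) the team's usage is.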

Providers

Provider | Context | Input      | Output     | P50 latency | Throughput | 30d uptime
liquid   | 33k     | $0.03/Mtok | $0.12/Mtok |             |            |

Performance

Performance snapshots are collected daily. Check back after the next ingestion run.

Benchmarks

Public benchmark scores are not available yet for this model. Check back after the next ingestion run.

Works well with

Top MCPs

Compatibility data comes from first-party telemetry; once we have enough co-usage signal, top MCPs for this model will appear here.

How Switchy teams use it

Not enough Spaces have used this model yet to share anonymised team stats. We wait for at least 50 distinct Spaces per week before publishing any aggregate.

Starter prompts

Starter prompts for this model will land here soon.
Data last verified 14 hours ago. Sources aggregated hourly to weekly. See docs/architecture/model-directory.md.