LiquidAI: LFM2.5-1.2B-Thinking (free)
LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks, data extraction, and RAG—while still running comfortably on edge devices. It supports long context (up to 32K tokens) and is...
Anyone in the Space can @-mention LiquidAI: LFM2.5-1.2B-Thinking (free) with the team's shared context — pooled credits, one chat, one memory.
Starter is free forever: 1 Space, 100 credits/month, 1 MCP. No card required.
Specifications
- Provider: liquid
- Category: llm
- Context length: 32,768 tokens
- Max output: —
- Modalities: text
- License: proprietary
- Released: 2026-01-20
Pricing
- Input: $0.00/Mtok
- Output: $0.00/Mtok
- Model ID: `liquid/lfm-2.5-1.2b-thinking:free`
Per-token prices show what the model costs upstream. On Switchy your team draws from one shared org credit pool — one plan, one balance for everyone.
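Assuming Switchy exposes an OpenAI-compatible chat completions endpoint (an assumption: the URL, helper name, and parameters below are illustrative, not Switchy's documented API), a request would reference the model ID exactly as listed above:

```python
import json

# Sketch only: the endpoint URL and request parameters are illustrative
# assumptions; Switchy's actual API details may differ.
API_URL = "https://api.switchy.example/v1/chat/completions"
MODEL_ID = "liquid/lfm-2.5-1.2b-thinking:free"  # from the listing above

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completions payload for this model."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Extract the vendor name from this invoice: ...")
print(json.dumps(payload, indent=2))
```

Because the model is metered at $0.00/Mtok upstream, calls like this draw only platform credits from the shared pool, not per-token charges.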
Providers
| Provider | Context | Input | Output | P50 latency | Throughput | 30d uptime |
|---|---|---|---|---|---|---|
| liquid | 33k | $0.00/Mtok | $0.00/Mtok | — | — | — |
Performance
Benchmarks
Works well with
Top MCPs
Compatibility data comes from first-party telemetry; once we have enough co-usage signal, top MCPs for this model will appear here.