# Agents
An agent is something the chat can call. LLMs (Claude, GPT, Gemini) and external tools (GitHub, Notion, custom MCPs) all show up under the same @mention syntax.
## Mental model
Multi-participant chat assumes silence is the default. Switchy doesn't auto-respond to every message — that would burn credits and clutter the thread. An agent only acts when someone names it with @. This keeps a Space's chat readable and makes it cheap to keep open.
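The silence-by-default rule amounts to a simple gate: nothing runs unless the message names a known agent. A minimal sketch of that gate — the regex, the registry contents, and the function names are illustrative assumptions, not Switchy's actual parser:

```python
import re

# Hypothetical agent registry; the real list comes from the Space's config.
KNOWN_AGENTS = {"claude", "gpt-4o", "gemini", "github", "notion"}

def mentioned_agents(message: str) -> list[str]:
    """Return the known agents named with @ in a message, in mention order."""
    found = []
    for handle in re.findall(r"@([\w-]+)", message):
        if handle in KNOWN_AGENTS and handle not in found:
            found.append(handle)
    return found

def should_respond(message: str) -> bool:
    # Silence is the default: no @mention means no agent run and no credits spent.
    return bool(mentioned_agents(message))
```

A plain message with no mention falls straight through; only an explicit @handle triggers work.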
## Built-in LLM agents
- @claude — Anthropic Claude Sonnet 4.5 by default. Aliases: @sonnet, @opus, @haiku.
- @gpt-5, @gpt-4o, @gpt-4o-mini — OpenAI.
- @gemini, @gemini-pro — Google.
- @ai — uses whatever model the session is set to default to, picked from a dropdown above the composer.
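The aliases above behave like a lookup table from handle to model, with @ai falling back to the session's default. A rough sketch — the model ID strings here are invented for illustration, not Switchy's real identifiers:

```python
# Hypothetical alias table; the model ID strings are illustrative guesses.
ALIASES = {
    "claude": "claude-sonnet-4.5",
    "sonnet": "claude-sonnet-4.5",
    "opus": "claude-opus",
    "haiku": "claude-haiku",
    "gpt-5": "gpt-5",
    "gpt-4o": "gpt-4o",
    "gpt-4o-mini": "gpt-4o-mini",
    "gemini": "gemini-pro",
    "gemini-pro": "gemini-pro",
}

def resolve_model(handle: str, session_default: str) -> str:
    """Map an @mention handle to a model; @ai defers to the session default."""
    if handle == "ai":
        return session_default
    return ALIASES[handle]
```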
## External tool agents (MCPs)
Connect a tool from Settings → MCP, bind it to a Space, then teammates can mention it the same way: @github, @notion, @your-custom-slug.
- Org admins manage the connector list. Space admins (Project OWNER/EDITOR or org OWNER/ADMIN) toggle which connectors are usable per Space.
- Mentioning an MCP that isn't bound to the current Space does nothing — the AI ignores it. The autocomplete only suggests bound MCPs.
- An MCP that's currently Offline still shows up in the menu, ghosted. You can mention it; the call fails politely and the AI replies with that context.
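Taken together, the rules above reduce to a small decision: is the connector bound to this Space, and is it online? A sketch of that routing, assuming hypothetical status strings and return values:

```python
def route_mcp_mention(slug: str, bound_slugs: set, statuses: dict) -> str:
    """Decide what happens when a message mentions an MCP connector.

    Returns one of:
      "ignored"       - not bound to this Space; the AI acts as if it wasn't mentioned
      "fail_politely" - bound but offline; the call fails and the AI replies with that context
      "call"          - bound and online; the MCP actually runs
    """
    if slug not in bound_slugs:
        return "ignored"
    if statuses.get(slug) == "offline":
        return "fail_politely"
    return "call"
```

Note that "ignored" never reaches the user as an error — unbound mentions simply do nothing, which is why the autocomplete only suggests bound MCPs.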
## Chaining: combining MCPs and LLMs in one reply
Mention multiple agents in one message and Switchy stitches them together. @github @claude what should I look at? calls @github first (with a search query derived from the message), feeds the truncated result into Claude's system prompt as a fenced JSON block, and Claude replies referencing the issues it just "saw".
- Hard cap of 3 MCP calls per message. Mention more and the rest are dropped.
- Each MCP result is truncated to 50 KB; the total across all results in one message is 150 KB.
- The respond job logs once per trigger as `ai.mcp_preflight`, so operators can tell when chained calls happened.
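The chaining limits above — 3 calls per message, 50 KB per result, 150 KB across all results — can be sketched as a preflight step that trims the mention list and truncates output before it reaches the LLM's prompt. The function names and JSON framing below are illustrative assumptions, not Switchy's internals:

```python
import json

MAX_MCP_CALLS = 3
MAX_RESULT_BYTES = 50 * 1024    # per-result cap
MAX_TOTAL_BYTES = 150 * 1024    # shared budget across one message

def run_chained_mcps(mcp_mentions, call_mcp):
    """Call at most MAX_MCP_CALLS MCPs; extra mentions are silently dropped."""
    results, total = [], 0
    for slug in mcp_mentions[:MAX_MCP_CALLS]:
        raw = call_mcp(slug)[:MAX_RESULT_BYTES]      # per-result truncation
        raw = raw[:max(0, MAX_TOTAL_BYTES - total)]  # respect the shared budget
        total += len(raw)
        results.append({"mcp": slug, "result": raw})
    return results

FENCE = "`" * 3  # built dynamically so this doc's own code fence stays intact

def system_prompt_block(results):
    """Frame the truncated MCP output as a fenced JSON block for the LLM."""
    return FENCE + "json\n" + json.dumps(results, indent=2) + "\n" + FENCE
```

With the per-result cap at 50 KB and at most 3 calls, the 150 KB total acts as a backstop rather than a limit you'd hit in normal use.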
## What this isn't
Agents aren't background workers. They don't poll for "something to do". They run only in reaction to a human mention; that mention is the entire trigger. (This is the A6 invariant — see the architecture docs.)
Agents aren't configured per-message. You don't set temperature, system prompt, or tools per call from chat. The session has a default model, and the agent always has Switchy's memory and the team's MCPs available.
## Next
- MCP — the protocol behind external agents.
- Connect external tools — set up GitHub, Notion, or a custom MCP.