One MCP server instead of eight.
Eight servers push ~200 tool definitions into every context window. Rostr replaces them with four tools and a playbook that learns from your runs.
$ npm install -g rostr-mcp
$ rostr init
Add to .cursor/mcp.json, reload.
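If you add the entry by hand, a `.cursor/mcp.json` entry might look like the following. The `command` and `args` values here are illustrative assumptions; use whatever entry your install of Rostr actually expects:

```json
{
  "mcpServers": {
    "rostr": {
      "command": "rostr",
      "args": ["serve"]
    }
  }
}
```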
Before / after

| | Without Rostr | With Rostr |
|---|---|---|
| MCP servers loaded | 8 | 1 |
| Tool schemas in context | ~200 | 4 |
| Tokens burned on tools | ~87k | ~4k |
| Memory across sessions | none | .mdc playbook |
How it works
Discover
Reads your .cursor/mcp.json and plugin directories. Builds a roster of what's actually connected right now.
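The discovery step amounts to parsing that config. A minimal sketch, assuming the standard `mcpServers` key that Cursor uses; Rostr's real implementation may do more (plugin directories, liveness checks):

```python
import json

def build_roster(mcp_json_text: str) -> list[str]:
    """Parse a .cursor/mcp.json payload into a sorted list of server names.

    Illustrative sketch only: reads the top-level "mcpServers" mapping
    and returns its keys, i.e. the servers configured right now.
    """
    config = json.loads(mcp_json_text)
    return sorted(config.get("mcpServers", {}))
```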
Plan
suggest_plan looks at the goal, your servers, and your run history. Returns a sequence with warnings where things have broken before.
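For illustration, a suggest_plan result might take a shape like this. The field names and values are assumptions for the sketch, not Rostr's published schema:

```json
{
  "goal": "migrate schema and update billing",
  "steps": [
    { "server": "neon", "tool": "create_branch" },
    {
      "server": "neon",
      "tool": "run_sql",
      "warning": "failed 3 of 7 times when called immediately after create_branch"
    },
    { "server": "stripe", "tool": "update_subscription" }
  ]
}
```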
Execute
The agent still calls Neon, Stripe, GitHub directly. Rostr doesn't proxy HTTP. It routes attention, not packets.
Learn
log_run records outcomes. Patterns get extracted. .cursor/rules/rostr.mdc gets rewritten so the next turn starts with context.
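A log_run call might carry a payload along these lines; again, the exact fields are assumptions made for the example:

```json
{
  "goal": "migrate schema",
  "steps": ["neon.create_branch", "neon.run_sql"],
  "outcome": "success",
  "duration_ms": 8400
}
```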
Tools
| name | when | what |
|---|---|---|
| list_roster | Start of a task | Connected servers, saved workflows, recent stacks |
| suggest_plan | Before multi-step work | Optimal sequence with failure warnings from history |
| log_run | After a run | Persists outcome, extracts patterns, rewrites .mdc |
| recall_playbook | Unfamiliar stack combo | Known patterns and success rates for that set of servers |
Pattern learning
Every logged run gets fed through rule-based extraction. Failure rates, ordering dependencies, timing quirks. No model calls, no cloud. SQLite and a rules file that Cursor already reads.
- run_sql fails after fast runs: space steps
- create_branch → run_sql works in that order
- Neon stack: 43% success over 7 runs
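Rule-based extraction of this kind fits in a few lines. This is an illustrative approximation of the idea, not Rostr's actual code; the run shape and pattern wording are assumptions:

```python
from collections import defaultdict

def extract_patterns(runs: list[dict]) -> list[str]:
    """Derive simple patterns from logged runs (sketch).

    Each run is a dict like:
      {"stack": ("neon",), "steps": ["create_branch", "run_sql"], "ok": True}
    """
    patterns = []

    # 1. Success rate per server stack.
    by_stack = defaultdict(list)
    for r in runs:
        by_stack[tuple(sorted(r["stack"]))].append(r["ok"])
    for stack, oks in sorted(by_stack.items()):
        patterns.append(
            f"{'+'.join(stack)} stack: {sum(oks) / len(oks):.0%} "
            f"success over {len(oks)} runs"
        )

    # 2. Ordering dependency: a step pair that always succeeds in one
    #    order while the reverse order has never succeeded.
    pair_stats = defaultdict(lambda: [0, 0])  # (a, b) -> [successes, attempts]
    for r in runs:
        for a, b in zip(r["steps"], r["steps"][1:]):
            pair_stats[(a, b)][1] += 1
            pair_stats[(a, b)][0] += r["ok"]
    for (a, b), (wins, tries) in pair_stats.items():
        reverse = pair_stats.get((b, a))
        if wins == tries and reverse and reverse[0] == 0:
            patterns.append(f"{a} -> {b} works in that order")

    return patterns
```

Feeding it seven hypothetical Neon runs, three succeeding as create_branch → run_sql and four failing in the reverse order, reproduces patterns like the examples above.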
$ npm install -g rostr-mcp && rostr init
Local only. MIT licensed. No account, no API keys, no telemetry.