Varun Pratap Bhardwaj
Infrastructure · Part of Qualixar

SLM MCP Hub

The World's First MCP Gateway That Learns

One hub process. Every MCP server. Every AI client. SLM MCP Hub federates 400+ tools through 3 meta-tools, recovers 150K tokens of context per session, and shares cache and cost telemetry across Claude Code, Cursor, Windsurf, and VS Code. Pair it with SuperLocalMemory and the hub learns from every tool call.

The Problem

The Context Window Tax

Every AI coding session spawns its own MCP server processes. Five Claude Code sessions with 36 MCPs means 180 OS processes eating ~9 GB of RAM. Every session also loads ~150K tokens of tool definitions into the context window before you type a word. On a 200K-context model, 75% is gone before the agent thinks. And every session starts from zero — no shared cache, no cost tracking, no learning, no coordination.
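The arithmetic behind that tax, as a quick sanity check (all figures come from the claims above, not from a live measurement):

```python
# Back-of-envelope check of the "context window tax" figures above.
SESSIONS = 5
MCPS_PER_SESSION = 36
TOOL_DEF_TOKENS = 150_000   # tool definitions preloaded per session
CONTEXT_WINDOW = 200_000    # 200K-context model

processes = SESSIONS * MCPS_PER_SESSION       # 180 OS processes
context_share = TOOL_DEF_TOKENS / CONTEXT_WINDOW  # 0.75 of the window

print(processes)                 # 180
print(f"{context_share:.0%}")    # 75%
```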

180
OS processes for 5 sessions × 36 MCP servers (≈9 GB RAM)
How It Works

Key Capabilities

01

3 Meta-Tools, 430+ Tools

Instead of loading 400+ tool schemas into every session, clients get 3 meta-tools: hub__search_tools, hub__call_tool, hub__list_servers. The hub routes to the right server on demand.
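The routing idea can be sketched in a few lines. Everything here except the three meta-tool names is illustrative (the example registry, servers, and return values are not the hub's actual internals):

```python
# Toy sketch of meta-tool routing: many downstream tools, exposed to the
# client through only three entry points. The registry contents and the
# dispatch stub are hypothetical, for illustration only.
REGISTRY = {
    "github": ["create_issue", "list_pulls"],
    "postgres": ["run_query"],
}

def hub__list_servers():
    return sorted(REGISTRY)

def hub__search_tools(query):
    # Schemas are discovered on demand instead of preloaded into context.
    return [f"{srv}.{tool}"
            for srv, tools in REGISTRY.items()
            for tool in tools if query in tool]

def hub__call_tool(qualified_name, **args):
    srv, tool = qualified_name.split(".", 1)
    if tool not in REGISTRY.get(srv, []):
        raise KeyError(qualified_name)
    # Stand-in for routing the call to the right downstream MCP server.
    return f"routed {tool} to {srv} with {args}"
```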

02

79% Process Reduction

One hub process for all MCP servers. 5 sessions × 36 MCPs collapses from 180 processes (~9 GB RAM) to 37 processes (~1.9 GB RAM). Sessions connect over HTTP instead of spawning 36 child processes each.
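The 79% figure follows directly from one shared process per MCP server plus the hub itself, instead of per-session copies:

```python
# Verifying the process-reduction claim from the numbers above.
SESSIONS, MCPS = 5, 36
before = SESSIONS * MCPS      # 180: every session spawns its own 36 servers
after = MCPS + 1              # 37: 36 shared servers + 1 hub process

reduction = 1 - after / before
print(before, after, f"{reduction:.0%}")  # 180 37 79%
```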

03

150K Tokens Saved Per Session

Tool schemas are discovered on demand, not preloaded. Recover 75% of a 200K context window that was being consumed by tool definitions before the agent even started.

04

Federated Cache + Cost Intelligence

Every connected client shares the same cache, cost ledger, and trace log. Pair the hub with SuperLocalMemory and it learns which tool calls to cache, skip, or redirect based on past outcomes.
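One plausible shape for a cache shared across clients is a store keyed by server, tool, and canonicalized arguments. This is a minimal sketch under that assumption; the hub's real cache policy and the SuperLocalMemory learning loop are not modeled:

```python
import hashlib
import json

# Illustrative federated cache: every connected client reads and writes the
# same store. Eviction and outcome-based learning are deliberately omitted.
_CACHE = {}

def cache_key(server, tool, args):
    # Canonicalize arguments so identical calls from different IDEs collide.
    payload = json.dumps([server, tool, args], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_call(server, tool, args, call_fn):
    key = cache_key(server, tool, args)
    if key not in _CACHE:
        _CACHE[key] = call_fn(server, tool, args)  # first client pays the cost
    return _CACHE[key]  # later clients, in any IDE, reuse the result
```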

05

One Config, Every IDE

Configure MCPs once in the hub. Claude Code, Cursor, Windsurf, VS Code, and any MCP-compatible client share the same tool set via a single HTTP endpoint. No per-IDE config duplication.
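A client-side entry would then be a single HTTP endpoint rather than 36 server definitions. The key names, path, and port below are hypothetical; check your client's MCP configuration format and the hub's actual listen address:

```json
{
  "mcpServers": {
    "slm-hub": {
      "url": "http://localhost:PORT/mcp"
    }
  }
}
```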

Evidence
79%
Process Reduction
~7 GB
RAM Savings (5 sessions)
150K
Tokens Saved / Session
3
Meta-Tools
AGPL-3.0
License
0
Cloud Dependencies

Get Started

$ pip install slm-mcp-hub && slm-hub start
From the Qualixar Suite