Connect Your Agents

Wire up your full agent fleet — any provider, any framework, any model — through a single gateway.

For: Developers, Platform Engineers, DevOps
Last updated: 2026-03-25 • v4.9.0

Whatever providers you use, whatever models, wherever they're hosted — you provision them in Liberty, point them at the gateway, and the gateway does its job. Liberty holds the truth. The gateway enforces the chain. Your agents don't know the difference.

This product governs fleets of agents. The Gateway Quickstart walks through a single agent for simplicity, but every production deployment involves multiple agents across multiple providers. This page is where that starts.

One Gateway, Many Agents
  Claude Code ──┐
                │
  OpenAI Agent ─┤                          ┌── Anthropic API
                ├──▶ d3cipher Gateway ─────┤── OpenAI API
  Custom Agent ─┤    (stamps everything)   ├── Ollama (local)
                │                          ├── vLLM on RunPod
  Theia Agent ──┘                          └── Azure OpenAI
                

Prerequisites

Before connecting agents, you need a running gateway with at least one agent provisioned. If you haven't done this yet, complete the Gateway Quickstart first — specifically Steps 0–7 (signup through first stamp).

You should have a running gateway, at least one registered agent ID, and Liberty holding your upstream URLs and API keys.

A typical fleet looks like this after provisioning:

lockstock-gateway start
  Agents: 6 registered in Liberty
    CLAUDE_D3CIPHER → agent_0a387038_claude-d3cipher
    CLAUDE_MEM → agent_c5c4b49c_claude-mem
    CLAUDE_VSCODIUM → agent_93676ad1_claude-vscodium
    COUNTERPARTY_DEMO → agent_8764139f_counterparty_demo
    CUSTOMER_TEST_AGENT → agent_bee0a037_customer-test-agent
    THEIA_CLAUDE_AGENT → agent_b6e506a9_theia-claude-agent (Theia)

  Starting LockStock...
    Gateway:     registry.gitlab.com/.../gateway:v4.9.0
    Theia:       registry.gitlab.com/.../theia:v4.9.0
    Admin-tools: registry.gitlab.com/.../admin-tools:v4.9.0
    Agents:      6
      CLAUDE_D3CIPHER → https://api.anthropic.com
      CLAUDE_MEM → https://api.poe.com
      CLAUDE_VSCODIUM → https://api.poe.com
      THEIA_CLAUDE_AGENT → https://api.anthropic.com
    Port:        4000
    Layer 4:     enabled (envelope encryption)
    Layer 5:     enabled (MLS end-to-end encryption)

  LockStock running!
    Gateway:    http://localhost:4000/healthz
    Theia IDE:  http://localhost:3030

Mixed modes in one fleet: agents route through the gateway to different upstream providers (Anthropic, Poe/OpenAI, local Ollama), while Theia runs inside the container stack. All are stamped, audited, and governed through the same chain.


Any Provider, Any Model, Any Infrastructure

The gateway doesn't care which LLM provider your agents use. It doesn't parse model responses or make assumptions about the provider's API format. If a service speaks HTTP and accepts an API key, the gateway sits in front of it.

This means you can route agents to:

  • Hosted providers — the Anthropic API, OpenAI API, or Azure OpenAI
  • Local models — Ollama running on the same machine
  • Self-hosted inference — vLLM on RunPod, or any other OpenAI-compatible endpoint

The gateway has a default upstream URL (set in Liberty as LOCKSTOCK_UPSTREAM_URL). For per-agent routing to different providers, each agent stores its own upstream URL and API key in Liberty:

Per-agent upstream configuration in Liberty
# Default upstream (used when no per-agent upstream is set)
liberty add LOCKSTOCK_UPSTREAM_URL "https://api.anthropic.com"

# Per-agent upstream URLs
liberty add LOCKSTOCK_UPSTREAM_CLAUDE_AGENT "https://api.anthropic.com"
liberty add LOCKSTOCK_UPSTREAM_OLLAMA_LOCAL "http://localhost:11434"
liberty add LOCKSTOCK_UPSTREAM_CUSTOM_MODEL "https://my-runpod.run.app/v1"

# Per-agent API keys (the real upstream credentials)
liberty add LOCKSTOCK_APIKEY_CLAUDE_AGENT "sk-ant-api03-your-anthropic-key"
liberty add LOCKSTOCK_APIKEY_OLLAMA_LOCAL ""       # intentional — local models don't need auth
liberty add LOCKSTOCK_APIKEY_CUSTOM_MODEL "your-runpod-api-key"

# Restart to pick up changes
lockstock-gateway stop && lockstock-gateway start

When you launch an agent, your startup automation reads these values from Liberty and passes them to the agent as environment variables and HTTP headers. The Path B framework sections below show this launch pattern in full.

The mental model: your agent talks to the gateway. The gateway reads the headers, strips them, injects the real credentials, and forwards to the upstream provider. The upstream provider sees a normal authenticated request. Your agent never needs to know which provider it's actually talking to.

No env files on disk. Liberty encrypts every upstream URL and API key to your hardware. The gateway reads Liberty at startup, generates ephemeral config in /dev/shm (tmpfs), and shreds it after container launch. Nothing touches persistent storage. Learn more about Liberty →


Upstream Routing & the Passthrough Token

When an agent sends a request to the gateway, the gateway looks up that agent's upstream URL in Liberty and forwards the request there. The agent doesn't need to know which provider it's talking to — it just talks to the gateway.

But some SDKs validate the API key format before sending the request. The Anthropic SDK, for example, requires a key that starts with sk-ant-. If your agent is routed through the gateway to a non-Anthropic provider (or if the gateway injects the real key from Liberty), the SDK would reject a key that doesn't match its expected format.

The solution is the gateway passthrough token:

sk-ant-api03-gateway-passthrough-lockstock

This is a gateway passthrough token — a syntactically valid key that satisfies the SDK's format check. It never reaches the upstream provider. Here's what happens:

  1. The SDK sees sk-ant-* and passes format validation
  2. Your agent's real upstream key travels in the X-D3cipher-Provider-Key header
  3. The gateway reads X-D3cipher-Provider-Key, ignores the passthrough token, and forwards the real key to the upstream provider

The SDK is happy (valid key format). The gateway gets the real credentials via the header. The upstream provider gets a properly authenticated request. The real API key is read from Liberty at launch time and passed only through the header — never hardcoded, never in a config file.

This is not a "dummy key." The passthrough token is doing real architectural work — it satisfies SDK validation, signals the gateway to perform credential injection, and ensures the real API key never appears in your agent's process environment. It's a deliberate part of the zero-trust design.
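The three-step flow above can be sketched gateway-side. This is illustrative pseudologic, not the actual gateway source — the header names come from this page, but the function and its error handling are assumptions:

```python
PASSTHROUGH_TOKEN = "sk-ant-api03-gateway-passthrough-lockstock"

def resolve_credentials(headers):
    """Pick the key to forward upstream, mirroring the three steps above:
    the passthrough token only satisfies SDK validation client-side, the
    real key rides in X-D3cipher-Provider-Key, and the gateway prefers it."""
    provider_key = headers.get("X-D3cipher-Provider-Key")
    sdk_key = (headers.get("Authorization", "").removeprefix("Bearer ")
               or headers.get("x-api-key"))

    if provider_key:                    # step 3: the real key wins
        return provider_key
    if sdk_key == PASSTHROUGH_TOKEN:    # passthrough alone: nothing real to send
        raise ValueError("passthrough token without X-D3cipher-Provider-Key")
    return sdk_key                      # real key passed directly (Anthropic upstream)
```

Either way, the passthrough token itself is dropped before the request leaves the gateway.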


Path A: Theia (Recommended)

If you're running collaborative multi-agent workloads — especially workrooms, cross-gateway, or cross-org collaboration — Theia is the recommended path. It's a container-managed agent environment that ships with the gateway stack.

lockstock-gateway start brings up three containers: the gateway, Theia, and admin-tools. Theia agents are fully managed — MCP configs are auto-generated, identity is provisioned from Liberty, and the agent appears in the dashboard immediately.

Getting Started with Theia

1. Provision a Theia agent:

lockstock-gateway provision --name THEIA_AGENT

2. Start the stack (Theia starts automatically):

lockstock-gateway stop && lockstock-gateway start

3. Access Theia in your browser:

# Same host:
http://localhost:3030

# Different host / container / VM:
http://<gateway-host>:3030

4. Generate MCP config for the agent:

lockstock-gateway mcp-config --agent THEIA_AGENT --dir /path/to/project

That's it. The agent is stamped on every request, has wallet access through the MCP proxy, and can participate in workrooms with other agents on the same gateway or across organizations.

Why Theia? Theia is where the product is heading. It provides three collaboration modes that external clients don't get out of the box:

  • Workrooms — named collaboration spaces where multiple agents share context through stamped channels. Agents join, announce intent, share updates, and coordinate work — all audited on the chain.
  • Intra-gateway collaboration — agents on the same gateway see each other's activity, avoid file conflicts through advisory locks, and request sync when one agent pushes changes the others need.
  • Cross-gateway and cross-org — agents on different gateways, potentially in different organizations, collaborate through MLS-encrypted workrooms. Both sides get full audit trails. Neither side can read the other's chain — but both can verify it.

Path B: Bring Your Own Client (Any Framework)

For teams with existing agent infrastructure. Your agents keep running where they are — you just point them at the gateway instead of directly at the provider.

The universal pattern is three things:

  1. Base URL: http://localhost:4000 (same host) or http://<gateway-host>:4000 (different host, container, or VM)
  2. Identity header: X-D3cipher-Agent: <agent_id>
  3. API key: from Liberty (not hardcoded in client config)

Gateway URL: All examples below use localhost:4000. If your agent runs on a different host, container, or VM from the gateway, replace localhost with your gateway's hostname or IP.

Claude Code (Anthropic SDK)

Claude Code uses the Anthropic SDK, which requires an API key starting with sk-ant-. All configuration comes from Liberty — you read the values at launch time and pass them as environment variables and headers.

1. Read your agent's configuration from Liberty:

# Agent identity (provisioned in the Gateway Quickstart)
AGENT_ID=$(liberty show LOCKSTOCK_AGENT_MY_AGENT)

# Per-agent API key (the real upstream credential)
APIKEY=$(liberty show LOCKSTOCK_APIKEY_MY_AGENT)

# Per-agent upstream URL
UPSTREAM=$(liberty show LOCKSTOCK_UPSTREAM_MY_AGENT)

# MCP config path (generated by lockstock-gateway start or mcp-config)
MCP_CONFIG=$(liberty show LOCKSTOCK_MCP_MY_AGENT)

2. Launch Claude Code through the gateway:

Anthropic upstream (key starts with sk-ant-)
# When your upstream IS Anthropic, the real key passes SDK validation directly.
# X-D3cipher-Upstream tells the gateway where to forward.
ANTHROPIC_BASE_URL=http://localhost:4000 \
ANTHROPIC_API_KEY="$APIKEY" \
ANTHROPIC_CUSTOM_HEADERS=$'X-D3cipher-Agent: '"$AGENT_ID"$'\nX-D3cipher-Upstream: '"$UPSTREAM" \
claude --mcp-config "$MCP_CONFIG"
Non-Anthropic upstream (OpenAI, Ollama, vLLM, etc.)
# When your upstream is NOT Anthropic, the real key fails sk-ant-* validation.
# Use the passthrough token for SDK validation; send the real key
# via X-D3cipher-Provider-Key. The gateway uses the provider key
# and ignores the passthrough token.
ANTHROPIC_BASE_URL=http://localhost:4000 \
ANTHROPIC_API_KEY=sk-ant-api03-gateway-passthrough-lockstock \
ANTHROPIC_CUSTOM_HEADERS=$'X-D3cipher-Agent: '"$AGENT_ID"$'\nX-D3cipher-Upstream: '"$UPSTREAM"$'\nX-D3cipher-Provider-Key: '"$APIKEY" \
claude --mcp-config "$MCP_CONFIG"

Headers are newline-separated. ANTHROPIC_CUSTOM_HEADERS uses $'\n' (bash ANSI-C quoting) to separate multiple headers. The SDK splits on newlines, not commas.
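If you assemble the header string programmatically rather than inline in the shell, a small sketch of the newline-joined format (the header names are the ones documented on this page; the helper itself is illustrative):

```python
def build_custom_headers(agent_id, upstream, provider_key=None):
    """Assemble the newline-separated string ANTHROPIC_CUSTOM_HEADERS expects."""
    headers = {
        "X-D3cipher-Agent": agent_id,
        "X-D3cipher-Upstream": upstream,
    }
    if provider_key:  # only needed for non-Anthropic upstreams
        headers["X-D3cipher-Provider-Key"] = provider_key
    return "\n".join(f"{name}: {value}" for name, value in headers.items())
```

Joining with "\n" here is exactly what the bash $'\n' quoting produces; a comma-joined string would arrive as one malformed header.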

3. Verify it's working:

Claude Code starts normally. Every request is stamped by the gateway. Check the dashboard to see stamps appearing under your agent's ID, or run:

lockstock-audit last --agent MY_AGENT

OpenAI Python SDK

The OpenAI Python SDK supports custom base URLs and default headers. Point it at the gateway and include your agent identity header.

1. Install the SDK:

pip install openai

2. Complete script — ready to run:

connect_openai.py
import subprocess
from openai import OpenAI

def liberty_show(key):
    """Read a value from Liberty vault."""
    return subprocess.run(
        ["liberty", "show", key],
        capture_output=True, text=True, check=True
    ).stdout.strip()

# Read all configuration from Liberty (not hardcoded, not in env files)
agent_id = liberty_show("LOCKSTOCK_AGENT_MY_AGENT")
api_key  = liberty_show("LOCKSTOCK_APIKEY_MY_AGENT")
upstream = liberty_show("LOCKSTOCK_UPSTREAM_MY_AGENT")

# Point the SDK at the gateway instead of the provider directly
client = OpenAI(
    base_url="http://localhost:4000/v1",
    api_key=api_key,
    default_headers={
        "X-D3cipher-Agent": agent_id,
        "X-D3cipher-Upstream": upstream,
    },
)

# Make a request — the gateway stamps it, forwards to your upstream
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Hello from a governed agent!"}
    ],
)

print(response.choices[0].message.content)
print(f"\nAgent: {agent_id}")
print("Request stamped and audited through d3cipher gateway.")

3. Run it:

python connect_openai.py

The response comes back normally. The gateway stamped the request and response transparently — the SDK doesn't know it's being governed.

OpenAI Node.js SDK

Same pattern as Python. The OpenAI Node.js SDK accepts baseURL and defaultHeaders in the constructor.

1. Install the SDK:

npm install openai

2. Complete script — ready to run:

connect_openai.mjs
import OpenAI from "openai";
import { execSync } from "child_process";

// Read all configuration from Liberty
const libertyShow = (key) => execSync(`liberty show ${key}`).toString().trim();

const agentId  = libertyShow("LOCKSTOCK_AGENT_MY_AGENT");
const apiKey   = libertyShow("LOCKSTOCK_APIKEY_MY_AGENT");
const upstream = libertyShow("LOCKSTOCK_UPSTREAM_MY_AGENT");

// Point the SDK at the gateway
const client = new OpenAI({
  baseURL: "http://localhost:4000/v1",
  apiKey: apiKey,
  defaultHeaders: {
    "X-D3cipher-Agent": agentId,
    "X-D3cipher-Upstream": upstream,
  },
});

// Make a request — stamped and audited transparently
const response = await client.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "user", content: "Hello from a governed agent!" },
  ],
});

console.log(response.choices[0].message.content);
console.log(`\nAgent: ${agentId}`);
console.log("Request stamped and audited through d3cipher gateway.");

3. Run it:

node connect_openai.mjs

curl

For quick testing or shell scripts. This is the raw HTTP request the gateway expects — every SDK example above produces something equivalent to this.

Complete commands — ready to copy and paste:

OpenAI-compatible upstream
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(liberty show LOCKSTOCK_APIKEY_MY_AGENT)" \
  -H "X-D3cipher-Agent: $(liberty show LOCKSTOCK_AGENT_MY_AGENT)" \
  -H "X-D3cipher-Upstream: $(liberty show LOCKSTOCK_UPSTREAM_MY_AGENT)" \
  -d '{"model":"gpt-4","messages":[{"role":"user","content":"Hello from a governed agent!"}]}'
Anthropic upstream
curl http://localhost:4000/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: $(liberty show LOCKSTOCK_APIKEY_MY_AGENT)" \
  -H "anthropic-version: 2023-06-01" \
  -H "X-D3cipher-Agent: $(liberty show LOCKSTOCK_AGENT_MY_AGENT)" \
  -H "X-D3cipher-Upstream: $(liberty show LOCKSTOCK_UPSTREAM_MY_AGENT)" \
  -d '{"model":"claude-sonnet-4-20250514","max_tokens":256,"messages":[{"role":"user","content":"Hello from a governed agent!"}]}'

Both return the provider's normal JSON response. The gateway stamped and audited the exchange transparently.

Any HTTP Client (Python httpx / requests)

For custom agents, data pipelines, or any code that makes HTTP requests. The gateway is a transparent proxy — any HTTP client works.

Complete script with httpx:

connect_httpx.py
import subprocess
import httpx

def liberty_show(key):
    """Read a value from Liberty vault."""
    return subprocess.run(
        ["liberty", "show", key],
        capture_output=True, text=True, check=True
    ).stdout.strip()

# Read all configuration from Liberty
agent_id = liberty_show("LOCKSTOCK_AGENT_MY_AGENT")
api_key  = liberty_show("LOCKSTOCK_APIKEY_MY_AGENT")
upstream = liberty_show("LOCKSTOCK_UPSTREAM_MY_AGENT")

# Send request through the gateway
response = httpx.post(
    "http://localhost:4000/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
        "X-D3cipher-Agent": agent_id,
        "X-D3cipher-Upstream": upstream,
    },
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": "Hello from a governed agent!"}
        ],
    },
    timeout=60.0,
)

# Check the response
response.raise_for_status()
data = response.json()
print(data["choices"][0]["message"]["content"])
print(f"\nAgent: {agent_id}")
print(f"Status: {response.status_code}")
print("Request stamped and audited through d3cipher gateway.")

With requests (same pattern):

connect_requests.py
import subprocess
import requests as req

def liberty_show(key):
    return subprocess.run(
        ["liberty", "show", key],
        capture_output=True, text=True, check=True
    ).stdout.strip()

agent_id = liberty_show("LOCKSTOCK_AGENT_MY_AGENT")
api_key  = liberty_show("LOCKSTOCK_APIKEY_MY_AGENT")
upstream = liberty_show("LOCKSTOCK_UPSTREAM_MY_AGENT")

response = req.post(
    "http://localhost:4000/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
        "X-D3cipher-Agent": agent_id,
        "X-D3cipher-Upstream": upstream,
    },
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": "Hello from a governed agent!"}
        ],
    },
    timeout=60,
)

response.raise_for_status()
data = response.json()
print(data["choices"][0]["message"]["content"])

MCP Tools (Wallet Access)

The LockStock wallet exposes 14 tools through the Model Context Protocol (MCP) — artifact registration, chain verification, identity checks, and workroom collaboration. Your agent connects to the wallet through the gateway's MCP proxy, which stamps Authenticate on each connection.

Generate a per-agent MCP config:

lockstock-gateway mcp-config --agent MY_AGENT --dir /path/to/project

This creates a .mcp-MY_AGENT.json file in the specified directory:

.mcp-MY_AGENT.json
{
  "mcpServers": {
    "lockstock-wallet": {
      "type": "sse",
      "url": "http://localhost:4000/mcp/sse?agent=agent_xxxxxxxx_my-agent"
    }
  }
}

Remote gateway? If your gateway runs on a different host from your agent, the generated URL will say localhost:4000 and won't connect. Edit the JSON to replace localhost with your gateway's hostname or IP before passing it to your agent.
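If you script that edit, a minimal sketch (the helper name is hypothetical; the file layout matches the generated config shown above):

```python
import json

def point_mcp_config_at(config_text, gateway_host):
    """Rewrite the generated MCP config's SSE URL to target a remote gateway."""
    config = json.loads(config_text)
    server = config["mcpServers"]["lockstock-wallet"]
    # Replace only the host portion; the port and agent query string stay intact.
    server["url"] = server["url"].replace("localhost", gateway_host, 1)
    return json.dumps(config, indent=2)
```

Run it over the generated .mcp-MY_AGENT.json, write the result back, and pass that file to your agent.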

Pass it to Claude Code:

claude --mcp-config /path/to/project/.mcp-MY_AGENT.json

The agent now has access to wallet tools: lockstock_get_identity, lockstock_register_artifact, lockstock_verify_chain, lockstock_authenticate, workroom_join, and more. All tool calls are routed through the gateway — one stamping path, one audit trail.

Theia agents get this automatically. If you're using Path A (Theia), MCP configs are generated and wired up during lockstock-gateway start. This section is for Path B agents that need manual MCP configuration.


Verify It's Working

Don't stop at /healthz. A healthy gateway doesn't mean your agent is actually being stamped. Follow this full verification sequence. Replace localhost with your gateway's hostname if the gateway runs on a different host.

1. Gateway is up

curl http://localhost:4000/healthz
# Expected: {"status":"ok","version":"4.9.0"}

2. Send a real request through the gateway

Use any of the framework examples above, or the curl command:

curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(liberty show LOCKSTOCK_APIKEY_MY_AGENT)" \
  -H "X-D3cipher-Agent: $(liberty show LOCKSTOCK_AGENT_MY_AGENT)" \
  -H "X-D3cipher-Upstream: $(liberty show LOCKSTOCK_UPSTREAM_MY_AGENT)" \
  -d '{"model":"gpt-4","messages":[{"role":"user","content":"ping"}]}'

3. Confirm the stamp appeared

Check the dashboard or use the CLI:

lockstock-audit last --agent MY_AGENT
# Look for: agent_id matches, sequence incremented, timestamp is recent

4. Verify the chain is real (optional)

The stamp response includes a state_hash. You can verify it against the server to confirm the chain is cryptographically valid — not just HTTP proxying:

curl "https://api.d3cipher.ai/v1/verify/$(liberty show LOCKSTOCK_AGENT_MY_AGENT)?hash=STATE_HASH" \
  -H "Authorization: Bearer $(liberty show LOCKSTOCK_ADMIN_KEY)"
# Expected: {"valid":true,"sequence":N,"credential_model":"AAAA"}

All four steps pass? Your agent is fully governed. Every request is stamped, audited, and cryptographically verifiable. The agent doesn't know it's being governed — that's the point.


Troubleshooting

connection refused on localhost:4000: The gateway isn't running. Start it with lockstock-gateway start. If it was running, check docker ps — the container may have exited.

Gateway returns 400 "Missing X-D3cipher-Agent header": Your request is reaching the gateway but doesn't include the identity header. Add X-D3cipher-Agent: <agent_id> to your request headers. In SDKs, this goes in default_headers or ANTHROPIC_CUSTOM_HEADERS.

SDK rejects the API key format: Use the gateway passthrough token sk-ant-api03-gateway-passthrough-lockstock for SDKs that validate key format (like the Anthropic SDK). See Upstream Routing & the Passthrough Token above.

MCP tools not available in Claude Code: Restart Claude Code after generating or updating the MCP config. Verify the config path: liberty show LOCKSTOCK_MCP_MY_AGENT should point to the .mcp-MY_AGENT.json file. Check that the file exists and contains the correct SSE URL.

Agent not appearing in dashboard: Confirm the agent is sending requests through the gateway (check the base URL) and that the X-D3cipher-Agent header contains the correct agent ID (not the agent name). The ID looks like agent_xxxxxxxx_my-agent.
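A quick sanity check for the header value. The grammar below is inferred from the example IDs on this page, not a documented spec, so treat it as a heuristic:

```python
import re

# Inferred from IDs like agent_0a387038_claude-d3cipher and
# agent_8764139f_counterparty_demo: "agent_", 8 hex chars, then the
# lowercased agent name. Not an official grammar -- a sanity check only.
AGENT_ID_PATTERN = re.compile(r"^agent_[0-9a-f]{8}_[a-z0-9_-]+$")

def looks_like_agent_id(value):
    """True for values shaped like an agent ID rather than a bare agent name."""
    return bool(AGENT_ID_PATTERN.match(value))
```

If this returns False for what you're sending in X-D3cipher-Agent, you're probably passing the agent name (like MY_AGENT) instead of the provisioned ID.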

Gateway returns 503: The gateway can't reach the d3cipher cloud or the circuit breaker has tripped. Check lockstock-gateway status and the dashboard for details. If the circuit breaker tripped, click Unlock in the dashboard to restore access.


Next Steps