Checkrd
Open Source

Know what your AI agents are doing.

Policy enforcement, kill switch, and full observability for every API call. Three lines of code. Zero PII stored.

main.py
import checkrd
checkrd.init(api_key="ck_live_...")
checkrd.instrument()  # patches OpenAI, Anthropic, Cohere, ...

from openai import OpenAI
client = OpenAI()

# Every LLM call is now policy-enforced and observable
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "..."}],
)
Open Source
233 KB WASM Core
62µs Overhead
Zero PII Stored
RFC 9421 Signed
OTLP Compatible

AI agents are running unsupervised

Companies are deploying agents to production with no way to see what they're doing, stop them if they go wrong, or prove they stayed within bounds.

No visibility

Your agents make hundreds of API calls. You can't see which endpoints, how often, or what's failing.

No guardrails

The only thing between a prompt-injected agent and your production Stripe API is the LLM's judgment.

No kill switch

When an agent goes rogue at 3 AM, your only option is to find the developer who deployed it.

One proxy layer. Complete control.

Checkrd sits between your agents and every external API they call. No agent code changes. No deployment changes.

Your Agent Code → Checkrd (wrap / instrument) → External API

Every request is evaluated, logged, and controllable. Telemetry flows to your dashboard.

See everything

Every API call, every response code, every millisecond of latency. Structured telemetry with zero code changes.

Enforce policies

YAML rules at the network layer. Allow/deny by endpoint, method, rate limit, time of day. Not prompt engineering — real enforcement.
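A policy file for the rules above might look like the following. This is an illustrative sketch only: the field names (`rules`, `match`, `action`, `rate_limit`, `hours`) are hypothetical and not necessarily Checkrd's actual schema.

```yaml
# policy.yaml — illustrative sketch, not the real Checkrd schema
rules:
  - match:
      host: api.stripe.com
      method: GET
    action: allow            # agents may read Stripe data
  - match:
      host: api.stripe.com
      method: POST
      path: /v1/charges
    action: deny             # but never create charges
  - match:
      host: "*"
    action: allow
    rate_limit: 100/min      # cap total outbound calls
    hours: "09:00-18:00"     # only during business hours (UTC)
```

Because evaluation happens at the network layer, a prompt-injected agent cannot talk its way past these rules.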

Kill it instantly

One click in the dashboard. Every outbound call stops. The safety net you need before deploying agents to production.

Three ways to integrate

Auto-instrument your AI SDKs, wrap any HTTP client, or send OpenTelemetry spans. Pick what fits your stack.

Auto-instrument · 3 lines
import checkrd

checkrd.init(api_key="ck_live_...")
checkrd.instrument()

# That's it. Every AI SDK call is now monitored.
Wrap HTTP client · Any API
from checkrd import wrap
import httpx

client = wrap(
    httpx.Client(),
    policy="policy.yaml",
)

response = client.get("https://api.stripe.com/v1/charges")
OpenTelemetry · 0 code changes
# Point your OpenTelemetry Collector at Checkrd.
# Zero code changes to your agents.

exporters:
  otlphttp:
    endpoint: "https://api.checkrd.io/v1/traces"
    headers:
      Authorization: "Bearer ck_live_..."

Works with

OpenAI · Anthropic · Cohere · Groq · Mistral · Together AI · Google GenAI

Your data never touches our servers.

Checkrd stores only operational metadata: endpoint, method, status code, latency. Request bodies, prompts, completions, and API keys are never captured. Sensitive path segments are parameterized client-side before data leaves your machine.
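Client-side parameterization can be pictured as a simple rewrite of ID-like path segments before telemetry is emitted. The sketch below is illustrative of the idea, not the actual Checkrd SDK; the regex and placeholder format are assumptions.

```python
import re

# Match path segments that look like identifiers: prefixed IDs
# (ch_1Nir...), UUIDs / long hex tokens, or long numeric IDs.
ID_SEGMENT = re.compile(
    r"^(?:[a-z]+_[A-Za-z0-9]{8,}"   # prefixed IDs like ch_1NirD8...
    r"|[0-9a-fA-F-]{16,}"           # UUIDs / long hex tokens
    r"|\d{4,})$"                    # long numeric IDs
)

def parameterize(path: str) -> str:
    """Replace ID-like segments so only the endpoint shape is kept."""
    parts = path.split("/")
    return "/".join("{id}" if ID_SEGMENT.match(p) else p for p in parts)

print(parameterize("/v1/charges/ch_1NirD82eZvKYlo2C"))
# -> /v1/charges/{id}
```

The rewrite runs on your machine, so the raw identifier never appears in any telemetry batch.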

Client-side only

PII parameterization happens in the SDK, on your machine, before any data is sent.

Compile-time enforced

Every telemetry field has a PII classification. CI fails if an unclassified field is added.

No scrubbing needed

We don't redact data after the fact. Sensitive data is structurally excluded from capture.
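The "every field is classified, CI fails otherwise" idea can be sketched as follows. This is a hypothetical illustration of the mechanism, not Checkrd's actual implementation: the field names and the `pii` metadata key are made up.

```python
from dataclasses import dataclass, field, fields

# Every telemetry field must declare a PII classification.
ALLOWED = {"none", "parameterized"}

@dataclass
class TelemetryEvent:
    endpoint: str = field(default="", metadata={"pii": "parameterized"})
    method: str = field(default="GET", metadata={"pii": "none"})
    status_code: int = field(default=0, metadata={"pii": "none"})
    latency_ms: float = field(default=0.0, metadata={"pii": "none"})

def check_classifications(cls) -> list[str]:
    """Return the names of fields missing a valid PII classification.
    A CI step would fail the build if this list is non-empty."""
    return [f.name for f in fields(cls)
            if f.metadata.get("pii") not in ALLOWED]

assert check_classifications(TelemetryEvent) == []
```

Adding a field without a classification (say, `request_body`) makes `check_classifications` report it, so sensitive data is excluded by construction rather than scrubbed afterwards.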

Open-source core. Cryptographic trust.

The proxy engine and SDK are open source. Telemetry batches are signed with Ed25519 via RFC 9421 HTTP Message Signatures. Policy evaluation runs in a WASM sandbox with zero I/O access.
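What the Ed25519 key actually signs under RFC 9421 is a "signature base": one line per covered request component plus a final `@signature-params` line. The sketch below follows the spec's construction, but the component values and parameters are made-up examples, not real Checkrd traffic.

```python
def signature_base(components: dict[str, str], params: str) -> str:
    """Build an RFC 9421 signature base: one '"name": value' line per
    covered component, then the '@signature-params' line."""
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

covered = {
    "@method": "POST",
    "@target-uri": "https://api.checkrd.io/v1/traces",
    "content-digest": "sha-256=:X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=:",
}
params = '("@method" "@target-uri" "content-digest");created=1700000000'
base = signature_base(covered, params)
# `base` is the exact byte string the Ed25519 key signs; the server
# reconstructs it from the received request and verifies the signature.
```

Covering the `Content-Digest` header (RFC 9530) means the signature also binds the batch body, so a tampered batch fails verification.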

WASM Sandbox

233 KB, 62µs eval, zero filesystem or network access

Ed25519 Signatures

RFC 8032 compliant, Wycheproof tested (150 vectors)

HTTP Signatures

RFC 9421 + RFC 9530 Content-Digest on every batch

DSSE Envelopes

Dead Simple Signing Envelopes for policy distribution
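For policy distribution, what gets signed in a DSSE envelope is the pre-authentication encoding (PAE) of the payload type and payload, per the DSSE spec. The sketch below implements PAE itself; the media type shown is a hypothetical example, not Checkrd's actual payload type.

```python
def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE pre-authentication encoding:
    'DSSEv1' SP LEN(type) SP type SP LEN(body) SP body."""
    t = payload_type.encode("utf-8")
    return b" ".join([
        b"DSSEv1",
        str(len(t)).encode(), t,
        str(len(payload)).encode(), payload,
    ])

print(pae("application/vnd.checkrd.policy+yaml", b"rules: []"))
# -> b'DSSEv1 35 application/vnd.checkrd.policy+yaml 9 rules: []'
```

Binding the payload type into the signed bytes prevents an attacker from replaying a signature made over one kind of document against another.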

Start free. Scale when you're ready.

The open-source proxy is free forever. The hosted control plane starts at $49/month.

Free

$0/mo

5 agents, 100K events/mo

Team

$49/mo

50 agents, 1M events/mo, audit log

Enterprise

Custom

Unlimited, SSO, SLA, dedicated support

Start in under 5 minutes.

pip install checkrd