# Dev mode

Local HTTP proxy that runs the WASM policy engine. Iterate on policy YAML against real traffic in seconds — no Docker, no setup.
`checkrd dev` boots a local HTTP proxy that runs the same WASM policy engine the SDK loads in production. Point any HTTP client at it: traffic flows through the engine and decisions print to your terminal — no Docker, no service stack, no setup beyond a `checkrd.yaml` file.

This is the iteration loop for policy authoring: edit `checkrd.yaml`, save, and the engine reloads automatically; your next request reflects the change.
## Quickstart
```sh
# 1. Drop a policy file in the current directory
cat > checkrd.yaml <<'EOF'
agent: dev-test
default: deny
rules:
  - name: allow-openai
    allow:
      method: [POST]
      url: api.openai.com/v1/chat/completions
EOF
```
```sh
# 2. Start the dev proxy
checkrd dev
# ✓ checkrd dev listening on http://127.0.0.1:8080
#   policy: checkrd.yaml
#   mode:   forward-proxy (Host: header determines upstream)
#   watch:  on (hot-reload enabled)
#   Ctrl-C to exit.
```

```sh
# 3. Hit it (in another terminal)
curl -x http://127.0.0.1:8080 https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model":"gpt-4","messages":[{"role":"user","content":"hi"}]}'
# allow POST https://api.openai.com/v1/chat/completions [allow-openai]

curl -x http://127.0.0.1:8080 https://api.anthropic.com/v1/messages
# deny GET https://api.anthropic.com/v1/messages [default] default deny — no rule matched
```

## How it works
`crates/core` is `cdylib + rlib`, so the CLI links it as a regular Rust dependency and calls `PolicyEngine::evaluate_full` directly. No wasmtime, no `.wasm` loading, no FFI — just the same Rust code path the SDK wrappers compile to WASM. Policies that pass dev will pass production.
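The quickstart policy implies first-match semantics: rules are scanned in order, the first matching allow rule wins, and otherwise the `default` verdict applies. A minimal Python sketch of that loop (illustrative only; the real engine is the Rust `PolicyEngine`, and the URL-matching details here are assumptions):

```python
from fnmatch import fnmatch

def evaluate(policy: dict, method: str, url: str):
    """Scan rules in order; the first matching allow rule wins,
    otherwise fall back to the policy's default verdict."""
    bare = url.split("://", 1)[-1]  # compare without the scheme
    for rule in policy.get("rules", []):
        allow = rule["allow"]
        if method in allow["method"] and fnmatch(bare, allow["url"] + "*"):
            return ("allow", rule["name"])
    return (policy["default"], "default")

policy = {
    "agent": "dev-test",
    "default": "deny",
    "rules": [{"name": "allow-openai",
               "allow": {"method": ["POST"],
                         "url": "api.openai.com/v1/chat/completions"}}],
}
print(evaluate(policy, "POST", "https://api.openai.com/v1/chat/completions"))
# ('allow', 'allow-openai')
print(evaluate(policy, "GET", "https://api.anthropic.com/v1/messages"))
# ('deny', 'default')
```

This matches the two decision lines from the quickstart: the OpenAI POST hits `allow-openai`, the Anthropic GET falls through to the default deny.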
```
┌─────────┐  HTTP   ┌─────────────────────┐  HTTP   ┌──────────────┐
│ curl /  │────────▶│     checkrd dev     │────────▶│   upstream   │
│  agent  │         │  ┌───────────────┐  │         │ (api.openai) │
└─────────┘         │  │ PolicyEngine  │  │         └──────────────┘
                    │  │  (Rust core)  │  │
                    │  └───────────────┘  │
                    │       ▲ hot-reload  │
                    │       │             │
                    │  ┌────┴──────────┐  │
                    │  │ checkrd.yaml  │  │
                    │  └───────────────┘  │
                    └─────────────────────┘
```

## Forward-proxy vs reverse-proxy
### Forward-proxy mode (default)
Treat `checkrd dev` as an outbound proxy — set `HTTP_PROXY=http://localhost:8080` in your agent's environment (or pass `-x http://localhost:8080` to curl). The `Host` header in each request determines the upstream.
```sh
HTTP_PROXY=http://localhost:8080 \
HTTPS_PROXY=http://localhost:8080 \
python my_agent.py
```

This is the closest match to how the production SDK intercepts traffic — the policy engine sits between your agent and every API it calls.
### Reverse-proxy mode (`--upstream`)
Pin every request to a single upstream URL. Useful when you can't easily set proxy env vars (curl in a one-liner, browser dev tools, etc.) or when iterating on a policy for one specific API.
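The URL rewriting in this mode is simple: the request's path and query are kept, and the pinned upstream supplies the scheme and host. An illustrative Python sketch (not the proxy's actual code):

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_to_upstream(request_url: str, upstream: str) -> str:
    """Keep the incoming request's path and query; swap in the
    pinned upstream's scheme and host."""
    req = urlsplit(request_url)
    up = urlsplit(upstream)
    return urlunsplit((up.scheme, up.netloc, req.path, req.query, ""))

print(rewrite_to_upstream("http://localhost:8080/v1/chat/completions",
                          "https://api.openai.com"))
# https://api.openai.com/v1/chat/completions
```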
```sh
checkrd dev --upstream https://api.openai.com
# All requests to localhost:8080/* forward to api.openai.com/*
curl http://localhost:8080/v1/chat/completions ...
```

## Hot reload
By default, `checkrd dev` watches `checkrd.yaml` and reloads the engine on save. The reload is debounced by 250 ms (text editors often write multiple times in quick succession) and atomic — if the new policy fails to compile, the previous one stays in effect.
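The debounce-and-atomic-swap behavior can be sketched like this (illustrative Python; the CLI's actual watcher is Rust, and these names are made up):

```python
import time

class DebouncedReloader:
    """Collapse rapid saves into one reload; keep the old policy
    when the new one fails to parse. Illustrative sketch only."""
    def __init__(self, load, debounce_s=0.25):
        self.load = load                  # callable that parses the policy file
        self.debounce_s = debounce_s
        self.policy = load()
        self._pending_since = None

    def on_save(self):
        # File-watch event: start (or restart) the debounce window.
        self._pending_since = time.monotonic()

    def tick(self):
        # Called periodically; reloads once the window has passed quietly.
        if self._pending_since is None:
            return
        if time.monotonic() - self._pending_since < self.debounce_s:
            return
        self._pending_since = None
        try:
            self.policy = self.load()     # swap only on success (atomic)
        except ValueError:
            pass                          # parse error: keep previous policy
```

The key property is in `tick`: a failed `load()` never clobbers `self.policy`, which is why a typo in `checkrd.yaml` leaves the running policy untouched.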
```sh
# In one terminal
checkrd dev
# ✓ checkrd dev listening on http://127.0.0.1:8080

# In another, edit checkrd.yaml and save
# ✓ policy reloaded from checkrd.yaml

# A typo will keep the previous policy in effect:
# ✗ policy reload failed: could not parse checkrd.yaml as policy YAML: ...
#   (keeping previous policy in effect)
```

To disable for a one-shot debug session:

```sh
checkrd dev --watch=false
```

## Flags reference
| Flag | Default | What |
|---|---|---|
| `--port` | `8080` | Listen port |
| `--addr` | `127.0.0.1` | Bind address. Use `0.0.0.0` to expose to your LAN (testing from a phone or container) — never in production |
| `--policy` | `./checkrd.yaml` | Policy YAML path |
| `--upstream` | (none) | Pin all requests to this upstream (reverse-proxy mode) |
| `--watch` | `true` | Hot-reload the policy on save |
## Decision log format
Each request prints one line to stderr:
```
allow POST https://api.openai.com/v1/chat/completions [allow-openai]
deny GET https://api.anthropic.com/v1/messages [default] default deny — no rule matched
```

Columns: verdict, method, URL, `[matched-rule]`, and (for denies) the reason. Output is colored when stderr is a TTY and plain when piped — your log shipper gets clean text.
This is intentionally human-formatted — pipe stdout (which carries the upstream's response) wherever you'd normally pipe traffic. Structured-log emission to stderr is a v0.2 feature; if you need that today, run `checkrd dev` under `caddy run --watch` or similar.
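Until structured logs land, the line format is regular enough to scrape. A hypothetical parser for it (the column layout above is the only contract assumed here):

```python
import re

# Matches: verdict, method, URL, [matched-rule], optional trailing reason.
DECISION = re.compile(
    r"^(?P<verdict>allow|deny)\s+(?P<method>\S+)\s+(?P<url>\S+)\s+"
    r"\[(?P<rule>[^\]]+)\]\s*(?P<reason>.*)$"
)

def parse_decision(line: str):
    """Return the decision fields as a dict, or None for non-decision lines."""
    m = DECISION.match(line)
    return m.groupdict() if m else None

d = parse_decision("deny GET https://api.anthropic.com/v1/messages "
                   "[default] default deny — no rule matched")
print(d["verdict"], d["rule"])
# deny default
```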
## Scope (v0.1 vs roadmap)
This is the focused MVP — the boring parts that work everywhere. Not yet shipped, deferred deliberately:
- HTTPS termination with an auto-generated cert (`rcgen`). Today, set `HTTPS_PROXY` and let your agent's TLS happen end-to-end through the proxy.
- TUI request feed (split-pane ratatui live view).
- `--mock` for canned upstream responses (offline demos).
- `--inspect` opening a websocket + DevTools-style decision viewer.
- `--persist-to <dir>` SQLite log of every decision for replay analysis.
Wrangler shipped its first dev mode without any of these too. We'll add them as customers ask.
## Why not Docker?
Wrangler, Vercel dev, and Fly proxy all pair with a runtime stack that's hard to ship as a single binary. `checkrd dev` is one statically linked Rust binary that embeds the policy engine — no Docker, no Postgres, no Redis, no service mesh. This is the durable advantage over the bigger SaaS-CLI dev modes: you `brew install` once and you're done.
## When to use it vs. the SDK
Use `checkrd dev` when:
- Authoring a new policy and iterating on YAML.
- Debugging why a specific request is denied (log line shows the matched rule).
- Demoing the product to someone without setting up a real agent.
Use the SDK in your agent code when:
- Running for real.
- Capturing telemetry into the control plane.
- Exercising the kill switch / live policy reload from the dashboard.
The two share the same engine — dev is a wrapper that exposes it as an HTTP service.