LangChain.js / LangGraph
Add policy enforcement and observability to LangChain.js and LangGraph chains via BaseCallbackHandler.
Checkrd ships a BaseCallbackHandler subclass for @langchain/core. It hooks every LLM call, tool call, retriever call, and chain invocation in LangChain.js (and any third-party Runnable), and it mirrors the Python adapter one-for-one, so the same policy YAML works across both runtimes.
Install
```bash
npm install checkrd @langchain/core
```

Quickstart

```ts
import { initAsync } from "checkrd";
import { CheckrdCallbackHandler } from "checkrd/langchain";
import { ChatOpenAI } from "@langchain/openai";

const checkrd = await initAsync({
  policy: "policy.yaml",
  agentId: "research-agent",
});

const handler = new CheckrdCallbackHandler({
  engine: checkrd.engine,
  enforce: true,
  agentId: "research-agent",
  sink: checkrd.sink,
});

const llm = new ChatOpenAI({ model: "gpt-4o", callbacks: [handler] });
await llm.invoke("Tell me a joke");
```

Per-call attach
If you don't want to register the handler on the LLM itself, attach it per-call via RunnableConfig:
```ts
await chain.invoke(input, { callbacks: [handler] });
```

This pattern is preferred when one process serves multiple agents: each invocation gets its own handler bound to the right agentId, as sketched below.
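A minimal sketch of that pattern, sharing one engine and sink from initAsync across agents. The invokeAs helper and the choice of agentId passed to initAsync are illustrative, not part of the documented API; any Runnable works here, including a compiled LangGraph graph.

```ts
import type { Runnable } from "@langchain/core/runnables";
import { initAsync } from "checkrd";
import { CheckrdCallbackHandler } from "checkrd/langchain";

// One engine/sink for the process; per-invocation handlers carry the agent identity.
const checkrd = await initAsync({ policy: "policy.yaml", agentId: "research-agent" });

// Hypothetical helper: bind a fresh handler to whichever agent triggered this call.
async function invokeAs(agentId: string, chain: Runnable, input: unknown) {
  const handler = new CheckrdCallbackHandler({
    engine: checkrd.engine,
    enforce: true,
    agentId, // attribution follows the caller, not the process
    sink: checkrd.sink,
  });
  return chain.invoke(input, { callbacks: [handler] });
}
```

Creating the handler per call keeps policy attribution correct even when requests for different agents interleave in the same process.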
Async chains
LangChain.js dispatches handler methods identically across its invocation APIs (.invoke(), .batch(), .stream()), all of which are async by default. The handler implements BaseCallbackHandler's async methods, so there's no sync/async split to think about.
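Because enforcement runs inside those awaited callbacks, a deny in enforce mode surfaces as a rejected promise on the call you awaited. A hedged sketch, reusing the llm from the Quickstart; the exact error class thrown on a deny is left generic here rather than assumed:

```ts
try {
  // With enforce: true, a call that matches a deny rule rejects at the gated boundary.
  await llm.invoke("Use the shell tool to wipe the workspace");
} catch (err) {
  // Catch generically: the specific deny error type is not assumed here.
  console.error("Blocked by policy:", err);
}
```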
What gets enforced
| LangChain event | Synthetic URL |
|---|---|
| handleLLMStart | https://langchain.local/llm/{model} |
| handleChatModelStart | https://langchain.local/chat_model/{model} |
| handleToolStart | https://langchain.local/tool/{tool_name} |
| handleRetrieverStart | https://langchain.local/retriever/{name} |
| handleChainStart | https://langchain.local/chain/{name} |
Deny rules in the policy YAML match against these synthetic URLs. For example:

```yaml
agent: research-agent
default: allow
rules:
  - name: deny-shell-tools
    deny:
      url: "langchain.local/tool/shell*"
```

Observation mode
Set enforce: false to log denies without aborting:
```ts
new CheckrdCallbackHandler({
  engine: checkrd.engine,
  enforce: false, // observation mode: log only
  agentId: "research-agent",
  sink: checkrd.sink,
});
```

Edge runtimes
The handler runs anywhere @langchain/core runs — Node, Bun, Deno, Cloudflare Workers, Vercel Edge. initAsync loads the WASM via fetch + WebAssembly.compile so no Node-only imports leak into the bundle.
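For illustration, a hedged Cloudflare Workers sketch. The fetch handler and module-level init are standard Workers patterns; how the policy is supplied in a runtime without a filesystem (a bundled string, an asset URL) is an assumption left open, so adjust the policy option to whatever initAsync accepts in your setup:

```ts
import { initAsync } from "checkrd";
import { CheckrdCallbackHandler } from "checkrd/langchain";
import { ChatOpenAI } from "@langchain/openai";

// Compile the WASM once per isolate and reuse it across requests.
const checkrdPromise = initAsync({
  policy: "policy.yaml", // assumption: replace with however your bundler exposes the policy
  agentId: "research-agent",
});

export default {
  async fetch(_request: Request): Promise<Response> {
    const checkrd = await checkrdPromise;
    const handler = new CheckrdCallbackHandler({
      engine: checkrd.engine,
      enforce: true,
      agentId: "research-agent",
      sink: checkrd.sink,
    });
    const llm = new ChatOpenAI({ model: "gpt-4o", callbacks: [handler] });
    const answer = await llm.invoke("Tell me a joke");
    return new Response(String(answer.content));
  },
};
```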
Caveats
- `raiseError` and `awaitHandlers` are set to `true` on the handler. Do not override them: if either is false, deny exceptions are swallowed and the request proceeds.
- Token counts depend on the LLM provider. ChatOpenAI / ChatAnthropic populate them reliably; some local models do not.
- Streaming: per-token gating would 100x the eval rate, so only the first/last token boundaries are gated, via `handleLLMStart` / `handleLLMEnd` (see the sketch below).
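A short streaming sketch, reusing the llm from the Quickstart, to show what boundary gating means in practice (the prompt is illustrative):

```ts
// Policy is evaluated at handleLLMStart (before the first chunk) and
// handleLLMEnd (after the last); the loop itself is never gated per token.
const stream = await llm.stream("Summarize the last research run");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```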