Free access — up to 10k actions/month, no credit card required. Sign up free →

Runtime governance for AI agents

The governance layer for AI agents that take real actions

When your agent sends emails, moves money, or modifies data, the stakes are too high to ship without oversight. Arden gives you policy enforcement, human-in-the-loop approval, and a complete audit trail — for every action, in every session.

Policy enforcement · Guardrails · Human-in-the-loop · Audit trail
agent.py
import ardenpy as arden
arden.configure(api_key="arden_live_...")
# That's it. Your agent code is unchanged.
# Every tool call is now intercepted —
# enforced, logged, and routed for human
# approval when your policies require it.

Works with any Python AI agent framework

OpenAI Agents SDK · Anthropic Claude · LangChain · LlamaIndex · CrewAI · AutoGen

Every call, evaluated before it runs

Every tool call from every agent goes through Arden before it executes. No policy configured? It passes through automatically and is logged. Add policies in the dashboard when you're ready.

LangChain Agent · CrewAI Agent · Custom Agent → ARDEN
email.send → allow · db.delete → block · db.query → allow · stripe.issue_refund → review

Observability

Every action. Every decision. Fully visible.

Arden logs every tool call your agent makes — whether it was allowed, blocked, or sent for approval — and captures token usage from every LLM call. Full visibility from the moment you call configure(). No extra setup. No blind spots.

session · conv_8f3a2c
support-agent · live
14:32:01 · query_database · allow
query: "all customer records"
no policy configured
14:32:03 · send_email · block
to: "reports@lume-analytics.co"
domain policy · external recipient
14:33:18 · issue_refund · pending
amount: $500.00 · order: FF-4210
awaiting human approval
14:35:44 · issue_refund · allow
amount: $49.99 · order: FF-1042
refund policy · amount ≤ $60

When a tool call is pending · Slack notification fires

Arden · 2:33 PM
Awaiting approval

stripe.issue_refund requires review

Amount: $500.00
Customer: cus_abc123
Session: conv_8f3a2c
Approve · Deny

Approve from Slack

When a policy requires human review, Arden fires a Slack notification with full context. One click approves or denies — no dashboard login needed.

Audit trail from day one

Every tool call is logged automatically — even before you add a single policy. Know exactly what your agent did, when, and with what arguments.

Session replay

Tag runs with a session ID and replay every action in a conversation — invaluable for debugging misbehaving agents and answering customer complaints.

Policy coverage gaps

Actions logged as 'no policy configured' show you exactly which tools need guardrails. Build your policy coverage incrementally based on real agent behavior.

Token usage & cost governance

See what your agents cost.
Enforce limits before costs run away.

Arden automatically captures token usage from every LLM call — no instrumentation needed. Cost visibility is built in from day one, with policies to enforce budget limits before they become a problem.

Input costs more than you think

System prompt + tool schemas: 2,400 tokens
Conversation history: 1,800 tokens
Tool results injected back: 900 tokens
Model output: 580 tokens

Example: single agent turn, GPT-4o

Automatic capture

LangChain, CrewAI, and OpenAI Agents SDK are auto-patched at configure() time. Token usage captured with zero code changes.

Cost breakdown

See estimated spend per agent, broken down by model, day, and session. Spot which model or agent is driving cost — and why.

Budget enforcement

Set spend limits per session or agent. Arden can block or escalate when a cost threshold is crossed — before a runaway agent drains your budget.

Integrate in minutes

Drop Arden into your existing agent with three lines of code. Configure policies in the dashboard — no redeployment needed.

1

Install

One pip install. Use an arden_test_ key in development, arden_live_ in production.

$ pip install ardenpy
import ardenpy as arden
arden.configure(api_key="...")
2

Run your agent as-is

For LangChain, CrewAI, and OpenAI Agents SDK, configure() intercepts every tool call automatically; your agent code is unchanged.

# No changes needed to your agent
agent = create_react_agent(llm, tools)
3

Set policies in the dashboard

Configure rules per tool — conditions, thresholds, human approval requirements. Changes take effect immediately without touching your code.

allow · stripe.refund < $50
review · stripe.refund ≥ $50
block · db.delete_*

Works with your existing stack

Native integrations for LangChain, CrewAI, and the OpenAI Agents SDK. Wrap all your tools — Arden enforces only the ones you configure policies for.

agent.py
$ pip install ardenpy
import ardenpy as arden
arden.configure(api_key="arden_live_...")
# Wrap your tools — protect every call
safe_refund = arden.guard_tool("stripe.refund", issue_refund)
safe_email  = arden.guard_tool("comms.send_email", send_email)
safe_delete = arden.guard_tool("db.delete_record", delete_record)
# Agent calls them normally — Arden intercepts first
result = safe_refund(150.0, customer_id="cus_abc")
# → allow · block · or held for human approval

Works with any custom agent — no framework dependency. Wrap each function individually with guard_tool().

See it in action

Built for agents that take real actions

Any agent that can affect the real world benefits from runtime guardrails and human-in-the-loop approval.

Customer support agents

Prevent agents from accessing sensitive PII or making unauthorized account changes without human sign-off.

Finance & payment agents

Block transactions above thresholds. Route high-value operations to a human reviewer before anything executes.

Sales & outreach agents

Enforce rules on pricing and messaging — quotes outside approved ranges are blocked or escalated automatically.

Internal copilots

Guard against accidental data deletion, schema mutations, or unauthorized infrastructure changes from internal AI tools.

Frequently asked questions

Can't find the answer you're looking for? Reach out to the team.

What Python AI frameworks does Arden support?
Arden works with any Python AI agent framework. LangChain, CrewAI, and the OpenAI Agents SDK are auto-patched at configure() time — every tool call is intercepted with no code changes to your agent. For plain Python agents without a framework, use guard_tool() to wrap individual functions.
Do I need to pick which tools to protect?
No. For LangChain, CrewAI, and OpenAI Agents SDK, configure() intercepts every tool call automatically — you don't choose which tools to protect. Tools with no policy configured pass through and are logged. You add policies in the dashboard when you want to enforce rules or require human approval, giving you a full audit trail from day one.
How does policy configuration work?
Policies are configured per tool name in the Arden dashboard (app.arden.sh). You can set rules like 'always allow', 'always block', or 'require human approval when amount > 100'. No code changes are needed when you update a policy — changes take effect immediately.
Does Arden support human-in-the-loop (HITL) approval?
Yes — human-in-the-loop approval is a core feature. When a policy marks a tool call as requiring review, Arden pauses execution and routes it to a reviewer in the dashboard. The action only proceeds if a human approves it. You can wait synchronously, poll asynchronously, or receive a webhook when a decision is made.
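A minimal sketch of the synchronous-wait mode, using the guard_tool() API shown elsewhere on this page. issue_refund is a placeholder for your own function; the blocking behavior is an assumption based on the description above, not a verified SDK contract:

```python
import ardenpy as arden

arden.configure(api_key="arden_live_...")

# issue_refund is your own function (placeholder here).
safe_refund = arden.guard_tool("stripe.issue_refund", issue_refund)

# Synchronous wait: if a policy marks this call for review, the call
# pauses here until a human approves or denies (dashboard or Slack).
result = safe_refund(500.00, order="FF-4210")
```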
How much latency does Arden add?
Each policy check is a lightweight HTTP request to Arden's API. For most deployments this adds under 100ms. For tool calls that are immediately allowed, the overhead is minimal. Human-approval flows pause until a reviewer acts — latency there depends on your team's response time.
What happens if the Arden API is unavailable?
By default, guard_tool() raises an ArdenError if the policy engine cannot be reached. Your agent will not execute guarded tools blindly. You can configure retry_attempts and timeout in configure() to tune this behavior.
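A sketch of handling that failure mode. ArdenError, retry_attempts, and timeout are the names given in this answer; issue_refund and queue_for_retry are illustrative placeholders for your own code:

```python
import ardenpy as arden
from ardenpy import ArdenError

# Tune how hard the SDK tries before failing closed.
arden.configure(api_key="arden_live_...", retry_attempts=3, timeout=5)

safe_refund = arden.guard_tool("stripe.refund", issue_refund)

try:
    result = safe_refund(49.99, customer_id="cus_abc")
except ArdenError:
    # Policy engine unreachable: the guarded tool never ran.
    # Fail closed, e.g. queue the action instead of executing blindly.
    queue_for_retry("stripe.refund", amount=49.99, customer_id="cus_abc")
```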
Is the SDK open source?
Yes — ardenpy is open source on GitHub. You can inspect exactly what the SDK sends to the policy engine and how decisions are applied. The policy engine itself runs on Arden's infrastructure.
What kinds of AI actions should require human approval?
Any action that is high-value, irreversible, or touches sensitive data: processing refunds above a threshold, sending external emails or messages, deleting records, modifying production infrastructure, or making API calls to third-party services. Arden lets you configure human-in-the-loop approval per tool with optional conditions.
What is the difference between LLM guardrails and AI agent guardrails?
LLM guardrails typically filter model inputs and outputs — preventing harmful text generation. AI agent guardrails go further: they control what actions an agent can actually execute in the real world, like API calls, database writes, or financial transactions. Arden focuses on agentic guardrails — intercepting tool calls before they run, not just filtering model outputs.
How does Arden track token usage and LLM costs?
Arden automatically captures token usage from every LLM call — no instrumentation needed. For LangChain, CrewAI, and the OpenAI Agents SDK, configure() patches the framework at startup and records prompt tokens, completion tokens, and estimated cost for every model call in the background. The dashboard shows cost broken down by model, day, and session.
Which AI frameworks have automatic token usage tracking?
LangChain, CrewAI, and the OpenAI Agents SDK are auto-instrumented — token usage is captured with zero code changes. For any other framework or custom agent loop, call arden.log_token_usage(model, prompt_tokens, completion_tokens) after each LLM call to log it manually.
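A sketch of the manual path for a custom agent loop, using the log_token_usage signature given above. my_llm_call and the shape of its usage object are placeholders for whatever client you use:

```python
import ardenpy as arden

arden.configure(api_key="arden_live_...")

# Custom loop: call your model however you like (placeholder client)...
response = my_llm_call(prompt)

# ...then report usage manually, matching the documented signature:
# log_token_usage(model, prompt_tokens, completion_tokens)
arden.log_token_usage(
    "gpt-4o",
    response.usage.prompt_tokens,
    response.usage.completion_tokens,
)
```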
Can I set budget limits on my AI agents?
Yes. You can configure spend limits per session or per agent in the dashboard. When a cost threshold is crossed, Arden can block further tool calls or escalate to a human reviewer — preventing a runaway agent from exhausting your budget before you notice.
How do I track LLM costs by session or conversation?
Call arden.set_session(session_id) once at the start of each conversation or request. Every tool call and token usage record made in that context is tagged with the session ID. The dashboard lets you drill into individual sessions to see exactly which tools ran, which model calls were made, and what they cost.
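The session-tagging flow described above, as a short sketch; run_support_agent stands in for your own agent entrypoint:

```python
import ardenpy as arden

arden.configure(api_key="arden_live_...")

# One session per conversation: every tool call and token usage
# record after this point is tagged with the session ID.
arden.set_session("conv_8f3a2c")

run_support_agent(user_message)  # placeholder for your agent loop
```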
Can I get Slack notifications when my AI agent takes an action?
Yes. Arden sends a Slack message whenever a tool call requires human approval — including the tool name, arguments, agent, and session ID. You configure a channel ID in the dashboard. No Slack app install required.
Can I approve or deny AI agent actions from Slack?
Yes. Each Slack notification includes Approve and Deny buttons. Clicking one resolves the pending action immediately — the agent either resumes or stops. You never need to open the Arden dashboard to handle approvals.

Secure your AI agents before they go to production

Start with full visibility. Add enforcement when you're ready. No redeployment needed — ever.