Control what your AI can execute.
Runtime Guard monitors tool-use (shell, files, network) and enforces policy before actions execute or data leaves your device.
- Local-first monitoring
- Policy-based allow / block / approve
- Audit trail for every action
Validation phase: demo today, real enforcement shipping in v0.2.
Built for
Teams and builders running tool-using agents locally.
Builders running local agents
You run LLMs and agents locally. You need to know what they're doing with your files and network.
Crypto and automation power users
Your machine holds wallets, keys, and credentials. One rogue tool call is all it takes.
Teams shipping agentic products
You're building products with tool-using agents. You need runtime guardrails and audit logs.
What we block
Prompt injection escalating to command execution
Crafted inputs that trick agents into running shell commands or scripts.
Secrets access (SSH keys, cookies, wallets)
Unauthorized reads of credential files, browser stores, and key material.
Suspicious outbound destinations
Network requests to untrusted or unrecognized domains and IPs.
Persistence attempts (startup, cron)
Writes to startup folders, cron jobs, or other boot-time execution paths.
Mass file reads / unusual traversal
Rapid enumeration of directories or reading files outside expected scope.
Self-spawning loops / runaway tooling
Agents that recursively spawn processes or enter unbounded execution.
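As one illustration, the "mass file reads" pattern above can be caught with a simple sliding-window rate heuristic. This is a sketch, not Runtime Guard's actual detector; the class name, threshold, and window are assumptions for the example.

```python
import time
from collections import deque


class FileReadMonitor:
    """Flag an agent that reads more than `limit` files within `window` seconds.

    Illustrative only: a real monitor would also weigh *which* paths
    are read, not just how many.
    """

    def __init__(self, limit: int = 50, window: float = 10.0):
        self.limit = limit
        self.window = window
        self.reads: deque = deque()  # timestamps of recent reads

    def record_read(self, path: str, now: float = None) -> bool:
        """Record one file read; return True if the rate limit was exceeded."""
        now = time.monotonic() if now is None else now
        self.reads.append(now)
        # Drop reads that fell out of the sliding window.
        while self.reads and now - self.reads[0] > self.window:
            self.reads.popleft()
        return len(self.reads) > self.limit
```

A tripped monitor would feed into the same allow/approve/block decision as any other signal.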
How it works
Observe
Capture agent actions: commands, file reads/writes, outbound requests.
Score
Assign risk based on context and policy — injection, escalation, exfiltration patterns.
Control
Block, require approval, or allow. Log everything.
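The three steps above can be sketched as a small pipeline. This is a hypothetical illustration, not Runtime Guard's API; the names (`AgentAction`, `Verdict`, `score_action`), the sensitive-path list, and the thresholds are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    APPROVE = "approve"  # pause and ask the user first
    BLOCK = "block"


@dataclass
class AgentAction:
    kind: str    # "shell" | "file_read" | "file_write" | "network"
    target: str  # command line, file path, or URL


# Illustrative markers of credential material (assumption).
SENSITIVE_PATHS = ("/.ssh/", "/.aws/", "wallet", "cookies")


def score_action(action: AgentAction) -> int:
    """Score: assign a rough risk value from context (0 = benign)."""
    risk = 0
    if action.kind == "shell":
        risk += 5
    if action.kind == "network":
        risk += 3
    if any(s in action.target for s in SENSITIVE_PATHS):
        risk += 5
    return risk


def control(action: AgentAction,
            approve_threshold: int = 4,
            block_threshold: int = 8) -> Verdict:
    """Control: map the risk score onto allow / approve / block, and log it."""
    risk = score_action(action)
    if risk >= block_threshold:
        verdict = Verdict.BLOCK
    elif risk >= approve_threshold:
        verdict = Verdict.APPROVE
    else:
        verdict = Verdict.ALLOW
    # Every decision lands in the audit trail.
    print(f"[audit] {action.kind} {action.target!r} risk={risk} -> {verdict.value}")
    return verdict
```

Here "observe" is whatever feeds `AgentAction` objects in; the key design point is that scoring and control are separate, so policy can change without touching the capture layer.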
No marketing claims. We show logs, policies, and evidence.
See it in action
Run a demo scan to see how Runtime Guard monitors and controls agent behavior.
Frequently asked questions
What is Runtime Guard?
Runtime Guard monitors what AI agents do — shell commands, file access, network calls — and enforces security policies before actions execute. Think of it as a firewall for agent tool-use.
Does it upload my code or data?
No. Runtime Guard runs locally. It inspects agent actions on your machine and never sends file contents or source code to external servers.
What's the difference between Balanced and Strict mode?
Balanced mode blocks clearly dangerous actions but allows most normal operations. Strict mode requires approval for anything that touches sensitive files, network, or system commands.
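The difference between the two modes could be expressed as two policy profiles over the same rule set. This is a hypothetical sketch; the rule names and profile format are illustrative, not Runtime Guard's actual configuration.

```python
# Hypothetical policy profiles; rule names are illustrative.
BALANCED = {
    "block": ["persistence_write", "secrets_exfiltration"],
    "approve": [],  # most normal operations pass straight through
    "default": "allow",
}

STRICT = {
    "block": ["persistence_write", "secrets_exfiltration"],
    # Anything touching sensitive files, network, or system commands
    # must be approved by the user.
    "approve": ["sensitive_file_access", "network_request", "shell_command"],
    "default": "allow",
}


def decide(profile: dict, rule: str) -> str:
    """Return 'block', 'approve', or the profile's default for a matched rule."""
    if rule in profile["block"]:
        return "block"
    if rule in profile["approve"]:
        return "approve"
    return profile["default"]
```

Note that both profiles block the same clearly dangerous rules; Strict only widens the set that needs approval.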
Which AI agents does it work with?
Runtime Guard is agent-agnostic. It works with any tool-using agent that executes shell commands, writes files, or makes network requests — regardless of the LLM provider.
Is this ready for production?
We're currently in v0.1 demo phase. The demo scan shows how policy enforcement works. Real-time enforcement ships in v0.2. Join the waitlist for early access.
Is it free?
The demo is free. We'll offer a free tier for individual developers when v0.2 launches, with paid plans for teams and enterprise. See pricing for details.