Local-First AI Security: What It Means
When your AI agent reads files, runs commands, and makes network requests on your machine, sending all that activity to a cloud service for analysis creates a new problem: you're now sharing sensitive operational data with a third party.
The local-first principle
Local-first means:
- Scans run on your device. Policy evaluation happens where the agent operates.
- No file contents leave your machine. We analyze behavior patterns, not your data.
- You control what's logged. Audit logs stay local unless you explicitly export them.
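To make "behavior patterns, not your data" concrete, here is a minimal sketch of what on-device policy evaluation could look like. All names (`AgentAction`, `evaluate`, the deny patterns) are illustrative assumptions, not Runtime Guard's actual API; the point is that the block/allow decision uses only action metadata and never reads file contents.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    kind: str    # behavior type, e.g. "file_read", "exec", "net_request"
    target: str  # path, command, or URL the agent is touching

# Toy local rules keyed on behavior, evaluated entirely on-device.
DENY_PATTERNS = {
    "file_read": ("/.ssh/", "/etc/shadow"),
    "net_request": ("169.254.169.254",),  # cloud metadata endpoint
}

def evaluate(action: AgentAction) -> str:
    """Return 'block' or 'allow' using action metadata alone."""
    for pattern in DENY_PATTERNS.get(action.kind, ()):
        if pattern in action.target:
            return "block"
    return "allow"
```

Nothing here inspects or transmits what is inside a file; the decision hinges on what the agent is doing and where.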
Why this matters for security tools
A security product that requires uploading your agent's activity to a remote server is asking you to trust that server with exactly the data you're trying to protect. That's a hard tradeoff.
With local-first:
- SSH keys, API tokens, and credentials never leave your device for analysis
- Your agent's tool-use history stays under your control
- Latency is minimal: blocking happens in real time, in-process, not after a network round trip
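The real-time claim follows from evaluation being in-process. A hypothetical sketch, assuming tools are plain Python callables: the guard wraps each tool and decides synchronously before it runs, so the only cost is a local function call. `guard`, `check`, and `read_file` are invented for illustration.

```python
def guard(check):
    """Wrap a tool so every call is checked in-process before it runs."""
    def wrap(tool):
        def inner(target):
            # Decision happens here, on-device, before the tool executes.
            if check(tool.__name__, target) == "block":
                raise PermissionError(f"blocked: {tool.__name__} {target!r}")
            return tool(target)
        return inner
    return wrap

def check(kind, target):
    # Toy policy: deny anything touching an SSH directory.
    return "block" if "/.ssh/" in target else "allow"

@guard(check)
def read_file(path):
    with open(path) as f:
        return f.read()
```

Because the check runs before the tool body, a blocked action never executes at all; a cloud round trip could only reject it after the fact.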
What gets stored remotely
If you opt in to cloud features (planned for v1.0+):
- Event metadata: timestamps, risk scores, action types
- Policy configurations
- Account and billing information
File contents, credentials, and sensitive data flagged by your policies are never uploaded.
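One way to enforce that guarantee is an allowlist filter on the upload path, sketched below under the assumption that events are plain dicts. Only fields matching the metadata named above (timestamps, risk scores, action types) survive; targets, contents, and anything unexpected are dropped. Field names are illustrative.

```python
# Fields that may leave the device if cloud features are enabled.
ALLOWED_FIELDS = {"ts", "risk_score", "action_type"}

def redact_for_upload(event: dict) -> dict:
    """Keep only allowlisted metadata; everything else stays local."""
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
```

An allowlist fails closed: a new field added to local events is excluded from uploads by default, rather than leaking until someone remembers to blocklist it.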
The tradeoff
Local-first means some advanced features (like cross-device anomaly detection) require more engineering work. We think that's worth it. Security tools should reduce your attack surface, not expand it.
Learn more about our security approach or join the waitlist.
Try Runtime Guard
See runtime security in action or request early access.