Blog
The Builder's Guide to Agent Security is a 12-post series covering the threats AI agents face, why traditional security fails for autonomous systems, and the concrete steps builders can take to lock them down — from zero-trust architecture and defense in depth to operational readiness and honest lessons learned.
Start with Post 1 for the full arc, or jump to any topic below.
Getting Started with GuardClaw
A step-by-step walkthrough of setting up GuardClaw, your first security layer for AI agents. From install to your first security report in five minutes.
Live now
Watching Your Agents Work
GuardClaw's supervised execution wraps any agent command, intercepts threats in real time, and builds a tamper-evident audit trail. Here's what that looks like, step by step.
Live now
What Happens When Agents Outnumber People?
Machine identities outnumber humans 25-50x in most enterprises. AI agents will widen the gap. Governance frameworks built for human-majority organizations are expiring.
Live now
Your Security Dashboard
The GuardClaw dashboard shows threat stats, audit trails, and compliance alignment in one place. Here's how to read it and what the numbers mean.
Live now
GuardClaw and GDPR: What Maps Where
When your AI agent processes personal data, GDPR applies. Here's how GuardClaw's controls map to the requirements that matter most.
Live now
GuardClaw and SOC 2: A Control Mapping
A practical guide to mapping GuardClaw's security controls to SOC 2 Trust Services Criteria. Which controls GuardClaw satisfies and what evidence to show your auditor.
Live now
GuardClaw and the EU AI Act
The EU AI Act's enforcement provisions take effect August 2026. Here's what applies to AI agent deployments and how GuardClaw's controls map to the requirements.
Live now
How GuardClaw Is Different
There are other approaches to AI agent security. Here's where GuardClaw fits, what trade-offs we made, and why we made them.
Live now
Rolling Out GuardClaw Across a Team
How to deploy GuardClaw for a development team — shared workspaces, consistent policies, and a single dashboard for everyone's agent activity.
Live now
Setting Up Alerts and Monitoring
How to get notified when GuardClaw catches something important — without watching the dashboard all day.
Live now
Setting Up GuardClaw for Claude Code
A step-by-step guide to integrating GuardClaw with Claude Code using hooks. Every tool call gets checked before execution.
Live now
Setting Up GuardClaw for Cursor
How to add GuardClaw's security layer to Cursor's AI agent. Same protection, different integration path.
Live now
The Detection Engine: How It Works
GuardClaw checks 1,000+ patterns in under a millisecond. Here's the tiered architecture that makes that possible — Bloom filters, Aho-Corasick, RE2 regex, and anomaly detection.
Live now
What the Receipt Chain Proves
GuardClaw's receipt chain is a tamper-evident audit trail for everything your AI agents do. Here's how it works, what it proves, and why auditors care.
Live now
What to Do When GuardClaw Blocks Something
Your agent hit a denial. Is it a real threat or a false positive? Here's how to read the denial, investigate, and decide what to do next.
Live now
Writing Your First Security Policy
GuardClaw policies define what your agent can and can't do. Here's how to write one, what the defaults mean, and how to adjust them without breaking your workflow.
Live now
What New Hires and AI Agents Have in Common
Your company has an onboarding process for people. It probably doesn't have one for agents. The same trust-building patterns apply to both.
Live now
The Friday Agent Permission Audit [Checklist]
A 90-minute permission audit you can run before the weekend. Nine checks, one agent at a time, measurable results by Monday.
Live now
Three Layers of Agent Permission Scoping
Agent permissions need three layers: identity (who is this?), scope (what can it access?), and context (should it access this right now?). Here's how to build them.
Live now
Least Privilege Wasn't Built for Agents
The principle of least privilege assumes a human on the other end. When the user is an agent making 10,000 decisions per hour, the implementation has to change.
Live now
Why Your Agent Has More Access Than You
70% of security leaders say AI agents have more system access than humans in the same role. Here's how the defaults got it backwards.
Live now
4.5x More Incidents Start with One Setting
Teleport's 2026 research found that over-privileged AI agents experience 4.5x more security incidents. One default setting explains most of the gap.
Live now
NIST Wants Agents Governed Like Employees [2026]
NIST's AI Agent Standards Initiative signals a future where agents need identity, accountability, and lifecycle management — just like the people who build them.
Live now
We Trust Systems We Can't Inspect Every Day
From plumbing to power grids to AI agents — humans routinely trust invisible infrastructure. That trust works until it doesn't.
Live now
Agent Supply Chain Security in 5 Steps [2026]
A five-step checklist for securing your AI agent's supply chain — from skill vetting to dependency pinning to runtime monitoring.
Live now
Audit Your Agent's Trust Boundaries This Week
A practical guide to mapping and testing every trust assumption your AI agents make — from network access to credential scope to tool permissions.
Live now
70% of Enterprises Can't See Their Own Agents
Nearly 70% of enterprises run AI agents in production. Most can't tell you how many they have, what they access, or who owns them. That's identity dark matter.
Live now
820 Malicious Agent Skills and Nobody Noticed
Koi Security found 820+ malicious skills on ClawHub, up from 324 just weeks earlier. Agent marketplaces are the new attack vector builders aren't watching.
Live now
One Localhost Assumption Gave Hackers Full Control
The OpenClaw ClawJacked vulnerability shows how a single implicit trust assumption in an AI agent framework let any website take over a developer's machine.
Live now
5 Things Due Before August 2 [EU AI Act Checklist]
The EU AI Act high-risk deadline hits August 2. Five compliance actions you can start this week, with a printable checklist.
Live now
Microsoft Found a New Way to Poison AI Recommendations
Microsoft discovered that summarize buttons can be weaponized. Recommendation poisoning is the supply chain attack nobody planned for.
Live now
NIST Wants to Know How You Secure Your Agents [RFI Breakdown]
The NIST AI Agent Standards RFI just closed. Here's what it asked, what it signals, and what to prepare before April.
Live now
One Firebase Misconfig Leaked 300M Chat Messages
An AI chat app with 50M users left a Firebase database open. A researcher found 300 million messages from 25 million people.
Live now
Prompt Injection Just Got Classified as Malware
Researchers want prompt injection reclassified as malware. A $40K bounty from UK AISI, OpenAI, and Anthropic is putting that argument to the test.
Live now
How Fast Can an Attacker Hijack Your Agent?
CrowdStrike says attack timelines are under 72 minutes. Your agent verification loop probably takes longer than that.
Live now
88% of AI Agents Shipped Without Security Sign-Off
Gravitee's 2026 data: only 14% of orgs got full security approval before deploying agents. Here's what the rest have in common.
Live now
The Builder's Responsibility
Medieval cathedral builders laid foundations for structures they'd never see completed. We're in a cathedral-building moment for AI. The decisions made today about agent safety will shape autonomous systems for decades.
Live now
What We Got Wrong (And Changed)
This is the post companies don't write. We're writing it anyway because showing the work, including the wrong turns, builds more trust than pretending we got everything right the first time.
Live now
Score Yourself: The Operator Readiness Assessment
In video games you can see your stats. In agent security, most teams have no idea where they stand. A 15-minute self-assessment across five dimensions tells you exactly what to fix next.
Live now
The 30-Day Agent Security Checklist
No philosophy. No metaphors. Just the steps. Four weeks to go from 'we should probably secure our agents' to 'we have a tested, documented security posture.' Start Week 1 today.
Live now
Why We Don't Use AI to Make Security Decisions
We're an AI security company that doesn't use AI for deny/allow decisions. Probabilistic models are incredible for detection and triage. They are unreliable for enforcement. Here's why that distinction matters.
Live now
Seven Layers of Defense (And Why You Need All of Them)
Most agent security uses one or two layers: input filtering and maybe an output check. That's a bouncer at the front door and no one watching anything else. Here's what defense in depth actually looks like.
Live now
Security Is a Primitive, Not a Feature
You don't ship a database and add data persistence later. Security is load-bearing architecture that gets exponentially more expensive to retrofit. Three primitives every agent system needs before first deploy.
Live now
Build Like You'll Get It Wrong
The best engineering teams don't plan for success. They plan for failure and design recovery into every system. Resilience beats perfection in production, in careers, and in life.
Live now
Zero Trust Was Built for Humans. Your Agents Aren't Human.
Zero trust principles still hold for AI agents, but the implementation needs a complete rethink. Agents operate in milliseconds, chain tools autonomously, and make decisions that weren't explicitly requested.
Live now
Everyone's Worried About Prompt Injection. That's the Easy Problem.
Prompt injection gets the headlines, but six other AI agent attack vectors cause more damage and get less defense investment. Mapping your full attack surface takes 30 minutes and changes how you think about security.
Live now
The Identity Problem (Yours and Your Agent's)
Non-human identities vastly outnumber human users in enterprise environments, yet most organizations manage agent credentials with the same rigor they'd give a shared Netflix password.
Live now
Why We Built GuardClaw
AI agents moved from demos to operators. The threat model changed faster than most teams' defenses.
Live now
Your AI Agent Has No Seatbelt
AI agents are shipping into production faster than safety standards can keep up. Teams deploying autonomous agents need runtime security controls before the first serious incident forces regulation on everyone.
Live now