Field Guide
We Trust Systems We Can't Inspect Every Day
From plumbing to power grids to AI agents — humans routinely trust invisible infrastructure. That trust works until it doesn't.
Key takeaways
- Humans trust invisible infrastructure constantly — water, electricity, bridges, cloud providers — and that trust is usually justified.
- The failure mode of invisible trust is catastrophic and slow to detect: damage accumulates before anyone notices.
- For every invisible system your agents depend on, answer one question: what would I notice if this failed silently?
You drank water this morning without testing it. You drove over a bridge without checking last year’s inspection report. You’re running AI agents right now whose internal state you haven’t verified since you deployed them.
Humans trust invisible infrastructure constantly, and that trust usually works. The failure mode is specific and dangerous: damage accumulates in silence before anyone notices. For every invisible system your agents depend on, you need an answer to one question: what would I notice if this failed silently?
This is fine. Most of the time.
The Invisible Compact
We live inside a web of systems we can’t inspect. Not won’t. Can’t. The municipal water line running under your street? You’ve never seen it. The structural welds holding up the highway overpass? You can’t see those either. The open-source package your application depends on, updated three versions ago by a stranger in Singapore? Same deal.
This works because someone else is doing the inspection. Water utilities have labs and protocols. Bridge engineers have inspection schedules. Package managers have security scanning now, sort of. We’ve outsourced the verification to institutions and communities we generally trust.
The problem is that invisible trust has a specific failure mode: the damage happens in silence.
When Invisibility Breaks
In 2014, the water system in Flint, Michigan switched sources to save money. The new source was corrosive, and the utility didn't add corrosion-control treatment. The protective coating inside aging pipes dissolved, and lead leached into the water supply.
Here’s the thing: people kept drinking the water for months before anyone noticed. The damage was happening invisibly. A mother didn’t realize her child’s health was deteriorating because of a failed water treatment system. The system was invisible. The failure was invisible. The harm was invisible until the numbers became undeniable.
On August 1, 2007, the I-35W bridge in Minneapolis collapsed during rush hour. Thirteen people died. The bridge had been structurally deficient for years. Inspectors kept finding problems. The system for acting on those inspections was broken. The bridge was checked by humans, regularly, but the inspection data wasn’t turning into action. The damage was documented and invisible at the same time.
In March 2016, a developer named Azer Koçulu unpublished a small npm package called left-pad. It was 11 lines of code. It had been downloaded millions of times in the month before, because thousands of other packages depended on it. Suddenly, builds started failing across the JavaScript ecosystem. Nobody had noticed how critical this invisible dependency was until it vanished.
These aren’t edge cases. They’re patterns.
The Bridge to Your Systems
Now think about your AI agents.
They depend on invisible infrastructure: LLM APIs you didn’t train and can’t inspect. Model behavior that changed with the last version update. Vector databases storing embeddings you’ve never examined. Cloud services handling traffic patterns you don’t monitor. Prompt injection vulnerabilities in dependencies you didn’t write.
You built something smart that sits on top of something opaque that depends on something else that nobody fully understands.
The trust is warranted, mostly. These systems work. But the failure mode is the same one that hit Flint and the I-35W bridge. Damage accumulates in silence. An agent makes a wrong decision. It happens again. The problem compounds. By the time you notice, the system has been producing bad outputs for weeks.
The water system in Flint worked fine until it didn’t. The bridge in Minneapolis was “safe enough” until the moment it wasn’t. Left-pad was stable until it was gone.
Invisible systems don’t fail loudly. They degrade in silence and collapse all at once, and we notice only after the damage is catastrophic.
What You Can Actually Do
You can’t inspect everything. That’s not the point. The point is knowing what you’re willing to leave uninspected and having a way to notice when it breaks.
For every system your agents depend on — every API, every model, every framework, every data source — answer one question: What would I notice if this system failed silently?
If the answer is “I don’t know” or “probably nothing for a while,” you need more visibility.
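One way to make that audit concrete is to keep the inventory as data and flag the blind spots mechanically. Here's a minimal sketch in Python — the dependency names and signals are hypothetical placeholders for your own stack:

```python
# Hand-maintained inventory: each invisible dependency maps to the signal
# that would break its silence. All names here are illustrative.
DEPENDENCIES = {
    "llm-api": "weekly canary prompt with fixed expected behavior",
    "vector-db": "nightly recall check against a known query set",
    "web-search-tool": None,  # no signal defined yet -- that's the finding
}

def audit(deps):
    """Flag every dependency with no answer to 'what would I notice?'"""
    for name, signal in deps.items():
        if signal is None:
            print(f"[BLIND SPOT] {name}: silent failure would go unnoticed")
        else:
            print(f"[OK] {name}: {signal}")

if __name__ == "__main__":
    audit(DEPENDENCIES)
```

The script itself is trivial on purpose. The value is in being forced to fill in the right-hand side for every entry.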
Not everything needs constant inspection. But silence is dangerous. You need a signal. A log you check. A metric that moves. A test you run weekly. A way to know that the invisible infrastructure is still doing what you thought it was doing.
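A test you run on a schedule is the simplest such signal. Here's a sketch of a canary check: a fixed prompt with known-good expectations. `call_model` is a stand-in for whatever client, model, and parameters your production agents actually use, and the invariants are examples, not a standard:

```python
import sys
from typing import Callable

def canary(call_model: Callable[[str], str]) -> bool:
    """Fixed input, known expectations. A failure here is the silence breaking."""
    reply = call_model("Reply with exactly the word OK and nothing else.")
    # Two illustrative invariants: behavior hasn't drifted, no sudden verbosity.
    return reply.strip() == "OK" and len(reply) < 20

if __name__ == "__main__":
    # Wire in your real client here; the lambda is a stub so the sketch
    # runs end to end.
    passed = canary(lambda prompt: "OK")
    sys.exit(0 if passed else 1)  # a nonzero exit is what your scheduler alerts on
```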
Flint had water quality tests. They just weren’t being communicated to public health officials who could act. The Minneapolis bridge had inspections. They just weren’t being routed to decision-makers. Left-pad had version control. Nobody was watching what packages depended on it.
The infrastructure itself wasn’t the problem. The broken link between observation and action was.
For your agents: What breaks the silence? What metric means “this is still working as expected”? What test catches it when something shifts? What triggers your attention?
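A metric that moves can be equally simple: a rolling success rate over recent agent runs, with an alert when it drops out of its normal band. A sketch — the window size and floor below are arbitrary starting points, not recommendations:

```python
from collections import deque

class SilenceBreaker:
    """Rolling success rate over the last N agent runs. If the rate falls
    below the floor, something invisible has shifted."""

    def __init__(self, window=200, floor=0.95):
        self.outcomes = deque(maxlen=window)
        self.floor = floor

    def rate(self):
        return sum(self.outcomes) / len(self.outcomes)

    def record(self, succeeded):
        self.outcomes.append(succeeded)
        if len(self.outcomes) == self.outcomes.maxlen and self.rate() < self.floor:
            # Swap print for your real alerting channel.
            print(f"ALERT: success rate {self.rate():.0%} below floor {self.floor:.0%}")

if __name__ == "__main__":
    monitor = SilenceBreaker(window=10, floor=0.8)
    for ok in [True] * 8 + [False] * 4:  # simulated run outcomes
        monitor.record(ok)
```

What counts as a "successful" run is yours to define — a tool call that didn't error, an output that passed validation. The point is that something is watching the trend.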
Write it down. Set it up. Check it.
The hard part is staying alert to what you can't see while the system runs.
Tomorrow: NIST just told the industry that agents need governance the same way employees do. Onboarding, credentialing, monitoring, review, offboarding. The AI Agent Standards Initiative launched in February, and the implications for builders are concrete.
Join the Intelligence Brief
Threat intelligence, agentic vulnerabilities, and engineering frameworks delivered straight to your inbox.