
Lessons From the Trenches: The Coverage Assurance Gap

Visibility doesn’t equal coverage. We spend months stitching together tools to measure detection, only to find the data outdated the moment it’s finished. This post explores why static detection models fail in a dynamic world, the hidden complexity of coverage drift, and how AI reasoning engines are finally enabling continuous validation.

January 22, 2026
12 min read
Dylan Williams

Measuring coverage is hard not because organizations lack tools, talent, or process, but because detection is still treated as a static outcome in a fundamentally dynamic system. CDR, EDR, DLP, and SIEM deliver vast visibility, yet visibility alone does not equal coverage. 

Threats evolve daily, environments shift continuously, and telemetry changes quietly as products, schemas, and configurations update. Attackers now use AI to continuously modify their techniques. Even the most skilled analysts are forced to rely on an outdated model and assumptions that age faster than they can be reviewed.

The Reality of Detection Engineering

As a detection engineer, I was expected to answer a simple question: Are we covered? 

In practice, I could never provide a good answer, at least not with confidence. There was no single way to measure detection coverage, so I tried to build something that would help me measure it.

I stitched together everything I could: breach and attack simulation (BAS) tools, detection content libraries, SIEM data, purple team tracking spreadsheets. What followed was months of work. Multiple practitioners collaborated to enumerate use cases, align with stakeholders, threat-model the environment, debate priorities, and finally test and implement detections. Every step felt necessary. Every step was slow.
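To make that stitching concrete, here is a minimal sketch of the kind of glue code this amounts to: folding rule exports, BAS results, and purple-team spreadsheet rows into a single technique-keyed view. The source shapes, field names, and technique IDs below are illustrative assumptions, not any specific product's API.

```python
# Hypothetical sketch: merge fragmented sources into one coverage view,
# keyed by MITRE ATT&CK technique ID. All field names are illustrative.
from dataclasses import dataclass, field


@dataclass
class TechniqueCoverage:
    technique_id: str
    detections: set = field(default_factory=set)  # rules claiming to cover it
    validated: bool = False                       # did any test actually fire?


def build_coverage_view(rule_exports, bas_results, purple_team_rows):
    """Fold three disconnected sources into a single technique-keyed map."""
    coverage = {}

    # 1. Detection content: which rules claim to map to which techniques.
    for rule in rule_exports:  # e.g. [{"name": ..., "techniques": [...]}]
        for tid in rule.get("techniques", []):
            entry = coverage.setdefault(tid, TechniqueCoverage(tid))
            entry.detections.add(rule["name"])

    # 2. BAS results: which simulated techniques actually produced an alert.
    for result in bas_results:  # e.g. [{"technique": ..., "alerted": bool}]
        tid = result["technique"]
        entry = coverage.setdefault(tid, TechniqueCoverage(tid))
        entry.validated = entry.validated or result.get("alerted", False)

    # 3. Purple-team spreadsheet rows: manual test outcomes, same idea.
    for row in purple_team_rows:  # e.g. [{"technique": ..., "detected": "yes"/"no"}]
        tid = row["technique"]
        entry = coverage.setdefault(tid, TechniqueCoverage(tid))
        entry.validated = entry.validated or row.get("detected") == "yes"

    return coverage


if __name__ == "__main__":
    view = build_coverage_view(
        rule_exports=[{"name": "susp_lsass_access", "techniques": ["T1003.001"]}],
        bas_results=[{"technique": "T1003.001", "alerted": False}],
        purple_team_rows=[{"technique": "T1059.001", "detected": "no"}],
    )
    for tid, entry in view.items():
        print(tid, "rules:", len(entry.detections), "validated:", entry.validated)
```

Even this toy version shows the trap: the output is only as fresh as its slowest input, and it captures nothing that changed after the snapshot was taken.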

Months of Work, Zero Coverage Confidence

By the time we produced something that resembled a coverage view, it was already outdated. The enterprise had changed: new cloud services had been added, new identities and data flows had been created, and new threats had emerged. The metrics we worked so hard to produce were inherently lagging, disconnected from the reality they were meant to describe.

Worse, the effort wasn’t repeatable. Measuring coverage meant restarting the same process again and again, with the same costs, the same effort, and the same delays. I wasn’t lacking tools or talent. I was lacking a system designed to continuously track our coverage and validate its current state.

Why Our Approach Failed

On paper, taking new threat information and turning it into a detection sounds simple. In reality, it is a difficult, highly manual process that requires researching, coding, testing, tuning, and deploying detections across different platforms.

Measuring coverage is even more complicated. I became increasingly frustrated by how heavy this process was and how little clarity it produced. Even after successfully writing detections, there was no reliable way to know whether they actually protected us and would fire as expected in real conditions. I initially believed emulation and BAS were the answer. In practice, they weren’t. The operational friction was significant, deployment was limited to parts of the environment, and the results were tightly coupled to specific attack paths. Coverage remained fragmented, conditional, and far from attack-surface agnostic.
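A hypothetical illustration of how narrow that validation really was: the only question we could reliably answer was whether a specific rule fired within some window after a known test. The alert-feed shape and rule name below are assumptions, not a real SIEM or BAS API.

```python
# Hypothetical sketch: a point-in-time validation check. It answers one narrow
# question: after a known test activity, did the expected rule fire in time?
from datetime import datetime, timedelta, timezone


def rule_fired(alerts, rule_name, test_time, window_minutes=15):
    """Return True if `rule_name` appears in the alert feed shortly after the test."""
    deadline = test_time + timedelta(minutes=window_minutes)
    return any(
        a["rule"] == rule_name and test_time <= a["time"] <= deadline
        for a in alerts
    )


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    alerts = [{"rule": "susp_lsass_access", "time": now + timedelta(minutes=3)}]

    # Passes today; says nothing about tomorrow, other hosts, or other paths
    # to the same technique -- exactly the conditional coverage described above.
    print(rule_fired(alerts, "susp_lsass_access", now))
```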

Red Teams Can’t Provide Continuous Confidence

In my experience, red teams are incredibly valuable. Some of the most important detection gaps I’ve ever uncovered came from watching how they moved through our environment in ways no dashboard ever showed. Red teams exposed blind spots that tools alone would never have surfaced.

The frustration was never about their effectiveness. But the reality is that red team engagements are expensive, time-intensive, and disruptive by nature, so they only happen periodically. What they gave us was a moment of clarity, a snapshot in time. And almost immediately after, the environment changed. New systems were deployed, configurations shifted, detections drifted, and the confidence we gained slowly faded. Between exercises, we were left without a way to know whether those same gaps had re-opened or new ones had formed, and no continuous signal telling us where we were truly exposed.

The Hidden Complexity Behind Missed Detections

Threat detection is a compound discipline by nature. It sits at the intersection of many moving parts: understanding how attackers behave, understanding how an organization’s environment actually works, and interpreting signals that are incomplete, noisy, and constantly changing. None of these elements exists in isolation, and success depends on all of them aligning at the same time.

The challenge is that this same interdependence is where failures compound too. A small change in the environment, a subtle shift in data, or a quiet assumption breaking can cascade into missed detections without any obvious point of failure. Nothing looks “broken,” yet coverage erodes underneath the surface.

That’s why detection failures are often discovered only after an incident, not when they happen. The system didn’t stop working - it drifted. In compound disciplines like threat detection, complexity doesn’t just make things harder to build; it makes problems harder to see, harder to measure, and harder to correct before they turn into real risk.
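One way to picture drift is as a diff between the assumptions a detection was built on and the environment as it exists today. The sketch below is exactly that and nothing more; the snapshot keys are made-up examples of the kinds of assumptions that quietly break.

```python
# Hypothetical sketch: a coverage-drift check. Compare a baseline snapshot of
# assumptions a detection depends on (enabled rules, telemetry fields, log
# sources) against the current state and report what quietly changed.

def diff_assumptions(baseline: dict, current: dict) -> list[str]:
    """List assumptions that no longer hold -- the silent drift described above."""
    findings = []
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings


if __name__ == "__main__":
    baseline = {
        "rule:susp_lsass_access": "enabled",
        "field:process.command_line": "present",
        "log_source:edr_process_events": "ingesting",
    }
    current = {
        "rule:susp_lsass_access": "enabled",
        "field:process.command_line": "missing",  # schema changed upstream
        "log_source:edr_process_events": "ingesting",
    }
    for finding in diff_assumptions(baseline, current):
        print("DRIFT:", finding)
```

Nothing in that diff would ever raise an alert on its own, which is precisely why this kind of erosion goes unnoticed until an incident exposes it.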

A Modern Detection Model for a Dynamic Reality

After years in the trenches, I stopped blaming tools, teams, or execution. The deeper issue is the model itself. We are asking detection engineering to succeed in a world of constant change using approaches designed for static environments. We built processes to create detections, but not systems to continuously prove they still worked.

I don’t believe defenders are failing. They’ve been constrained by a mental model that assumed coverage was something you achieved once and moved on from. In reality, coverage is dynamic: it drifts as threats evolve, systems change, and data shifts.

What’s changed is the arrival of new AI capabilities like reasoning engines and LLMs that can understand threats, environments, and context at the same time. For the first time, we can harness AI not just to automate tasks, but to build an entirely new detection model - one that thinks, adapts, and evolves as conditions change. That’s why I’m so excited about Spectrum.
