
Tackling the “Unsolvable” Detection Problem

Visibility is not coverage. After decades of manual engineering and “heroic” efforts, the pieces are finally in place to solve the detection gap. Introducing Spectrum.

January 12, 2026
12 min read
John DiFederico

I started my career in security the way many of us did: confident that the hard part had already been solved. We had just invested in an expensive SIEM platform, and I genuinely believed it would make finding bad actors almost trivial. In my mind, a million-dollar SIEM meant clarity. Signals in, threats out. Problem solved.

Then I met reality.

The first time I worked with real SIEM outputs, the illusion collapsed. Noise wasn’t an edge case - it was the default. Alerts were ambiguous, detections were unstable, and logic broke constantly as data shifted underneath it. Instead of answers, we had a lot of maybes. Instead of confidence, we had assumptions.

That gap bothered me enough to change my path. I moved out of pure analyst work and into detection engineering, not because the role was clearly defined, but because it was so clearly needed. Someone had to figure out what we could actually detect, what we were missing, and how to close those gaps. Someone had to wrestle with telemetry, logic, assumptions, and failure modes to define our detection strategy, and then turn that strategy into detections that caught real threats before they became incidents.

By 2010 and 2011, I was speaking at SIEM user conferences about the detection engineering lifecycle and how to measure detection coverage. Even then, the message was simple: visibility is not coverage, and if you can’t measure what you detect, you don’t really know how exposed you are.

What frustrates me is that fifteen years later, the industry is still largely trudging through the same mud. We’ve added better tools, more data, and smarter people, but we still don’t have a good way to know what we can detect, understand what we’re missing, or systematically close those gaps. Detection still depends on the heroic efforts of committed individuals, yet coverage requires far more than a handful of detections. At the end of the day, it still takes an army of people to make a dent.

And that’s the real problem.

More Tools, Same Unanswered Questions

As the market matured, a familiar pattern emerged: a steady wave of “this will solve it” approaches layered on top of existing detection practices. SIEM vendors shipped ever-larger default content packs. Public and private rule repositories promised faster coverage. Threat intelligence platforms delivered feeds, enrichment, and context through standards like STIX/TAXII. MITRE ATT&CK heat maps offered visual reassurance. Alert-centric metrics like MTTD and MTTR filled executive dashboards. Pentests, red teams, and breach-and-attack simulation tools added testing signals.

All of this had value. These tools generated useful insights, inputs, and tests. They accelerated individual tasks and improved pieces of the workflow. But none of them closed the core loop.

Even with all of this in place, teams still struggle to answer the questions that actually matter: Are we detecting the threats that matter? Are we detecting them the right way in our environment?

In practice, these fragmented tools and disconnected data left us trying to manufacture confidence from signals that couldn’t actually prove outcomes.

The result is uncomfortable but clear. We added more tools. We added more event data. But we didn’t get better at proving detection outcomes. Confidence still depended on assumptions.

A Meaningful Step Forward

Joining Exabeam marked the start of a meaningful chapter in my journey, one I’m genuinely proud of. The industry was witnessing the rise of UEBA and behavior-based detection, and Exabeam was rebuilding SIEM technology to be ML-first. That mattered to me. An ML-first approach hinted at something we had been missing for years: detections that could tune themselves, adapt to change, and reduce the constant babysitting, testing, and fixing that defined traditional detection work. It felt like a real step toward the long-term mission - detection that works automatically, at scale.

And it was real progress. UEBA unlocked detection methods that simply weren’t possible with static rules alone. It enabled us to catch attackers in ways that were genuinely innovative, and the impact was tangible. I’m proud of what we built and the value it delivered.

But was the mission accomplished? Not quite.

The core problem never went away. Even with UEBA, we still struggled to answer the fundamental questions: Are we detecting the threats that matter? Are we detecting them the right way for this environment? 

UEBA added powerful capabilities, but it also required another detection engine, another repository, another layer in an already fragmented technology stack. It didn’t give the industry a clean, reliable way to measure what was actually covered, identify gaps with clarity, or systematically close them.

That realization is where my obsession deepened. My goal was never to add another layer; it was to solve the foundational problem. And I’m still hungry to finish that work.

The Inflection Point We Were Waiting For

For most of my journey, I believed that the hardest parts of detection were fundamentally human. Reasoning through ambiguous signals, understanding context, adapting logic to a specific environment - these were not “scriptable”; they required human judgment. That’s why detection maturity relied on expensive specialists, large teams, or outsourced services. You couldn’t automate judgment, so you staffed it.

This constraint has finally changed.

We’re entering an inflection point where systems can now apply reasoning programmatically. Work that once required expert intuition can now become repeatable, scalable, and continuous. The human reasoning that was once “unscriptable” is becoming systematizable, not by hard-coding more logic, but by enabling systems to reason, adapt, and validate outcomes as environments change.

Finally, Detection Is About To Change

I’ve spent my career chasing this problem, watching the industry circle it without truly breaking through. Every experience led me to the same conclusion: the problem wasn’t a lack of effort or intent; it was the model itself.

Now the missing layer is finally possible, and it can solve the problem I’ve been chasing since my earliest SOC days.

To be clear:

  • It isn’t about investigating alerts.
  • It isn’t another rule repository.
  • It isn’t a thin wrapper that converts English into rule syntax.

After years of living with this problem, seeing its limits from every angle, it finally feels like the pieces are in place to do something fundamentally different. We’re building Spectrum to tackle the part of detection that never quite worked, and the momentum is real. I’m genuinely excited about what’s coming, and can’t wait to share more very soon.
