February 3, 2026

The Gaps That Created the New Wave of SIEM and AI SOC Vendors

Update (2026-02): I released the SIEM Maturity Framework Workbook (v1.0) that turns this post into a practical scoring tool.

I have been talking to a few AI SOC and new SIEM market entrants over the past few weeks. I have voiced some opinions in previous posts but have now started to capture a list of features that I believe represent the openings existing SIEM players have created in the market for these new vendors to emerge.

Before I outline what I think those features are, let me be clear: this is my list. I am aware that existing SIEM vendors will claim that they already do many of these things. All I will say is this: market churn and capital flow suggest that these capabilities are either not as mature or not as integrated as claimed.

And to the AI SOC companies and investors: be careful about the short-term problems your investments are solving. Yes, there is real traction with MSSPs that are overloaded with false positives. And yes, many will gladly pay to reduce alert workload by 80%. But in many cases, these problems are being addressed superficially. Make sure you audit the underlying approaches and verify that the foundational infrastructure is sound. Solving this problem on top of an existing detection infrastructure doesn’t address the core issue, which is the detections themselves. We need to fix those with some of the suggestions below so that a top-layer alert reducer is no longer needed.

Without further ado, here are the items I am tracking. I welcome other opinions and additions to the list (no guarantee I will include them). Over the coming weeks, I will also try to rate some of the players across these categories to enable comparison. I could use help with that. Ping me.

A. DATA & CONTROL PLANE ARCHITECTURE

  • Federation – The ability to query and reason over data where it lives, without forced centralization.
    (Another post on the limitations of federation will follow at some point.)
  • Data Pipeline Optimization – Dynamic ingestion pipelines that enrich, route, sample, and filter data based on use case, risk, and downstream value. Not static “send everything to the lake.”
  • Data Awareness – Understanding what data exists, what is missing, and what has silently degraded. The system must continuously reason about its own observability.
  • Performance as a First-Class Constraint – Fast joins and low-latency queries across all relevant data. Real-time rule execution at scale. This is not about basic scalability, but about maintaining predictable performance as rule count and complexity increase, without simply throwing more compute at the problem.
  • Modern AI Integration – The ability to integrate with emerging architectural patterns and frameworks, including MCP servers, vector stores, and related systems.

B. DETECTION & LEARNING SYSTEMS

  • Hypothesis-Driven Hunting – Hunting should start with explicit hypotheses, not ad-hoc queries. These hypotheses should evolve, fork, and self-update based on outcomes. Agent swarms, anyone?
  • Automated Detection Tuning (Closed Loop) – Detections must evaluate their precision and recall over time. False positives and false negatives are signals. Humans stay in the loop, but are not the tuning engine. This also helps separate the detection engineering from the tuning that should be done by analysts.
  • Environment-Adaptive Detections – Rules and models must adapt automatically to the specific environment, business processes, user behavior, and analyst feedback. Generic detections are table stakes.
  • Detection Lineage and Memory – The system must remember why a detection exists, how it has changed, and what outcomes it has historically produced.
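The closed-loop tuning and lineage items above can be sketched together: each detection tracks analyst dispositions, computes its own precision, and flags itself for tuning when it degrades. The thresholds below are illustrative assumptions; a real system would also estimate recall (false negatives) and keep richer lineage than a disposition history.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionStats:
    """Per-detection feedback loop: dispositions in, tuning signals out.

    The `history` list doubles as a crude form of detection memory
    (what outcomes this detection has historically produced).
    """
    true_positives: int = 0
    false_positives: int = 0
    history: list = field(default_factory=list)

    def record(self, disposition: str) -> None:
        """Record an analyst disposition: 'tp' or 'fp'."""
        if disposition == "tp":
            self.true_positives += 1
        else:
            self.false_positives += 1
        self.history.append(disposition)

    @property
    def precision(self) -> float:
        total = self.true_positives + self.false_positives
        return self.true_positives / total if total else 0.0

    def needs_tuning(self, min_volume: int = 20, min_precision: float = 0.5) -> bool:
        """Flag the detection once enough volume exists and precision drops.

        Humans stay in the loop: this surfaces a tuning candidate rather
        than silently rewriting the rule.
        """
        return len(self.history) >= min_volume and self.precision < min_precision
```

This also illustrates the separation the post argues for: analysts produce dispositions; the system, not the analyst, turns them into tuning signals for detection engineering.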

C. ENTITY-CENTRIC RISK & CONTEXT

  • Asset Awareness – Effective protection and detection start with understanding what is being protected. Entity visibility is foundational: who owns this entity, what does it do, and which business processes does it support?
  • Real-Time Entity Risk Scoring – Each entity has a continuously updated risk score driven by behavior, exposure, and contextual signals.
  • Entity Risk Context – Risk is not a number. It is a set of properties that help explain the risk and provide context for decision making.
  • Business Context Integration – Entities must be tied to business processes, ownership, and criticality, and this context must inform alert generation and prioritization. Some people have started calling this the Context Graph.
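A small sketch of what "risk is not a number" can mean in practice: the scoring function returns a risk object carrying the contributing signals, ownership, and business context alongside the score. The signal names and weights are hypothetical; a mature system would learn them from outcomes rather than hard-code them.

```python
import time

# Illustrative signal weights; placeholder values, not recommendations.
SIGNAL_WEIGHTS = {
    "anomalous_login": 30,
    "exposed_service": 25,
    "critical_business_process": 20,
}

def score_entity(entity: dict, signals: list[str]) -> dict:
    """Return a risk *object*: a score plus the context that explains it."""
    contributions = {s: SIGNAL_WEIGHTS.get(s, 5) for s in signals}
    return {
        "entity": entity["id"],
        "owner": entity.get("owner", "unknown"),
        "business_process": entity.get("business_process", "unknown"),
        "score": min(100, sum(contributions.values())),
        "contributing_signals": contributions,  # the "why", not just the number
        "computed_at": time.time(),             # continuously recomputed in practice
    }
```

An alert consumer can then prioritize on `score` while explaining the decision from `contributing_signals` and the business context fields.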

D. OPERATIONAL REALITY (SOC, MSSP, ENFORCEMENT)

  • Simple Query Interface – Support for both natural language and structured query languages (such as KQL). Analysts need both.
  • Alert Triage Automation – Using rich context to triage alerts and tune the underlying detections. Ideally, business context is available so that detections improve continuously.
  • Blindspot Detection – The system must actively identify where detections cannot exist due to missing or degraded logs or logging configurations. This includes verifying that log sources stay up and continue reporting what they are supposed to.
  • Real-Time Readiness for Enforcement – We need our systems to become preventative. Therefore, their risk models must operate in near real time. Attackers are acting too fast.

A Few Additional Comments for Context

This is not meant to be a SIEM RFP. I am intentionally not listing table-stakes capabilities such as basic scalability, data source support, or baseline detection depth.

This list is less about features than about where intelligence and control actually live in the system. I am also not being prescriptive on how these features are built. Many of them can benefit from AI / LLM / ML approaches and, in fact, should be using them.

Look at the list, then look at your AI SOC platform of choice. How much of the above does it truly cover?

If you are evaluating an AI SOC platform and most of its value proposition lives above alerts rather than below them, you should be skeptical.


January 16, 2026

How AI Impacts the Cyber Market and The Future of SIEM

Security has always moved in waves. Not because we suddenly get smarter, but because we learn from past mistakes, identify gaps, hit limits, need to protect new technologies, and then go and do our best to solve those new security challenges with the technologies at hand.

The era of AI (let’s be clear, we have had AI for a long time; what I mean specifically is the advent of Large Language Models) has shifted many industries, but specifically security in a particularly revealing way. AI did not just give us new tools to solve security problems. It invited innovators and entrepreneurs to revisit pretty much every security technology to see if LLMs could be useful to address some of the existing challenges. But that’s not where things stopped. More interestingly, some teams used this moment to question whether the underlying approaches themselves still made sense at all. Not just whether LLMs could help, but whether modern data architectures, different telemetry choices, and different enforcement models could fundamentally change outcomes.

That is what has triggered a real wave of new companies in cyber, including across markets that many considered mature, or even stagnant, like SIEM.

The Five Phases We Just Lived Through

Let’s take a non-scientific look at how major security approaches evolved over the past 25 years. This is not exhaustive, but it helps explain where we are today.

1. Network-Centric Prevention

Back, many moons ago, we started with firewalls, IDS, and later IPS. The model was simple. Look at packets. Stop bad things. It worked until attackers learned to look normal.

2. More Data, Centralized, Higher-Level Insights

When network telemetry created too many false positives, we added vulnerability data and authentication events and fed them into a SIEM to correlate. The results were “mixed”. Fortunately for the SIEM market, compliance and audit requirements emerged, mandating long-term log retention. This gave SIEM a durable justification, even when its security value was debated. SIEM became indispensable for visibility and forensics, but increasingly disconnected from real-time decision making.

3. Back to Prevention and Response

As SIEM alert volumes exploded and analysts could not keep up, the industry pivoted. EDR. NDR. SOAR. We all know how that played out. NDR never truly broke out. EDR became a major category. SOAR largely collapsed back into SIEM. And eventually, most large EDR vendors added a SIEM to their portfolio.

This was not convergence by design. It was convergence driven by operational gravity.

4. AI Triggers a Reality Check

LLMs made many believe they could simply layer AI on top of broken architectures. Some startups did exactly that. They will likely not be the long-term winners.

The more interesting group of companies used AI as a forcing function to re-examine first principles. What data actually matters? What can realistically be prevented at the edge? What must still be correlated centrally? What is structurally broken in SOC workflows? Where have we been compensating for bad architecture with human labor? Crucially, many of these answers have little to do with LLMs themselves, and much more to do with data fidelity, placement of control, and modern system design.

This is where the real innovation is happening.

5. The Convergence

We are now in a phase where prevention is moving back to the edge, while analytics and orchestration remain central. Endpoints are smarter. Browsers are instrumented. Networks are being re-observed. Context is finally treated as a first-class input.

But there is still a SOC. There is still a central nervous system that correlates, reconstructs, explains, orchestrates, and proves what happened. Call it SIEM, security analytics, XDR, or AI SOC. The name is irrelevant. The function is not.

In parallel, we are realizing that we can push enforcement and prevention back to the edge. Wherever we have enough information, execute at the edge. Where we don’t, call out to your central nervous system. To your brain. The brain (your SIEM) that understands, at any moment in time, the risk and function of every entity in your network. And use that information for decision making.
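The edge-versus-brain decision flow described above can be sketched in a few lines. Everything here is hypothetical: `query_central_risk` stands in for a near-real-time lookup against the central risk model, and the confidence and risk thresholds are placeholders.

```python
def enforce_at_edge(event: dict, local_context: dict, query_central_risk) -> str:
    """Decide locally when local context suffices; otherwise defer to the brain.

    `query_central_risk` is a hypothetical callable into the central
    entity-risk model (the SIEM), returning a 0-100 risk score.
    """
    CONFIDENCE_THRESHOLD = 0.9  # illustrative cut-off for "enough information"
    if local_context.get("confidence", 0.0) >= CONFIDENCE_THRESHOLD:
        # Enough information at the edge: enforce immediately.
        return "block" if local_context.get("malicious") else "allow"
    # Not enough local context: ask the central nervous system.
    risk = query_central_risk(event["entity"])
    return "block" if risk >= 80 else "allow"
```

The design point is the fallback path: the edge is fast and autonomous where it can be, and the central risk model only gets consulted when local context runs out.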

Why AI SOC Will Collapse Back Into SIEM

Many startups brand themselves as “AI SOC”. What do they actually do?

They primarily ingest alerts from EDR, NDR, SIEMs, and cloud platforms, then attempt to determine which ones matter. They add context, apply behavioral analysis, and suppress false positives.

In other words, they attempt to do what SIEM, UEBA, and SOAR were always supposed to do, just with better math and more compute. However, there is one problem. Many of the AI SOC contenders operate on alert streams. That means they start from already lossy, opinionated data. Real behavioral analysis does not live on top of alert streams. It lives in raw telemetry. Email flows. Network sessions. Browser actions. Endpoint system behavior.

Once an AI SOC platform decides to ingest that raw data directly, it immediately recreates the ingestion, normalization, storage, and correlation problems that SIEM already exists to solve. At that point, the separation no longer makes sense. This is exactly why UEBA and SOAR collapsed back into SIEM. And it is why AI SOC will do the same.

There will be one place where data is reconciled, correlated, and turned into decisions. That place will increasingly run on federated, near-real-time architectures rather than twenty-year-old indexing engines. But the function remains the same. Call it whatever you want; it needs to be one system, not many.

The Shift Is Not Just Technical. It Is Organizational.

What is interesting about these new entrants in the SIEM and security analytics space is not just their security architecture. It is the company architecture. Modern security startups are being built on AI-native operating systems: sales calls are captured and analyzed not just by sales; product teams mine them for competitive signals, marketing uses them to refine messaging, and engineering uses them to prioritize roadmaps. This is not a tooling upgrade. It is a fundamentally different operating model.

Imagine a system where the vision, mission, strategy, and priorities are centrally maintained, updated and codified. Every function consumes that shared intelligence to drive decisions, messaging, and execution. This does not just improve alignment. It dramatically compresses learning cycles and execution speed. And that, more than any individual feature, may be the hardest thing for incumbents to replicate.