“Ok, but how do I evaluate vendors consistently without falling back into feature checklists and marketing claims?”
So I turned the framework into a practical scoring workbook you can use to rate a platform across the dimensions I described in the post. The workbook lets you rate each category from 1 to 5, and I spent some time defining what a 1 versus a 5 means in each category. Here is an example for the “Data Pipeline Optimization” category, with its 5 maturity steps:
1 | Static ingestion pipelines that forward all data to a central store.
2 | Basic filtering or routing based on source or log type.
3 | Conditional enrichment and routing based on use case or predefined alerts/rules.
4 | Dynamic pipelines that adapt sampling, enrichment, and routing based on downstream value.
5 | Continuously optimized pipelines driven by feedback loops from detections, cost, and analyst outcomes.
I hope the breakdown into these 5 levels helps you run a more ‘objective’ assessment of these platforms and also shows what excellent looks like in each category.
What this is
The Security Analytics Platforms – Maturity Framework workbook is an architecture-first tool to evaluate security platforms across architectural, detection, and operational dimensions. It is designed to help you compare systems on the advanced capabilities that are desperately needed to deliver a SIEM experience adequate for 2026.
What this is not
This is not a vendor ranking, a feature checklist, or a replacement for hands-on testing. It’s also NOT an RFP template. As I indicated in my previous blog where I outlined all the different categories, the table stakes are not mentioned or evaluated.
How to use it in 10 minutes
Add one vendor per row in the rating sheet.
Score each topic based on current behavior, not roadmap promises.
Review category roll-ups and the heatmap to spot structural gaps.
A key insight: large gaps between category scores often matter more than the overall score.
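The roll-up and gap logic can be sketched in a few lines. This is a hypothetical illustration of how the workbook’s scores might be aggregated: the category names, topic scores, and gap heuristic here are made up for the example, not the actual workbook contents.

```python
# Hypothetical sketch of the workbook's roll-up logic (scores are invented).
from statistics import mean

# One vendor's 1-5 scores per topic, grouped by category.
scores = {
    "Data & Control Plane": {"Federation": 2, "Pipeline Optimization": 4, "Data Awareness": 3},
    "Detection & Learning": {"Hypothesis Hunting": 1, "Closed-Loop Tuning": 2},
    "Entity Risk & Context": {"Asset Awareness": 4, "Risk Scoring": 4},
}

# Category roll-ups, overall score, and the gap between strongest and
# weakest category -- the gap often matters more than the overall score.
rollups = {cat: mean(topics.values()) for cat, topics in scores.items()}
overall = mean(rollups.values())
gap = max(rollups.values()) - min(rollups.values())

for cat, score in rollups.items():
    print(f"{cat}: {score:.1f}")
print(f"Overall: {overall:.1f}, category gap: {gap:.1f}")
```

With these made-up numbers, a vendor averaging a respectable overall score still shows a 2.5-point gap between its strongest and weakest categories, which is exactly the structural signal the heatmap is meant to surface.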
Security analytics is in the middle of a reset. Incumbent SIEMs are being re-architected, new SIEM startups are emerging, and AI SOC vendors are rewriting parts of the operating model. End users and investors need a way to evaluate these platforms objectively, beyond feature checklists and marketing claims. This workbook is my attempt to make that evaluation repeatable, comparable, and anchored in the areas that I see missing or deficient in the incumbent SIEM space.
If you use it, I’d love your feedback
If you score a platform with it, send me the heatmap. I’ll incorporate feedback into future versions and I’d love to crowdsource an assessment of all the SIEM and AI SOC vendors out there. Can we do it?
I have been talking to a few AI SOC and new SIEM market entrants over the past few weeks. I have voiced some opinions in previous posts but have now started to capture a list of features that I believe represent the openings existing SIEM players have created in the market for these new vendors to emerge.
Before I outline what I think those features are, let me be clear: this is my list. I am aware that existing SIEM vendors will claim that they already do many of these things. All I will say is this: market churn and capital flow suggest that these capabilities are either not as mature or not as integrated as claimed.
And to the AI SOC companies and investors: be careful about the short-term problems your investments are solving. Yes, there is real traction with MSSPs that are overloaded with false positives. And yes, many will gladly pay to reduce alert workload by 80%. But in many cases, these problems are being addressed superficially. Make sure you audit the underlying approaches and verify that the foundational infrastructure is sound. Solving this problem on top of an existing detection infrastructure doesn’t address the core problem, which is the detections themselves. We need to fix those, with some of the suggestions below, so that we don’t need a top-layer alert reducer.
Without further ado, here are the items I am tracking. I welcome other opinions and additions to the list (no guarantee I will include them). Over the coming weeks, I will also try to rate some of the players across these categories to enable comparison. I could use help with that. Ping me.
A. DATA & CONTROL PLANE ARCHITECTURE
Federation – The ability to query and reason over data where it lives, without forced centralization. (Another post following here at some point about the limitations of federation).
Data Pipeline Optimization – Dynamic ingestion pipelines that enrich, route, sample, and filter data based on use case, risk, and downstream value. Not static “send everything to the lake.”
Data Awareness – Understanding what data exists, what is missing, and what has silently degraded. The system must continuously reason about its own observability.
Performance as a First-Class Constraint – Fast joins and low-latency queries across all relevant data. Real-time rule execution at scale. This is not about basic scalability, but about maintaining predictable performance as rule count and complexity increase, without simply throwing more compute at the problem.
Modern AI Integration – The ability to integrate with emerging architectural patterns and frameworks, including MCP servers, vector stores, and related systems.
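To make the pipeline-optimization item concrete, here is a minimal sketch of value-based routing. The routing rules, field names, and thresholds are all illustrative assumptions; a real pipeline would derive them from detections, cost, and analyst feedback rather than hardcode them.

```python
# Hypothetical sketch of value-based pipeline routing (names and
# thresholds are invented for illustration).
def route_event(event: dict) -> str:
    """Decide where a log event goes based on downstream value, not source alone."""
    if event.get("matched_detections", 0) > 0:
        return "hot_store"      # feeds active detections: keep fast and full
    if event.get("entity_risk", 0.0) >= 0.7:
        return "hot_store"      # high-risk entity: retain for hunting
    if event.get("log_type") == "debug":
        return "drop"           # no downstream consumer: filter out
    return "cold_archive"       # everything else: cheap storage

events = [
    {"log_type": "auth", "matched_detections": 2, "entity_risk": 0.3},
    {"log_type": "debug", "matched_detections": 0, "entity_risk": 0.1},
    {"log_type": "dns", "matched_detections": 0, "entity_risk": 0.9},
]
print([route_event(e) for e in events])
# -> ['hot_store', 'drop', 'hot_store']
```

The point of the sketch is the shape of the decision, not the rules themselves: the pipeline reasons about downstream value per event instead of forwarding everything to a central store.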
B. DETECTION & LEARNING SYSTEMS
Hypothesis-Driven Hunting – Hunting should start with explicit hypotheses, not ad-hoc queries. These hypotheses should evolve, fork, and self-update based on outcomes. Agent swarms, anyone?
Automated Detection Tuning (Closed Loop) – Detections must evaluate their precision and recall over time. False positives and false negatives are signals. Humans stay in the loop, but are not the tuning engine. This also helps separate the detection engineering from the tuning that should be done by analysts.
Environment-Adaptive Detections – Rules and models must adapt automatically to the specific environment, business processes, user behavior, and analyst feedback. Generic detections are table stakes.
Detection Lineage and Memory – The system must remember why a detection exists, how it has changed, and what outcomes it has historically produced.
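A closed tuning loop with lineage can be sketched simply. The class below is a hypothetical illustration: the verdict source, the precision floor, and the idea of attaching a change history to each detection are my assumptions about one possible implementation, not any vendor’s actual design.

```python
# Hypothetical closed-loop tuning sketch: track each detection's precision
# from analyst verdicts and flag candidates for retuning (threshold invented).
from dataclasses import dataclass, field

@dataclass
class Detection:
    name: str
    true_positives: int = 0
    false_positives: int = 0
    history: list = field(default_factory=list)  # lineage: why/when it changed

    def record_verdict(self, is_true_positive: bool) -> None:
        # Analyst verdicts are signals feeding the loop, not manual tuning.
        if is_true_positive:
            self.true_positives += 1
        else:
            self.false_positives += 1

    @property
    def precision(self) -> float:
        total = self.true_positives + self.false_positives
        return self.true_positives / total if total else 1.0

    def needs_tuning(self, floor: float = 0.5) -> bool:
        # Humans stay in the loop to review, but are not the tuning engine.
        return self.precision < floor

d = Detection("suspicious_login")
for verdict in [True, False, False, False]:
    d.record_verdict(verdict)
print(d.precision, d.needs_tuning())  # 0.25 True
```

Keeping the verdict counts and the change history on the detection itself is what separates detection engineering (building the rule) from tuning (reacting to its measured outcomes).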
C. ENTITY-CENTRIC RISK & CONTEXT
Asset Awareness – Effective protection and detection start with understanding what is being protected. Entity visibility is foundational: who owns this entity, what does it do, and which business processes does it support?
Real-Time Entity Risk Scoring – Each entity has a continuously updated risk score driven by behavior, exposure, and contextual signals.
Entity Risk Context – Risk is not a number. It is a set of properties that help explain the risk and provide context for decision making.
Business Context Integration – Entities must be tied to business processes, ownership, and criticality, and this context must inform alert generation and prioritization. Some people have started calling this the Context Graph.
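The idea that “risk is not a number” but a set of explanatory properties can be shown with a small sketch. The signal names, weights, and entity are invented for illustration; the point is that the score is derived from context you can inspect, not stored as an opaque value.

```python
# Hypothetical sketch: entity risk as explainable properties plus a
# derived summary score (weights and signals are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class EntityRisk:
    entity: str
    owner: str                 # business context: who owns this entity
    business_process: str      # what it does / supports
    exposure: float            # 0..1, e.g. internet-facing
    behavior_anomaly: float    # 0..1, from behavioral baselines
    criticality: float         # 0..1, from business context

    @property
    def score(self) -> float:
        # The number is just a summary; the properties explain the risk.
        return round(0.4 * self.exposure + 0.3 * self.behavior_anomaly
                     + 0.3 * self.criticality, 2)

payroll_db = EntityRisk("payroll-db-01", owner="finance-it",
                        business_process="payroll", exposure=0.2,
                        behavior_anomaly=0.9, criticality=1.0)
print(payroll_db.score)  # 0.65
```

Because the properties travel with the score, an analyst (or an alert-prioritization step) can see that this entity is risky mostly due to anomalous behavior on a business-critical system, not exposure.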
D. OPERATIONAL REALITY (SOC, MSSP, ENFORCEMENT)
Simple Query Interface – Support for both natural language and structured query languages (such as KQL). Analysts need both.
Alert Triage Automation – Using ‘advanced’ context to tune detections. Ideally we have business context available to continuously improve our detections.
Blindspot Detection – The system must actively identify where detections cannot exist due to missing or degraded logs or logging configurations. This includes verifying that log sources stay up and keep reporting what they are supposed to.
Real-Time Readiness for Enforcement – We need our systems to become preventative, which means their risk models must operate in near real time. Attackers are acting too fast.
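The blindspot-detection item above can be sketched as a simple health check over log sources. The source names, silence window, and volume-ratio threshold are all assumptions for illustration; the idea is that the system flags both sources that went fully silent and sources that silently degraded.

```python
# Hypothetical blindspot check: flag log sources that have gone quiet or
# whose volume dropped well below baseline (names and thresholds invented).
from datetime import datetime, timedelta, timezone

def find_blindspots(sources, now, max_silence=timedelta(minutes=15),
                    min_volume_ratio=0.5):
    """Return (source, reason) pairs for sources that stopped or degraded."""
    blind = []
    for s in sources:
        if now - s["last_seen"] > max_silence:
            blind.append((s["name"], "silent"))       # stopped reporting
        elif s["events_last_hour"] < min_volume_ratio * s["baseline_per_hour"]:
            blind.append((s["name"], "degraded"))     # silently degraded
    return blind

now = datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
sources = [
    {"name": "edr", "last_seen": now - timedelta(minutes=2),
     "events_last_hour": 900, "baseline_per_hour": 1000},
    {"name": "vpn", "last_seen": now - timedelta(hours=3),
     "events_last_hour": 0, "baseline_per_hour": 400},
    {"name": "dns", "last_seen": now - timedelta(minutes=1),
     "events_last_hour": 100, "baseline_per_hour": 1000},
]
print(find_blindspots(sources, now))
# -> [('vpn', 'silent'), ('dns', 'degraded')]
```

A real implementation would learn per-source baselines and tie each gap back to the detections that depend on that source, so the system can say which detections are currently blind, not just which feeds are unhealthy.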
A Few Additional Comments for Context
This is not meant to be a SIEM RFP. I am intentionally not listing table-stakes capabilities such as basic scalability, data source support, or baseline detection depth.
This list is less about features than about where intelligence and control actually live in the system. I am also not being prescriptive on how these features are built. Many of them can benefit from AI / LLM / ML approaches and, in fact, should be using them.
Look at the list, then look at your AI SOC platform of choice. How much of the above does it truly cover?
If you are evaluating an AI SOC platform and most of its value proposition lives above alerts rather than below them, you should be skeptical.