
Update: Instead of an Excel spreadsheet, here is an online app that you can use. I’d love for you to submit your own ratings so we can crowd-source some of these answers!
Over the last few weeks I published a post on the architectural and operational gaps that created the new wave of SIEM and AI SOC vendors. A bunch of people asked the same follow-up question:
“Ok, but how do I evaluate vendors consistently without falling back into feature checklists and marketing claims?”
So I turned the framework into a practical scoring workbook (and now a small web application) you can use to rate a platform across the dimensions I described in the post. The workbook lets you rate each category from 1 to 5, and I spent some time defining what a 1 versus a 5 means in each category. As an example, here are the five maturity steps for the “Data Pipeline Optimization” category:
- 1 | Static ingestion pipelines that forward all data to a central store.
- 2 | Basic filtering or routing based on source or log type.
- 3 | Conditional enrichment and routing based on use case or predefined alerts/rules.
- 4 | Dynamic pipelines that adapt sampling, enrichment, and routing based on downstream value.
- 5 | Continuously optimized pipelines driven by feedback loops from detections, cost, and analyst outcomes.
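To make the rubric concrete, here is a minimal sketch of how a category and its five maturity levels could be represented in code. The level descriptions are taken verbatim from the list above; the data structure and the `describe` helper are hypothetical illustrations, not the workbook's actual format.

```python
# One rubric category mapped to its 1-5 maturity descriptions.
# Structure is illustrative only; descriptions come from the post.
rubric = {
    "Data Pipeline Optimization": {
        1: "Static ingestion pipelines that forward all data to a central store.",
        2: "Basic filtering or routing based on source or log type.",
        3: "Conditional enrichment and routing based on use case or predefined alerts/rules.",
        4: "Dynamic pipelines that adapt sampling, enrichment, and routing based on downstream value.",
        5: "Continuously optimized pipelines driven by feedback loops from detections, cost, and analyst outcomes.",
    }
}

def describe(category: str, score: int) -> str:
    """Return the maturity description for a 1-5 score in a category."""
    if score not in range(1, 6):
        raise ValueError("scores run from 1 to 5")
    return rubric[category][score]
```

Defining the anchors this way forces every score to map to an observable behavior rather than a gut feeling.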
I hope breaking each category into these five levels supports a more objective assessment of these platforms, and also shows what excellent looks like in each category.
What this is
The Security Analytics Platforms – Maturity Framework is an architecture-first tool for evaluating security platforms across architectural, detection, and operational dimensions. It is designed to help you compare systems on the advanced capabilities needed to deliver a SIEM experience that is adequate for 2026.
What this is not
This is not a vendor ranking, a feature checklist, or a replacement for hands-on testing. It’s also NOT an RFP template. As I noted in my previous post, where I outlined all the categories, table-stakes features are not mentioned or evaluated.
How to use it in 10 minutes
- Add one vendor per row in the rating sheet.
- Score each topic based on current behavior, not roadmap promises.
- Review category roll-ups and the heatmap to spot structural gaps.
A key insight: large gaps between category scores often matter more than the overall score.
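The gap insight above can be sketched as a quick calculation. The vendor scores and category names other than “Data Pipeline Optimization” below are invented for illustration; the point is that two vendors with the same average can look very different once you measure the spread between categories.

```python
# Hypothetical per-category ratings (1-5) for one vendor.
scores = {
    "Data Pipeline Optimization": 4,
    "Detection Content": 2,
    "Automation": 5,
}

def overall(scores: dict) -> float:
    """Simple roll-up: the mean of all category scores."""
    return sum(scores.values()) / len(scores)

def largest_gap(scores: dict) -> int:
    """Spread between the strongest and weakest category."""
    return max(scores.values()) - min(scores.values())
```

A vendor averaging 3.7 with a gap of 3 has a structural weakness that a flat 3.7-everywhere vendor does not, which is why the heatmap view matters.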
Use the Web App
Click on the image to launch the app…
Download
Why I’m releasing this
Security analytics is in the middle of a reset. Incumbent SIEMs are being re-architected, new SIEM startups are emerging, and AI SOC vendors are rewriting parts of the operating model. End users and investors need a way to evaluate these platforms objectively, beyond feature checklists and marketing claims. This workbook is my attempt to make that evaluation repeatable, comparable, and anchored in the areas that I see missing or deficient in the incumbent SIEM space.
If you use it, I’d love your feedback
If you score a platform with it, use the web app and submit your rating. You need to log in via GitHub or Google so I don’t get flooded with fake entries. I’d love to crowdsource an assessment of all the SIEM and AI SOC vendors out there. Can we do it?
