And why most of the arguments do not hold up under scrutiny
Over the past 18 to 24 months, venture capital has flowed into a fresh wave of SIEM challengers including Vega (which raised $65M in seed and Series A at a ~$400M valuation), Perpetual Systems, RunReveal, Iceguard, Sekoia, Cybersift, Ziggiz, and Abstract Security, all pitching themselves as the next generation of security analytics. What unites them is not just funding but a shared narrative that incumbent SIEMs are fundamentally broken: too costly, too siloed, too hard to scale, and too ineffective in the face of modern data volumes and AI-driven threats.
This post does not belabor each startup’s product. Instead it abstracts the shared assertions that justify recent funding and then stresses them to see which hold up under scrutiny. I am not defending incumbents. I am trying to separate real gaps from marketing (and funding) narratives.
The “SIEM is Broken” Narrative
A commonly cited industry report claimed that major SIEM tools cover only about 19% of MITRE ATT&CK techniques despite ingesting data that could cover ~87%. That statistic is technically interesting but also deeply misleading: ATT&CK technique coverage is not an operational measure of detection quality or effectiveness; it primarily reflects rule inventory and tuning effort. Nevertheless, it has become a core justification for the “SIEM is obsolete” narrative. I was not able to find the original report to validate what was tested and how, but I have seen SIEMs that cover far more, backed by large detection teams that take care of exactly these issues.
The Five Core Claims Driving the Market Thesis
Across decks, interviews, and marketing copy, I identified five recurring themes that define what these companies think incumbents get wrong and what investors are underwriting as the path forward.
1. “Centralized SIEM architectures no longer scale”
The claim is that forcing security telemetry into a centralized repository is too expensive and too slow for modern enterprises generating terabytes of logs every day. The proposed fixes include federated queries, analyzing data where it lives, and decoupling detection from ingestion so you never have to move or duplicate all your data.
The challenge is that correlation, state, timelines, and real-time detection require locality. Distributed query engines excel at ad-hoc exploration, but they are not substitutes for continuous detection pipelines. Federated queries introduce latency, inconsistent performance, and complexity every time you write a detection, and normalization deferred to query time pushes complexity into every rule. You do not eliminate cost; you shift it to unpredictable query execution and compute charges that spike precisely when incidents occur. Centralizing data is not a flaw; it is a tradeoff that enables correlation engines, summary indexes, entity timelines, and stateful detections that distributed query models struggle to maintain in real time. In fact, if the SIEM were to store the data in the customer’s own S3 bucket, costs could be kept somewhat under control.
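To make the locality point concrete, here is a minimal sketch of the kind of stateful detection that needs ordered, local state per entity rather than ad-hoc federated queries: a burst of failed logins followed by a success. The event shape, window, and threshold are illustrative assumptions, not any vendor's engine:

```python
from collections import defaultdict, deque

WINDOW = 600     # seconds of history to keep per user (assumed)
THRESHOLD = 5    # failed attempts before a success becomes suspicious (assumed)

# Hypothetical event shape: {"ts": float, "user": str, "outcome": "fail" | "success"}
failed = defaultdict(deque)  # user -> timestamps of recent failed logins

def process(event):
    """Stateful brute-force-then-success detection; depends on ordered, local state."""
    q = failed[event["user"]]
    now = event["ts"]
    # Expire failures that fell out of the sliding window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    if event["outcome"] == "fail":
        q.append(now)
        return None
    # A success arriving after a burst of failures is the signal.
    if len(q) >= THRESHOLD:
        return f"ALERT: possible credential stuffing for {event['user']}"
    return None
```

Running this continuously over a stream is cheap when the state lives next to the data; reproducing it with repeated federated scans means re-reading the whole window on every evaluation.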
2. “SIEM pricing is broken because it charges by data volume”
A frequent refrain is that incumbent SIEMs penalize good security hygiene by tying pricing to ingestion volume, which becomes untenable as data grows. The proposed response is pricing models untethered from volume, open storage, and customer-controlled compute.
The challenge is that cost does not vanish because you hide volume. Compute, memory, enrichment, retention, and query costs all remain. If pricing is detached from ingestion, it typically reappears as unpredictable query charges, usage tiers, or gated features. Volume is not an arbitrary metric; it correlates with the cost a vendor (or customer) actually incurs. Treating cost as orthogonal to data volume does not make it disappear; it just blinds you to a key cost driver. I have dealt with all the pricing models: by user, by device, by volume, … In the end I needed to make my gross margins work. Guess who pays for that?
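A back-of-the-envelope model shows how the cost reappears on the query side. All rates below are made-up illustrative numbers, not any vendor's pricing:

```python
# Illustrative only: hypothetical rates, not any vendor's actual price list.
daily_gb = 500
ingest_rate = 0.50                         # $/GB ingested
ingest_cost = daily_gb * ingest_rate       # predictable: $250/day

# "Volume-free" model: pay for compute scanned at query time instead.
scan_rate = 0.02                           # $/GB scanned
queries_per_day = 40                       # detections + hunts re-reading the data
avg_scan_fraction = 0.30                   # portion of the day's data each query touches
query_cost = queries_per_day * daily_gb * avg_scan_fraction * scan_rate  # $120/day

# During an incident, query volume spikes exactly when you can least budget for it.
incident_multiplier = 10
incident_day_cost = query_cost * incident_multiplier
```

The steady-state query bill can look attractive, but an incident day blows past the predictable ingest model precisely when finance is watching.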
3. “SIEM detections are weak because they rely on bad rules”
New entrants commonly assert that traditional SIEM rules are noisy, static, and unable to keep up with modern threat techniques. Solutions offered include natural-language detections, detections-as-code, continuous evaluation, and AI-generated rules.
The challenge is that many of these still sit atop the same primitives. SIGMA, for example, is widely used as a community detection language, but it is fundamentally limited: it is mostly single-event, cannot express event ordering or causality, has no native temporal abstractions or entity-centric modeling, and cannot natively express thresholds, rates, cardinality, or statistical baselines. Wrapping these limitations in AI or “natural language” does not change the underlying detection physics. You can improve the workflow and authoring experience, but you do not invent a fundamentally new class of detection from the same primitives. And guess what: large vendors have pretty significant content teams – I mean detection engineering teams – often tied into their threat research labs. Don’t tell me that a startup has found a more cost-effective and higher-efficacy way to release detection rules. If that were the case, all these large vendors would be dumb to operate such large teams.
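To see the gap in the primitives, here is a small sketch of a cardinality-based detection (one host contacting an unusually large number of distinct destinations, as in scanning) that a single-event, SIGMA-style match cannot express. The limit and event shape are assumptions for illustration:

```python
from collections import defaultdict

# A cardinality detection: no single event is suspicious; only the distinct-count is.
CARDINALITY_LIMIT = 100                # assumed threshold for "unusual"

dests_seen = defaultdict(set)          # src_host -> distinct destinations in window

def on_flow(src, dst):
    """Fire exactly once when a host crosses the distinct-destination threshold."""
    before = len(dests_seen[src])
    dests_seen[src].add(dst)
    if before < CARDINALITY_LIMIT <= len(dests_seen[src]):
        return f"ALERT: {src} contacted {CARDINALITY_LIMIT}+ distinct hosts"
    return None
```

No per-event field match can encode this; it requires accumulating state across events, which is exactly the class of logic the single-event rule languages leave out.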
4. “SIEMs lack context, causing false positives”
The argument here is that existing SIEMs flood analysts with alert noise because they lack deep asset context, threat intelligence, or behavioral understanding. New entrants promise tightly integrated TI feeds, cloud context, or built-in behavior analytics.
Context integration has been a focus of incumbent platforms for years. The real hard problem is not accessing context but operationalizing it without drowning analysts. More feeds often mean more noise unless you have mature enrichment pipelines, entity resolution, and risk scoring built into rules that understand multi-stage attack sequences. Adding more sources does not automatically improve signal quality; the noise problem is as much about rule quality and use-case focus as it is about context availability. The same argument I made in the previous item about detection content applies here to the quality of threat feeds.
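As a sketch of what "operationalizing context" means in practice: context only reduces noise if it feeds a scoring decision, not if it is merely attached as extra fields. The weights, thresholds, and inventory below are illustrative assumptions, not any product's model:

```python
# Hypothetical asset inventory and threat-intel match set (assumed data).
ASSET_CRITICALITY = {"dc01": 1.0, "dev-laptop": 0.3}   # 0..1, higher = more critical
TI_HITS = {"203.0.113.7"}                              # IPs matched by a TI feed

def score_alert(host, remote_ip, base_severity):
    """Fold context into a single risk score instead of just displaying it."""
    score = base_severity
    score *= 1 + ASSET_CRITICALITY.get(host, 0.5)      # context raises or lowers risk
    if remote_ip in TI_HITS:
        score *= 1.5                                   # TI corroboration boosts it
    return score

def should_page(host, remote_ip, base_severity, threshold=1.2):
    """Only page an analyst when the context-weighted score clears the bar."""
    return score_alert(host, remote_ip, base_severity) >= threshold
```

The same base alert pages for a domain controller with a TI hit but stays quiet for a dev laptop talking to an unremarkable address; that suppression is where context earns its keep.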
5. “AI-native SIEMs will finally fix detection and response”
Perhaps the most seductive claim is that incumbent SIEMs were built for a pre-AI world and that new platforms built with agentic AI at every layer will finally crack automation, detection, and investigation.
The challenge is that AI does not eliminate the need for structured, high-quality, normalized data, or explainability, or deterministic behavior in high-risk contexts. AI can accelerate workflows, assist with investigation, and suggest hypotheses, but it does not replace the need for precise, reproducible, and auditable detection logic. Most AI-native claims today are improvements in UX and speed, not architectural breakthroughs in detection theory.
The Uncomfortable Conclusion
VC money is flowing because SIEM is operationally hard, expensive, and often unpopular with SOC teams. There is real pain and real gaps, especially around cost transparency, scaling, and usability. But declaring existing SIEMs obsolete because they are imperfect is not a thesis; it is a marketing slogan.
The core assumptions driving this funding wave deserve scrutiny: centralization is treated as a flaw rather than a tradeoff necessary for continuous detection, pricing complaints get conflated with architectural insights, detection quality is blamed on tooling rather than operational realities, and AI is overstated as a panacea.
On the flip side, here are a couple of directions that should be looked at:
Some of the new entrant SIEMs actually make a dent. They are rebuilding their entire pipelines and storage architecture with modern technologies, not old paradigms. They have a clear advantage and don’t have to deal with millions of lines of tech debt. Using an agentic AI architecture could be quite interesting here.
As the AI SOC emerges – and maybe becomes a reality – we will probably see more and more MCP servers exposing infrastructure information that can be leveraged, from alerts to context to response capabilities. But we will need to see how data schemas and interoperability evolve.
The one innovation that has already generated real returns for investors is the data pipeline world. Companies like Observo (I had the privilege of being an advisor) have truly added something useful to SIEMs, and as I argue in one of my previous blogs, data pipelining needs to become a capability baked into every SIEM out there.
Everyone is suddenly looking at MSP and MSSP rollups. Investors, strategics, even VCs. The logic is obvious. Fragmented market, recurring revenue, sticky customer relationships. But the reality is that only a small subset of providers actually operate at a level worth scaling. The difference between an average MSSP and a good one comes down to a few fundamentals.
Start With Focus
Most MSPs never defined who they serve. They grew organically, took whatever customer showed up, and built a toolkit around individual fires rather than a repeatable model. A strong MSSP starts with clarity: Who is the ICP? What problem is being solved? What does the operating model look like for that segment? When this is missing, everything becomes random. Different tools. Different service quality. No leverage.
In practice, the most important segmentation is not the MSP itself, but who the MSP sells to. An MSP serving restaurants or spas has a fundamentally different security maturity, willingness to pay, and regulatory exposure than one serving regional banks, healthcare, or regulated SMBs. Treating them as one market leads to mispriced risk and churn.
Understand the Economics
Many MSPs think software licensing is their main cost. It is not. Labor dominates the model. At ConnectWise, our Service Leadership dataset showed that roughly 20 percent of MSPs were not profitable because they simply did not understand their own cost structure. The best ones hit around 20 to 25 percent EBITDA. They standardize. They price correctly. They run the business with discipline instead of firefighting.
The real margin killer is not the license costs. It is the technician minutes required to install, manage, respond, document, and bill every tool. Every additional product increases operational drag, even if the license is cheap.
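A quick illustration of the technician-minutes point, with made-up numbers: even a cheap license is dwarfed by the labor wrapped around it.

```python
# All figures are illustrative assumptions, not ConnectWise benchmark data.
license_per_user_month = 3.00      # $ license cost for one more tool
tech_rate_per_hour = 75.0          # fully loaded technician cost
minutes_per_user_month = 6         # install, manage, respond, document, bill

labor_per_user_month = tech_rate_per_hour * minutes_per_user_month / 60
total_per_user_month = license_per_user_month + labor_per_user_month
```

Six technician minutes a month already costs more than double the license; that operational drag, multiplied across every product in the stack, is what kills margins.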
Standardized Security Bundles Win
The MSSPs that scale do not let customers choose their own adventure. They define a required stack. If you want to be a customer, you adopt their bundle. This gives consistency, predictability, and actual security outcomes. A typical bundle includes:
• Patch and vulnerability management
• Endpoint protection
• Email security
• Security awareness
• Optional SIEM or MDR depending on the segment
Without standardization, you cannot maintain margins or guarantee service quality. You also make incident response dramatically harder because every environment looks different.
In reality, the bundle is usually sold at a fixed price like $50 to $100 per user per month. Any new security tool must fit inside that number. If it costs $2 to $3 per user, something else must be removed or margin gets cut. This is why getting into the bundle is harder than most vendors expect.
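The bundle math can be sketched directly; the figures are illustrative examples within the ranges mentioned above.

```python
# Illustrative bundle economics, not any real provider's numbers.
bundle_price = 75.0      # $/user/month, inside the $50-$100 range
current_cost = 45.0      # existing stack + delivery cost per user
margin = (bundle_price - current_cost) / bundle_price          # 40%

# A vendor wants into the bundle at $2.50/user/month.
new_tool_cost = 2.5
margin_after = (bundle_price - current_cost - new_tool_cost) / bundle_price
```

Because the bundle price is fixed, every new tool comes straight out of margin unless something else is removed, which is exactly why getting into the bundle is harder than vendors expect.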
Service Quality Is the Product
SMBs want to be secure. They want minimal disruption. And when something goes wrong, they want a real human who knows what they are doing. Not tier 1 scripts. Not delays during an active incident. Good MSSPs prepare the customer during onboarding. They map critical systems, define escalation paths, understand what can be taken offline, and capture credentials and architecture details. They remove the guesswork from the moment the incident starts.
Billing Needs To Be Simple
One of the fastest ways to lose customers is confusing invoices. Customers want to understand what they pay for. Surprises create distrust. The MSSPs that retain well keep billing predictable, transparent, and boring.
Own the Response, Not Just the Alert
An MDR or MSSP that only notifies customers creates frustration. The provider must take the customer through remediation. For SMBs, response often means restoring operations, identifying the entry point, and closing the gap. If the MSSP cannot do this internally, it must have reliable partners.
How Rollups Actually Create Value
Rollups only work when there is a clear thesis. Some focus on platform unification and a single delivery model. Others focus on professionalizing the business with better hiring, benefits, pricing, and operational rigor. Both paths can work. But they require patience and real operating muscle.
The fastest way to build a defensible platform is often not direct MS(S)P sales but embedding into existing security vendors that already sit in the bundle. Winning a technology alliance with an EDR, MDR, or firewall provider puts you into hundreds of MSPs without forcing each of them to make a new buying decision.
Cross-border rollups in Europe introduce more complexity. Language and local relationships matter. Regulation varies. Centralizing delivery is possible, but customer interaction often stays local. A standardized platform can still work if the ICP is consistent across regions.
The Microsoft Factor
Many SMBs already own security features through M365. Ignoring this leads to bloated stacks and poor pricing. Smart MSSPs align their offering with what customers already have and fill the real gaps.
The Bottom Line
Building a strong MSSP is not mysterious. It requires a defined ICP, a standardized security bundle, disciplined delivery, true incident readiness, transparent billing, and the ability to take customers all the way to resolution. The providers that do these things consistently are the ones worth scaling. Investors often chase the rollup story, but the real value sits inside the boring operational fundamentals that most of the market never gets right.
Every few years in security, a category shows up that makes you think: “This market should have never existed.”
The “security data pipeline / data fabric / routing” universe is exactly that. Impressive companies in the space, smart founders, great execution, and (thank you, Observo!) great exits already. But the fact that there is a market here is the real indictment. This category is nothing more than a gap SIEM vendors left wide open. And the pipelines walked right in.
A Market That Shouldn’t Exist
Let’s be honest: Splunk, Elastic, Sentinel, Exabeam… they all ignored the ingest problem for too long. Cost, routing, shaping, tiering — none of it was solved cleanly. So Cribl et al solved it for them. But here’s the twist. By solving it, they also became the neutral abstraction layer. The thing sitting between customers and their SIEM. That layer is now the switching fabric. It isn’t just “optimize your Splunk bill.” It’s:
Reduce SIEM ingestion.
Store everything in our “cheap” data lake.
Oh, and here’s some lightweight analytics while you’re here.
Or how about you go ahead and try out another SIEM? We can easily forward your data to multiple places while you evaluate moving away and then switching in a matter of hours.
That’s the Trojan Horse. You invite it in to help. And suddenly it controls the keys to the castle.
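A toy version of why the routing layer holds the keys: once the pipeline owns fan-out, pointing data at a second SIEM for an evaluation is a one-line config change. Destination names and the event shape are hypothetical, not any vendor's API:

```python
# Hypothetical routing table in the spirit of a pipeline product's config.
# Adding "siem_trial" next to "siem_current" is the entire switching cost.
ROUTES = [
    {"match": lambda e: e["severity"] >= 7, "dest": ["siem_current", "siem_trial"]},
    {"match": lambda e: True,               "dest": ["cheap_lake"]},  # everything else
]

def route(event):
    """Return the destinations for an event; first matching rule wins."""
    for rule in ROUTES:
        if rule["match"](event):
            return rule["dest"]
    return []
```

The SIEM never sees this table; whoever owns it can duplicate, redirect, or starve any downstream destination, which is exactly the control-plane position described above.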
History Is Repeating Itself
We’ve seen this play before:
UEBA -> first standalone products slowly morphed into adding data stores, analytics, and then became full SIEMs
SOAR -> got absorbed into SIEM
Ingest pipelines -> now becoming lakes -> and eventually a SIEM
Cribl already has Cribl Lake. Give it time and it becomes a SIEM-lite. Then a SIEM.
This is the cycle: Start as an add-on -> become indispensable -> become the platform.
We keep acting surprised. But it’s the same movie every time. And again, keep thinking about the switching costs. This layer enables every customer to easily evaluate new solutions and switch over fairly easily.
If You’re Splunk… I mean Cisco…
You’re one of the few players that can still turn this around — if you execute sharply and fast.
Here’s what Splunk must own again:
Reclaim the ingest pipeline.
Make cost the advantage, not the penalty.
Federate search across data lakes natively. (I think you are almost there)
Make tiering and reduction a first-class feature.
Kill the routing layer through pure convenience.
Figure out your real-time story. CrowdStrike is leaning hard into messaging about how fast attackers move these days, and a batch approach won’t work anymore.
If Splunk doesn’t own the control plane, Cribl will. And once you lose the control plane, you lose the customer. No matter how good your detection content is. Cisco gives Splunk a rare opportunity: distribution, integration leverage, and a chance to fix what was ignored for too long. But they can’t let another category grow unchecked. Not again.
My Take
Data pipeline products aren’t the problem. They are the symptom.
The problem is the complacency that let the ingest layer drift outside the SIEM in the first place. Because once a neutral fabric handles all your data, the SIEM becomes swappable. The next SIEM won’t start as a SIEM. It will start exactly where Cribl started: as a pipeline (Abstract Security, anyone?). That’s the Trojan Horse.