April 9, 2026

AI SOC and SIEM Are Being Repriced

Category: Security Market — @ 7:56 am

One of the more interesting messages going into RSA was not just that AI is reshaping security. It was that the market is changing what it rewards. I had the pleasure of attending the Piper Sandler investment day on Monday at RSA, one of my favorite events, where I get to catch up with many friends, meet new security leaders, and get an update on security market conditions.

The story for cybersecurity companies last year was simpler: grow fast, expand inside the account, add modules, and let net revenue retention (NRR) do the talking. The new story looks different:

  • ARR growth expectations have come down from 50% to 30%
  • Gross margin expectations have moved up from 75% to 80%
  • NRR expectations have come down from 120% to 115%
  • GRR (gross revenue retention) expectations have moved up from 88% to 92%
  • Burn multiple expectations have tightened from 1.5x to <1.0x

That may sound like a generic software market shift. I do not think it is. I think it has very specific implications for AI SOC and SIEM. More about that later.

Capital markets changed first

The broader market backdrop matters. Security is still one of the more attractive areas in software, but it is being valued inside a much harder capital markets environment. The IPO window remains narrow. Liquidity for scaled assets is limited. Growth is decelerating across software. And AI is compressing valuations by making forward revenue less credible and product durability more important. That combination changes the conversation from upside to survivability.

Security is still attractive, but the bar is higher

That is why the security market now feels bifurcated. On one side, it still benefits from strong structural demand: geopolitical uncertainty, expanding attack surfaces, and AI itself creating new categories of spend. On the other side, investors are becoming much less willing to underwrite broad TAM stories, multi-year expansion narratives, or “we will grow into the model” margin profiles. Security remains attractive, but the bar is higher.

Private equity has a liquidity problem

Private equity is caught in that tension as well. Large assets are staying private longer because the public market is not offering a clean exit path. That creates pressure on hold periods, return profiles, and liquidity planning. More firms will need to create liquidity through secondaries, continuation vehicles, and other forms of fund-to-fund reshuffling rather than relying on traditional exits. That is not a theoretical issue. It shapes what kinds of assets still look financeable, what kinds of stories buyers will believe, and how aggressively firms can keep marking winners.

M&A gets more strategic from here

At the same time, strategic logic is getting stronger. Large-scale M&A should remain active because buyers still want growth, but they increasingly want growth that is accretive, platform-relevant, and commercially durable. The market is likely to reward scaled platforms, integrated environments, and assets that can either deepen data advantages or simplify the stack. It is likely to punish products that still depend on expensive customer education, loose positioning, or heroic expansion assumptions.

AI increases both risk and defensibility

AI only sharpens that divide. In security, AI is both a disruption risk and a source of defensibility. It creates fear around older architectures and weaker product moats, but it also increases the value of proprietary telemetry, embedded distribution, and control points across the enterprise. The winners are less likely to be those with the loudest AI messaging and more likely to be those with the strongest combination of data, workflow ownership, and commercial leverage.

Platformization is really about data gravity

That is also why platformization matters so much right now. This is not just a consolidation story. It is a data gravity story. The vendor that sees more telemetry, sits in more workflows, and becomes harder to dislodge can improve models faster, distribute new capabilities faster, and defend retention more effectively. In a market that now cares more about GRR, margins, and burn discipline, that matters a lot.

What this means for AI SOC and SIEM

This is where the implications for SIEM and AI SOC come into focus. The category is seeing real pressure from both sides: incumbent platforms facing pricing and architectural questions, and newer entrants offering better workflows, AI-native interfaces, and more agentic operating models. But the long-term winners may not be the vendors with the sharpest demo. They may be the ones that combine durable retention, meaningful use cases, demonstrable security outcomes, and enough platform surface area to remain central as the security stack becomes more automated and more agent-driven.

Source: Piper Sandler Keynote Deck

April 6, 2026

AI Is Becoming a Company Operating System Layer

During my engagements with various private equity and venture capital firms, I see a clear shift. The question showing up more and more in due diligence is no longer, “What is your AI strategy?”

It is: “How far along are you in rebuilding the company around AI?”

That is a different question.

It applies to startups and incumbents alike. It applies to security companies, SaaS vendors, MSPs, and a lot of businesses outside those markets too. The point is no longer to add a few AI features, automate one workflow, or give employees access to a chatbot. The point is to rethink how the company actually operates.

AI Should Sit Under Every Corporate Function

The companies that will look strongest over the next few years are the ones treating AI as an operating system layer across the business.

That means product development, service delivery, sales and marketing, customer success, and finance and operations are all being reworked with AI in mind. Not as separate experiments, but as connected systems.

The important shift is not “where can we use AI?” It is “how should this function work if AI is built into the process from the start?”

That usually leads to a broader redesign. Workflows get compressed. Handoffs change. Data gets linked across teams. Software that used to just record tasks between humans starts becoming an orchestration layer between people and AI agents.

AI on Top Is Not Enough

Most companies are still treating AI like a feature layer. They add a copilot. They automate a few tasks. They run a few pilots in sales or support. Then they talk as if they have become an AI company. They have not.

If AI is going to matter as much as people claim, then it cannot live in isolated tools and side projects. It has to sit underneath the company as an operating layer. Product, service delivery, sales, marketing, customer success, finance, and operations all need to be rethought with AI built in from the start.

That is the real shift. Not AI as garnish. AI as infrastructure.

In practice, this means a company’s core operating logic can no longer live in forgotten decks, static docs, and tribal memory. Vision, mission, strategic priorities, ICP, and go-to-market motions need to be embedded into the AI layer itself so teams can interact with them in daily work (literally let them chat with these pieces of information via Slack!). The system should be able to explain the strategy, test whether execution matches it, and keep the company aligned as it changes. If that layer does not exist, most companies are still operating on fragments.
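To make that concrete, here is a minimal sketch of what a chat-reachable strategy layer could look like. Everything in it is illustrative: the Slack slash command, the endpoint name, and the keyword lookup are stand-ins for whatever retrieval and model stack a company actually runs.

```python
# Minimal sketch: exposing company strategy as a chat-queryable layer.
# Assumes a Slack slash command (e.g. /strategy) configured to POST here;
# the document store and retrieval logic are deliberately simplistic stand-ins.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Stand-in for the embedded "operating logic": vision, ICP, GTM motions, priorities.
STRATEGY_DOCS = {
    "icp": "Ideal customer profile: mid-market security teams running a hybrid SOC.",
    "gtm": "Go-to-market: land with AI triage, expand into detection engineering.",
    "priorities": "FY26 priorities: retention over logo growth, burn multiple under 1.0x.",
}

def retrieve(question: str) -> str:
    """Naive keyword retrieval; a real system would use embeddings plus an LLM."""
    q = question.lower()
    hits = [text for key, text in STRATEGY_DOCS.items() if key in q]
    return "\n".join(hits) or "No matching strategy document found."

@app.route("/slack/strategy", methods=["POST"])
def strategy_command():
    # Slack slash commands send the user's text as a form field named "text".
    question = request.form.get("text", "")
    return jsonify({"response_type": "in_channel", "text": retrieve(question)})

if __name__ == "__main__":
    app.run(port=3000)
```

A real implementation would obviously sit on better retrieval and a model rather than keyword matching, but the point is the same: the operating logic becomes something teams can query in daily work instead of something buried in a deck.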

This Is Now a Capital Question

This is also why the conversation is changing in private equity and VC diligence.

We are not just looking for AI messaging anymore. We are looking for evidence that the operating model is changing. Is the company shipping faster? Is service delivery getting more leverage? Are teams better connected? Is software being used to orchestrate work between humans and AI agents rather than just record tasks? Is management actually rebuilding the business, or are they still presenting AI as an add-on?

Those questions now matter directly to competitiveness.

A company that keeps the old operating model and bolts AI on top will lose to one that rebuilds around it properly. The latter will move faster, learn faster, and eventually operate at a different level of efficiency.

The Companies That Wait Will Pay For It

I think this is becoming a funding imperative.

Before raising capital, before pursuing a sale, and before the board forces the discussion, management teams need to be doing the hard work of redesigning the company around AI.

Because the market is not going to wait for slow adopters to get comfortable. The companies that embrace AI as a true operating layer will look more scalable, more durable, and more investable. The ones that do not will increasingly look like they are running yesterday’s model in a market that has already moved on.

April 2, 2026

If AI Becomes the User, What Happens to the SIEM?

RSAC 2026 made one thing very clear to me: the market is moving fast, but it is still deeply confused. The big announcements from Google, Splunk, and Databricks all point in the same direction. Security operations are becoming more agentic, more API-driven, and more automated. But most of the category still looks crowded, early, and only lightly differentiated.

The interesting part is not that everybody now has an AI story. It is where the pressure is landing: attack speed, active response, and the possibility that AI itself becomes the primary user of the security stack.

TL;DR

  • Attacks are now fast enough that human-speed response is no longer a sufficient default.
  • That will push the market toward active response, which is useful but also dangerous if the control logic is not deterministic enough.
  • Most AI SOC vendors still sound similar because many of them sit on top of existing SIEMs and alert streams rather than changing the underlying detection or data architecture.
  • The big SIEM vendors are moving, and one major EDR/SIEM vendor is expanding AI security into on-prem and sovereign environments.
  • If AI becomes the user of security products, the UI matters less, the API matters more, and the economics of expensive SIEM platforms get harder to defend.

Attacks are getting faster

This is the part of the market I think people are still underestimating. CrowdStrike’s 2026 threat report says the average eCrime breakout time dropped to 29 minutes in 2025, and the fastest case it observed was 27 seconds. Databricks used its Lakewatch announcement to make a related point from the vulnerability side, citing research that mean time to exploit has fallen from 23.2 days in 2025 to 1.6 days in 2026.

That changes what matters in the SOC. A lot of SIEM workflows still assume there is time to search, enrich, discuss, and decide. That model was already strained. It gets worse when attacks speed up and when the adversary is using AI to compress its own loop. Search still matters, but a search-centric operating model is not enough if the environment can be compromised end to end in under an hour.

The obvious answer is more active response. The problem is that this is where things get dangerous. If teams start handing more containment and remediation decisions to AI before the systems are ready, we are going to see more self-inflicted outages. The market is moving there anyway, because the alternative is to keep defending at human speed against machine-speed attacks. SOAR was supposed to close part of that gap and clearly did not.

AI SOC is still confusing and mostly sounds the same

That was probably my main emotional reaction leaving RSAC: confusion. There were simply too many vendors with very similar messaging. RSAC says the conference had more than 600 exhibitors this year. I could not independently validate an exact count of 36 AI SOC vendors from public RSAC data, but “roughly three dozen” felt directionally right from the floor, and many of them sounded remarkably similar.

The common pitch was familiar: reduce alerts, triage faster, investigate faster, give the analyst a copilot, automate parts of response. Some of that is clearly useful. But a lot of it still feels like a layer on top of the existing SIEM rather than a rethink of the detection stack itself. If the AI mostly sits on top of alert streams coming out of a legacy backend, then it may improve analyst productivity without materially fixing false negatives, brittle detections, or poor data design upstream.

That is also why I do not think most of this market is really using LLMs in a deep way yet. In most cases, the models are being used for triage, recommendations, summarization, and analyst assistance. That is very different from using LLMs for real detection, broader SOC operations, or meaningful changes to the underlying architecture.
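To illustrate the pattern, here is a deliberately minimal sketch of that "copilot on top of the alert stream" shape. The alert fields and the call_llm helper are hypothetical stand-ins, not any vendor's API; what matters is what the code never touches.

```python
# Minimal sketch of the common "copilot on top of the SIEM" pattern:
# the model only sees alerts the existing backend already produced, so it can
# speed up triage but cannot fix false negatives or detection logic upstream.
# Alert fields and the call_llm() helper are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Alert:
    rule_name: str
    severity: str
    entity: str
    raw_events: list[str]

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model API the vendor actually uses."""
    raise NotImplementedError

def triage(alert: Alert) -> str:
    prompt = (
        "You are a SOC analyst assistant. Summarize this alert, assess likely "
        "impact, and recommend next investigation steps.\n"
        f"Rule: {alert.rule_name}\nSeverity: {alert.severity}\n"
        f"Entity: {alert.entity}\nEvents:\n" + "\n".join(alert.raw_events)
    )
    return call_llm(prompt)

# Note what is missing: the function never touches raw telemetry, detection
# content, or data routing. It inherits whatever the legacy backend emits.
```

Everything upstream of the alert, including the detection content and the data design, stays exactly where it was.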

For a more complete framework of where AI SOC and SIEM should be heading, see raffy.ch/SIEM.

That is why so much of the category feels undifferentiated. The interfaces are different, the branding is different, and the demo flows are different, but the center of gravity often looks the same. The latest platform announcements only reinforce that point. If the platform owner adds the agentic layer too, the vendors sitting on top of Chronicle, Splunk, or similar platforms have a much harder moat to defend.

The architecture is shifting

By this point, the vendor movement is established. The more interesting question now is what it does to architecture. SentinelOne adds another signal here by pushing more AI security capability into on-prem, sovereign, and air-gapped environments.

Put together, that points to a broader market shift. Storage matters more. Data routing matters more. Sovereignty and local control matter more. Cheap data lakes, strong analytics layers, and flexible orchestration matter more. Traditional SIEM UI matters less than it used to, and that matters not just for SIEM vendors but also for MDRs that differentiated by putting an AI layer on top of someone else’s backend.

That is also why Splunk’s cost model keeps coming back into the conversation. Splunk is powerful and mature, but if the agent becomes the main consumer of the system, customers start asking a different question: am I paying for the analytics engine, or am I paying for UI, workflow, and operating complexity that an agent increasingly does not care about?

If AI becomes the user, the stack changes

The most important implication may be economic, not just operational. Security products were built for human analysts. The value lived in the UI, the workflow, the search language, the dashboard, and the services needed to make all of that usable. But what happens if the real user becomes Claude Code, Codex, Gemini, or some internal agent instrumented across the entire security stack? Daniel Miessler has been arguing that companies and products increasingly become APIs. Security looks like one of the clearest versions of that shift.

In that world, every product starts to look more like an API than an application. That is exactly where the recent announcements are heading. LimaCharlie’s new lc-soc release is a concrete implementation of the same idea: an open-source “agentic SOC as code” where AI agents are coordinated through the cases system and D&R rules, then deployed and versioned like infrastructure.
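To make "as code" tangible, here is a generic sketch of the idea. It is explicitly not lc-soc's actual schema or LimaCharlie's rule format, just an illustration of detection and response logic expressed as declarative definitions that live in version control and get validated before deployment.

```python
# Generic "SOC as code" sketch (illustrative only, not any vendor's schema):
# detection and response logic lives in version control and is validated and
# deployed like infrastructure, rather than clicked together in a console.
RULES = [
    {
        "name": "suspicious-lsass-access",
        "detect": {"event": "process_access", "target_process": "lsass.exe",
                   "exclude_signers": ["Microsoft Windows"]},
        "respond": [
            {"action": "open_case", "severity": "high"},
            {"action": "ask_agent", "task": "summarize related host activity"},
        ],
    },
]

def validate(rule: dict) -> bool:
    """The kind of check a CI pipeline might run before any rule is deployed."""
    return bool(rule.get("name")) and "detect" in rule and "respond" in rule

assert all(validate(r) for r in RULES)
```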

If AI becomes the primary user, the UI does not disappear, but it stops being the center of gravity. The agent does not care about your console. It cares about whether the data is accessible, whether the schema is consistent, whether the analytics layer is fast, whether the permissions model is clean, and whether the actions are safe to orchestrate.

That creates real pressure on expensive SIEM economics. If the agent can query multiple tools directly, the premium attached to a deeply monetized UI gets harder to justify. The market may move toward something simpler: cheap storage, a strong analytics layer, and an orchestration layer on top. That does not mean incumbents disappear. It means their value proposition changes. If AI becomes the user, the winners may be the vendors with the best APIs, control points, and data access model.
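A rough sketch of what that looks like from the agent's side, with hypothetical endpoints and tokens throughout: the agent cares about a small, consistent tool surface, not about any of the consoles behind it.

```python
# Minimal sketch of "the agent is the user": value shifts from consoles to
# clean, consistent APIs the agent can query directly. Endpoint names, schemas,
# and tokens here are hypothetical, not any vendor's real API.
import requests

TOOLS = {
    # The agent does not care which vendor is behind each endpoint,
    # only that the schema is consistent and the query is fast.
    "search_logs":   {"url": "https://datalake.example.com/query",  "auth": "token-a"},
    "host_activity": {"url": "https://edr.example.com/api/events",  "auth": "token-b"},
    "isolate_host":  {"url": "https://edr.example.com/api/isolate", "auth": "token-b"},
}

def call_tool(name: str, payload: dict) -> dict:
    tool = TOOLS[name]
    resp = requests.post(tool["url"], json=payload,
                         headers={"Authorization": f"Bearer {tool['auth']}"},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example usage an agent loop might generate while investigating:
# call_tool("search_logs", {"query": "user=svc-backup AND action=login", "last": "1h"})
```

Nothing in that sketch needs a dashboard, a search UI, or a proprietary console workflow, which is exactly why UI-heavy pricing gets harder to defend.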

Evals become part of the control layer

The next problem is trust and determinism. Once you push AI beyond triage and recommendations and let it make or recommend more consequential changes, you need a way to keep the system reliable. That is where eval loops come in.

I heard Josh Saxe make this point at RSAC in the context of AI-first infrastructure management: if agents are going to make changes in live systems, you need strong evaluation around them to keep behavior bounded and repeatable enough to trust. I think the same logic applies directly to security operations. The market is moving toward active response, but the models themselves were not built around strict determinism.

That means the answer is not blind autonomy. It is more likely a layered system where adaptive AI sits inside clearer control boundaries, with evals, policy, and deterministic automation around it. Evals stop being an AI engineering detail and become part of the security control layer itself.
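As a rough sketch of that layered shape, with illustrative action names, thresholds, and eval cases: the model proposes, a deterministic policy gate decides, and an eval harness replays known scenarios before any change to the model or prompts ships.

```python
# Minimal sketch of evals as a control layer: an adaptive model proposes an
# action, but a deterministic policy gate and an eval harness decide whether it
# can execute. Action names, thresholds, and eval cases are illustrative.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str          # e.g. "isolate_host", "disable_account", "block_ip"
    target: str
    confidence: float  # model-reported confidence, 0.0 to 1.0

# Deterministic policy: what the agent may do on its own, and where humans stay in the loop.
AUTO_APPROVED = {"block_ip"}
REQUIRES_APPROVAL = {"isolate_host", "disable_account"}
MIN_CONFIDENCE = 0.9

def policy_gate(action: ProposedAction) -> str:
    if action.confidence < MIN_CONFIDENCE:
        return "reject"
    if action.kind in AUTO_APPROVED:
        return "execute"
    if action.kind in REQUIRES_APPROVAL:
        return "queue_for_human"
    return "reject"

# Eval harness: replayed scenarios with known-correct outcomes, run before any
# model or prompt change ships, to keep behavior bounded and repeatable.
EVAL_CASES = [
    (ProposedAction("block_ip", "203.0.113.7", 0.95), "execute"),
    (ProposedAction("isolate_host", "dc-01", 0.97), "queue_for_human"),
    (ProposedAction("disable_account", "jsmith", 0.55), "reject"),
]

def run_evals() -> bool:
    return all(policy_gate(action) == expected for action, expected in EVAL_CASES)

assert run_evals()
```

The specifics will differ everywhere, but the shape is the point: the adaptive part stays inside boundaries that are themselves deterministic and testable.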