January 16, 2026

How AI Impacts the Cyber Market and The Future of SIEM

Security has always moved in waves. Not because we suddenly get smarter, but because we learn from past mistakes, identify gaps, hit limits, need to protect new technologies, and then go and do our best to solve those new security challenges with the technologies at hand.

The era of AI (let’s be clear, we have had AI for a long time; what I mean specifically is the advent of Large Language Models) has shifted many industries, but it has reshaped security in a particularly revealing way. AI did not just give us new tools to solve security problems. It invited innovators and entrepreneurs to revisit pretty much every security technology to see if LLMs could be useful to address some of the existing challenges. But that’s not where things stopped. More interestingly, some teams used this moment to question whether the underlying approaches themselves still made sense at all. Not just whether LLMs could help, but whether modern data architectures, different telemetry choices, and different enforcement models could fundamentally change outcomes.

That is what has triggered a real wave of new companies in cyber, including across markets that many considered mature, or even stagnant, like SIEM.

The Five Phases We Just Lived Through

Let’s take a non-scientific look at how major security approaches evolved over the past 25 years. This is not exhaustive, but it helps explain where we are today.

1. Network-Centric Prevention

Many moons ago, we started with firewalls, IDS, and later IPS. The model was simple. Look at packets. Stop bad things. It worked until attackers learned to look normal.

2. More Data, Centralized, Higher-Level Insights

When network telemetry created too many false positives, we added vulnerability data and authentication events and fed them into a SIEM to correlate. The results were “mixed”. Fortunately for the SIEM market, compliance and audit requirements emerged, mandating long-term log retention. This gave SIEM a durable justification, even when its security value was debated. SIEM became indispensable for visibility and forensics, but increasingly disconnected from real-time decision making.

3. Back to Prevention and Response

As SIEM alert volumes exploded and analysts could not keep up, the industry pivoted. EDR. NDR. SOAR. We all know how that played out. NDR never truly broke out. EDR became a major category. SOAR largely collapsed back into SIEM. And eventually, most large EDR vendors added a SIEM to their portfolio.

This was not convergence by design. It was convergence driven by operational gravity.

4. AI Triggers a Reality Check

LLMs made many believe they could simply layer AI on top of broken architectures. Some startups did exactly that. They will likely not be the long-term winners.

The more interesting group of companies used AI as a forcing function to re-examine first principles. What data actually matters? What can realistically be prevented at the edge? What must still be correlated centrally? What is structurally broken in SOC workflows? Where have we been compensating for bad architecture with human labor? Crucially, many of these answers have little to do with LLMs themselves, and much more to do with data fidelity, placement of control, and modern system design.
This is where the real innovation is happening.

5. The Convergence

We are now in a phase where prevention is moving back to the edge, while analytics and orchestration remain central. Endpoints are smarter. Browsers are instrumented. Networks are being re-observed. Context is finally treated as a first-class input.

But there is still a SOC. There is still a central nervous system that correlates, reconstructs, explains, orchestrates, and proves what happened. Call it SIEM, security analytics, XDR, or AI SOC. The name is irrelevant. The function is not.

In parallel, we are realizing that we can push enforcement and prevention back to the edge. Wherever we have enough information, execute at the edge. Where we don’t, call out to your central nervous system. To your brain. The brain (your SIEM) that understands, at any moment in time, the risk and function of every entity in your network, and uses that information for decision making.
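The edge/brain split described above can be made concrete with a minimal sketch. All names, scores, and thresholds here are hypothetical illustrations, not any vendor's API: the point is just that the edge decides when it has enough local context, and defers to the central risk view when it does not.

```python
# Illustrative sketch of "enforce at the edge where possible, ask the
# brain where not". All identifiers and scores are made up.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EdgeEvent:
    entity: str                  # e.g. hostname or user
    action: str                  # e.g. "outbound_connection"
    local_risk: Optional[float]  # risk score if the edge already knows it

# Central "brain": entity -> current risk score, maintained by the SIEM.
CENTRAL_RISK = {"laptop-42": 0.9, "build-server": 0.1}

BLOCK_THRESHOLD = 0.8

def decide(event: EdgeEvent) -> str:
    """Enforce at the edge when possible; otherwise call the brain."""
    if event.local_risk is not None:
        # Enough information at the edge: decide immediately, no round trip.
        return "block" if event.local_risk >= BLOCK_THRESHOLD else "allow"
    # Not enough local context: call out to the central nervous system.
    central_risk = CENTRAL_RISK.get(event.entity, 0.5)  # unknown -> neutral
    return "block" if central_risk >= BLOCK_THRESHOLD else "allow"

print(decide(EdgeEvent("laptop-42", "outbound_connection", None)))     # block
print(decide(EdgeEvent("build-server", "outbound_connection", None)))  # allow
```

The design point is the fallback, not the thresholds: the edge stays fast for the common case, and only the ambiguous cases pay the latency of consulting the central view.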

Why AI SOC Will Collapse Back Into SIEM

Many startups brand themselves as “AI SOC”. What do they actually do?

They primarily ingest alerts from EDR, NDR, SIEMs, and cloud platforms, then attempt to determine which ones matter. They add context, apply behavioral analysis, and suppress false positives.

In other words, they attempt to do what SIEM, UEBA, and SOAR were always supposed to do, just with better math and more compute. However, there is one problem. Many of the AI SOC contenders operate on alert streams. That means they start from already lossy, opinionated data. Real behavioral analysis does not happen on top of alert streams. It lives in raw telemetry. Email flows. Network sessions. Browser actions. Endpoint system behavior.

Once an AI SOC platform decides to ingest that raw data directly, it immediately recreates the ingestion, normalization, storage, and correlation problems that SIEM already exists to solve. At that point, the separation no longer makes sense. This is exactly why UEBA and SOAR collapsed back into SIEM. And it is why AI SOC will do the same.

There will be one place where data is reconciled, correlated, and turned into decisions. That place will increasingly run on federated, near-real-time architectures rather than twenty-year-old indexing engines. But the function remains the same. It needs to be one system, not many, and it does not care what you call it.

The Shift Is Not Just Technical. It Is Organizational.

What is interesting to note about these new entrants in the SIEM or security analytics space is not just their security architecture. It is the company architecture. Modern security startups are being built on AI-native operating systems: sales calls are captured and analyzed not just by sales, but product teams mine them for competitive signals, marketing uses them to refine messaging, and engineering uses them to prioritize roadmaps. This is not a tooling upgrade. It is a fundamentally different operating model.

Imagine a system where the vision, mission, strategy, and priorities are centrally maintained, updated and codified. Every function consumes that shared intelligence to drive decisions, messaging, and execution. This does not just improve alignment. It dramatically compresses learning cycles and execution speed. And that, more than any individual feature, may be the hardest thing for incumbents to replicate.

December 17, 2025

Why Venture Capital Is Betting Against Traditional SIEMs

And why most of the arguments do not hold up under scrutiny

Over the past 18 to 24 months, venture capital has flowed into a fresh wave of SIEM challengers including Vega (which raised $65M in seed and Series A at a ~$400M valuation), Perpetual Systems, RunReveal, Iceguard, Sekoia, Cybersift, Ziggiz, and Abstract Security, all pitching themselves as the next generation of security analytics. What unites them is not just funding but a shared narrative that incumbent SIEMs are fundamentally broken: too costly, too siloed, too hard to scale, and too ineffective in the face of modern data volumes and AI-driven threats.

This post does not belabor each startup’s product. Instead it abstracts the shared assertions that justify recent funding and then stresses them to see which hold up under scrutiny. I am not defending incumbents. I am trying to separate real gaps from marketing (and funding) narratives.

The “SIEM is Broken” Narrative

A commonly cited industry report claimed that major SIEM tools cover only about 19% of MITRE ATT&CK techniques despite having access to data that could cover ~87%. That statistic is technically interesting but also deeply misleading: ATT&CK technique coverage is not an operational measure of detection quality or effectiveness, it primarily reflects rule inventory and tuning effort. Nevertheless, it has become a core justification for the “SIEM is obsolete” narrative. I wasn’t able to find the original report to validate what and how they tested, but I have seen SIEMs that cover much more and have big detection teams taking care of these issues.

The Five Core Claims Driving the Market Thesis

Across decks, interviews, and marketing copy, I picked five recurring themes that define what these companies think incumbents get wrong and what investors are underwriting as the path forward.

1. “Centralized SIEM architectures no longer scale”

The claim is that forcing security telemetry into a centralized repository is too expensive and too slow for modern enterprises generating terabytes of logs every day. The proposed fixes include federated queries, analyzing data where it lives, and decoupling detection from ingestion so you never have to move or duplicate all your data.

The challenge is that correlation, state, timelines, and real-time detection require locality. Distributed query engines excel at ad-hoc exploration but are not substitutes for continuous detection pipelines. Federated queries introduce latency, inconsistent performance, and complexity every time you write a detection. Normalization deferred to query time pushes complexity into every rule. You do not eliminate cost, you shift it to unpredictable query execution and compute costs that spike precisely when incidents occur. Centralizing data isn’t a flaw; it is a tradeoff that supports correlation engines, summary indexes, entity timelines, and stateful detections that distributed query models struggle to maintain in real time. In fact, if the SIEM were to store the data in the customer’s S3 bucket, you can keep costs somewhat under control.
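To see why continuous detection wants locality, consider a classic stateful rule: "five or more failed logins followed by a success within ten minutes". This is a toy sketch (not any product's engine, and the thresholds are made up): every incoming event mutates per-entity state, which is cheap when the state lives next to the stream and painful when every event has to round-trip through a federated query layer.

```python
# Toy stateful detection: N failed logins followed by a success within a
# sliding window. Per-entity state must be updated on every event.

from collections import defaultdict, deque

WINDOW = 600        # seconds (hypothetical)
FAIL_THRESHOLD = 5  # failures required before a success fires the rule

class BruteForceDetector:
    def __init__(self):
        # user -> timestamps of recent failed logins
        self.failures = defaultdict(deque)

    def on_event(self, user: str, outcome: str, ts: float) -> bool:
        """Return True when the stateful pattern completes."""
        q = self.failures[user]
        # Expire failures that fell outside the sliding window.
        while q and ts - q[0] > WINDOW:
            q.popleft()
        if outcome == "failure":
            q.append(ts)
            return False
        # A success: fire only if enough recent failures preceded it.
        fired = len(q) >= FAIL_THRESHOLD
        q.clear()
        return fired

d = BruteForceDetector()
for t in range(5):
    d.on_event("alice", "failure", float(t))
print(d.on_event("alice", "success", 6.0))  # True: pattern completed
```

The deque per user is the "state" in question: a centralized pipeline keeps it in memory next to the event stream, while a query-time model would have to recompute or reshuffle it on every evaluation.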

2. “SIEM pricing is broken because it charges by data volume”

A frequent refrain is that incumbent SIEMs penalize good security hygiene by tying pricing to ingestion volume, which becomes untenable as data grows. The proposed response is pricing models untethered from volume, open storage, and customer-controlled compute.

The challenge is that cost doesn’t vanish because you hide volume. Compute, memory, enrichment, retention, and query costs all remain. If pricing is detached from ingestion, it typically reappears as unpredictable query charges, usage tiers, or gated features. Volume is not an arbitrary metric; it correlates with the cost a vendor (or customer) incurs. Treating cost as orthogonal to data volume does not make it disappear; it just blinds you to a key cost driver. I have dealt with all the pricing models: by user, by device, by volume, … in the end, I needed to make my gross margins work. Guess who pays for that?

3. “SIEM detections are weak because they rely on bad rules”

New entrants commonly assert that traditional SIEM rules are noisy, static, and unable to keep up with modern threat techniques. Solutions offered include natural-language detections, detections-as-code, continuous evaluation, and AI-generated rules.

The challenge is that many of these still sit atop the same primitives. For example, SIGMA is widely used as a community detection language, but it is fundamentally limited: it is mostly single-event, cannot express event ordering or causality, has no native temporal abstractions or entity-centric modeling, and cannot natively express thresholds, rates, cardinality, or statistical baselines. Wrapping these limitations in AI or “natural language” does not change the underlying detection physics. You can improve workflow and authoring experience, but you do not fundamentally invent a new class of detection with the same primitives. And guess what, large vendors have pretty significant content teams – I mean detection engineering teams – often tied into their threat research labs. Don’t tell me that a startup has found a more cost-effective and higher-efficacy way to release detection rules. If that were the case, all these large vendors would be dumb to operate such large teams.
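The single-event limitation can be made concrete. The contrast below is an illustrative Python sketch, not real SIGMA tooling: the first function is the kind of per-event predicate a SIGMA rule can express, while the second (distinct-destination cardinality per source host, with a hypothetical threshold) requires state across events that a single-event model has no primitives for.

```python
# Illustrative contrast between a single-event predicate and a stateful,
# entity-centric cardinality detection. Names and thresholds are made up.

from collections import defaultdict

# 1. What a SIGMA-style rule can express: a predicate over one event.
def single_event_match(event: dict) -> bool:
    return (event.get("process") == "powershell.exe"
            and "-enc" in event.get("cmdline", ""))

# 2. What it cannot: "one host contacting an unusually high number of
#    distinct destinations" needs accumulated state per entity.
class FanOutDetector:
    def __init__(self, max_distinct: int = 50):
        self.max_distinct = max_distinct
        self.seen = defaultdict(set)  # src host -> distinct destinations

    def on_flow(self, src: str, dst: str) -> bool:
        self.seen[src].add(dst)
        return len(self.seen[src]) > self.max_distinct

det = FanOutDetector(max_distinct=3)
alerts = [det.on_flow("host-a", f"10.0.0.{i}") for i in range(5)]
print(alerts)  # [False, False, False, True, True]
```

No amount of natural-language sugar over the first form produces the second; the second needs a stateful engine underneath, which is the "detection physics" point above.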

4. “SIEMs lack context, causing false positives”

The argument here is that existing SIEMs flood analysts with alert noise because they lack deep asset context, threat intelligence, or behavioral understanding. New entrants promise tightly integrated TI feeds, cloud context, or built-in behavior analytics.

Context integration has been a focus of incumbent platforms for years. The real hard problem is not accessing context but operationalizing it without drowning analysts. More feeds often mean more noise unless you have mature enrichment pipelines, entity resolution, and risk scoring built into rules that understand multi-stage attack sequences. Adding more sources does not automatically improve signal quality. The noise problem is as much about rule quality and use-case focus as it is about context availability. Apply the same argument here with regards to the quality of threat feeds that I outlined in the last item.

5. “AI-native SIEMs will finally fix detection and response”

Perhaps the most seductive claim is that incumbent SIEMs were built for a pre-AI world and that new platforms built with agentic AI at every layer will finally crack automation, detection, and investigation.

The challenge is that AI does not eliminate the need for structured, high-quality, normalized data, or explainability, or deterministic behavior in high-risk contexts. AI can accelerate workflows, assist with investigation, and suggest hypotheses, but it does not replace the need for precise, reproducible, and auditable detection logic. Most AI-native claims today are improvements in UX and speed, not architectural breakthroughs in detection theory.

The Uncomfortable Conclusion

VC money is flowing because SIEM is operationally hard, expensive, and often unpopular with SOC teams. There is real pain and real gaps, especially around cost transparency, scaling, and usability. But declaring existing SIEMs obsolete because they are imperfect is not a thesis; it is a marketing slogan.

The core assumptions driving this funding wave deserve scrutiny: centralization is treated as a flaw rather than a tradeoff necessary for continuous detection, pricing complaints get conflated with architectural insights, detection quality is blamed on tooling rather than operational realities, and AI is overstated as a panacea.

On the flip side, here are a couple of directions that should be looked at:

  1. Some of the new entrant SIEMs actually make a dent. They are rebuilding their entire pipelines and storage architecture with modern technologies, not old paradigms. They have a clear advantage and don’t have to deal with millions of lines of tech debt. Using an agentic AI architecture could be quite interesting here.
  2. As the AI SOC emerges – and maybe becomes a reality – we will probably see more and more MCP servers exposing infrastructure information that can be leveraged, from alerts to context to response capabilities. But we’ll need to see how data schemas and all that will evolve.
  3. The one innovation that has already generated some returns for investors is the entire data pipeline world. Companies like Observo (I had the privilege to be an advisor) have truly added something useful to SIEMs, and as I argue in one of my previous blogs, this really needs to become a capability baked into each SIEM out there.

Thanks for the feedback, Jesse!

December 5, 2025

What It Really Takes To Build A Good MSSP

Category: Community, Go To Market, Security Market — @ 7:33 am

Everyone is suddenly looking at MSP and MSSP rollups. Investors, strategics, even VCs. The logic is obvious. Fragmented market, recurring revenue, sticky customer relationships. But the reality is that only a small subset of providers actually operate at a level worth scaling. The difference between an average MSSP and a good one comes down to a few fundamentals.

Start With Focus

Most MSPs never defined who they serve. They grew organically, took whatever customer showed up, and built a toolkit around individual fires rather than a repeatable model. A strong MSSP starts with clarity. Who is the ICP. What problem is being solved. What the operating model looks like for that segment. When this is missing, everything becomes random. Different tools. Different service quality. No leverage.

In practice, the most important segmentation is not the MSP itself, but who the MSP sells to. An MSP serving restaurants or spas has a fundamentally different security maturity, willingness to pay, and regulatory exposure than one serving regional banks, healthcare, or regulated SMBs. Treating them as one market leads to mispriced risk and churn.

Understand the Economics

Many MSPs think software licensing is their main cost. It is not. Labor dominates the model. At ConnectWise, our Service Leadership dataset showed that roughly 20 percent of MSPs were not profitable because they simply did not understand their own cost structure. The best ones hit around 20 to 25 percent EBITDA. They standardize. They price correctly. They run the business with discipline instead of firefighting.

The real margin killer is not the license costs. It is the technician minutes required to install, manage, respond, document, and bill every tool. Every additional product increases operational drag, even if the license is cheap.

Standardized Security Bundles Win

The MSSPs that scale do not let customers choose their own adventure. They define a required stack. If you want to be a customer, you adopt their bundle. This gives consistency, predictability, and actual security outcomes. A typical bundle includes:

• Patch and vulnerability management
• Endpoint protection
• Email security
• Security awareness
• Optional SIEM or MDR depending on the segment

Without standardization, you cannot maintain margins or guarantee service quality. You also make incident response dramatically harder because every environment looks different.

In reality, the bundle is usually sold at a fixed price like $50 to $100 per user per month. Any new security tool must fit inside that number. If it costs $2 to $3 per user, something else must be removed or margin gets cut. This is why getting into the bundle is harder than most vendors expect.
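The bundle arithmetic above is worth running once. The numbers below are hypothetical stand-ins (the post only gives the $50–100 price band and the $2–3 tool cost), but they show the mechanism: at a fixed per-user price, every added tool's license cost comes straight out of margin unless something else is removed.

```python
# Back-of-the-envelope bundle economics with hypothetical numbers.

bundle_price = 75.0          # $/user/month, sold at a fixed price
license_costs = {            # hypothetical per-user/month tool costs
    "patch_vuln_mgmt": 4.0,
    "endpoint_protection": 6.0,
    "email_security": 3.0,
    "security_awareness": 1.5,
    "mdr": 12.0,
}
labor_cost = 30.0            # technician minutes dominate, not licenses

def margin(price, licenses, labor):
    """Gross margin as a fraction of the bundle price."""
    cost = sum(licenses.values()) + labor
    return (price - cost) / price

before = margin(bundle_price, license_costs, labor_cost)
# Add one more $2.50/user tool without removing anything:
license_costs["new_tool"] = 2.5
after = margin(bundle_price, license_costs, labor_cost)
print(f"{before:.1%} -> {after:.1%}")  # every added tool eats margin directly
```

With these stand-in numbers, a $2.50 tool knocks roughly three points off gross margin, which is exactly why getting into the bundle is harder than most vendors expect.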

Service Quality Is the Product

SMBs want to be secure. They want minimal disruption. And when something goes wrong, they want a real human who knows what they are doing. Not tier 1 scripts. Not delays during an active incident. Good MSSPs prepare the customer during onboarding. They map critical systems, define escalation paths, understand what can be taken offline, and capture credentials and architecture details. They remove the guesswork from the moment the incident starts.

Billing Needs To Be Simple

One of the fastest ways to lose customers is confusing invoices. Customers want to understand what they pay for. Surprises create distrust. The MSSPs that retain well keep billing predictable, transparent, and boring.

Own the Response, Not Just the Alert

An MDR or MSSP that only notifies customers creates frustration. The provider must take the customer through remediation. For SMBs, response often means restoring operations, identifying the entry point, and closing the gap. If the MSSP cannot do this internally, it must have reliable partners.

How Rollups Actually Create Value

Rollups only work when there is a clear thesis. Some focus on platform unification and a single delivery model. Others focus on professionalizing the business with better hiring, benefits, pricing, and operational rigor. Both paths can work. But they require patience and real operating muscle.

The fastest way to build a defensible platform is often not direct MS(S)P sales but embedding into existing security vendors that already sit in the bundle. Winning a technology alliance with an EDR, MDR, or firewall provider puts you into hundreds of MSPs without forcing each of them to make a new buying decision.

Cross-border rollups in Europe introduce more complexity. Language and local relationships matter. Regulation varies. Centralizing delivery is possible, but customer interaction often stays local. A standardized platform can still work if the ICP is consistent across regions.

The Microsoft Factor

Many SMBs already own security features through M365. Ignoring this leads to bloated stacks and poor pricing. Smart MSSPs align their offering with what customers already have and fill the real gaps.

The Bottom Line

Building a strong MSSP is not mysterious. It requires a defined ICP, a standardized security bundle, disciplined delivery, true incident readiness, transparent billing, and the ability to take customers all the way to resolution. The providers that do these things consistently are the ones worth scaling. Investors often chase the rollup story, but the real value sits inside the boring operational fundamentals that most of the market never gets right.

December 3, 2025

The Trojan Horse We Let Into the SIEM Kingdom

Category: Log Analysis, Security Information Management — @ 1:47 pm

Every few years in security, a category shows up that makes you think: “This market should have never existed.”

The “security data pipeline / data fabric / routing” universe is exactly that. Impressive companies in the space, smart founders, great execution and (thank you Observo!) great exits already. But the fact that there is a market here is the real indictment. This category is nothing more than a gap SIEM vendors left wide open. And the pipelines walked right in.

A Market That Shouldn’t Exist

Let’s be honest: Splunk, Elastic, Sentinel, Exabeam… they all ignored the ingest problem for too long. Cost, routing, shaping, tiering — none of it was solved cleanly. So Cribl et al. solved it for them. But here’s the twist. By solving it, they also became the neutral abstraction layer. The thing sitting between customers and their SIEM. That layer is now the switching fabric. It isn’t just “optimize your Splunk bill.”
It’s:

  • Reduce SIEM ingestion.
  • Store everything in our “cheap” data lake.
  • Oh, and here’s some lightweight analytics while you’re here.
  • Or how about you go ahead and try out another SIEM? We can easily forward your data to multiple places while you evaluate moving away and then switching in a matter of hours.

That’s the Trojan Horse. You invite it in to help. And suddenly it controls the keys to the castle.

History Is Repeating Itself

We’ve seen this play before:

  • UEBA -> first standalone products slowly morphed into adding data stores, analytics, and then became full SIEMs
  • SOAR -> got absorbed into SIEM
  • Ingest pipelines -> now becoming lakes -> and eventually a SIEM

Cribl already has Cribl Lake. Give it time and it becomes a SIEM-lite. Then a SIEM.

This is the cycle: Start as an add-on -> become indispensable -> become the platform.

We keep acting surprised. But it’s the same movie every time. And again, keep thinking about the switching costs. This layer enables every customer to easily evaluate new solutions and switch over fairly easily.

If You’re Splunk… I mean Cisco…

You’re one of the few players that can still turn this around — if you execute sharply and fast.

Here’s what Splunk must own again:

  • Reclaim the ingest pipeline.
  • Make cost the advantage, not the penalty.
  • Federate search across data lakes natively. (I think you are almost there)
  • Make tiering and reduction a first-class feature.
  • Kill the routing layer through pure convenience.
  • Figure out your real-time story. CrowdStrike is leaning hard into messaging about how fast attackers act these days, and a batch approach won’t work anymore.

If Splunk doesn’t own the control plane, Cribl will. And once you lose the control plane, you lose the customer. No matter how good your detection content is. Cisco gives Splunk a rare opportunity: distribution, integration leverage, and a chance to fix what was ignored for too long. But they can’t let another category grow unchecked. Not again.

My Take

Data pipeline products aren’t the problem. They are the symptom.

The problem is the complacency that let the ingest layer drift outside the SIEM in the first place. Because once a neutral fabric handles all your data, the SIEM becomes swappable. The next SIEM won’t start as a SIEM. It will start exactly where Cribl started: as a pipeline (Abstract Security, anyone?). That’s the Trojan Horse.

November 21, 2025

Security Is Fragmenting and Converging at the Same Time — Insights from the Field

Category: Investment, Security Market — @ 2:29 pm

Over the past weeks, I’ve had a series of conversations across the cybersecurity ecosystem. Founders in early-stage security startups, VC firms exploring new segments, PE groups accelerating roll-ups, MSP leaders navigating change, and friends pushing the boundaries of what AI can do.

Individually, each conversation was fascinating. Taken together, they paint a picture of where the industry is heading — and where the real opportunities are emerging.

1. Network Security Isn’t Dead at All

One of the more surprising conversations was with a founder building something genuinely innovative in network security. For years, many assumed the category had settled — but the reality is that architectures, workloads, and adversaries continue to evolve. Even the DDoS and WAF spaces are not dead, which surprised me when I worked with one of the PEs to look at the space in more detail again.

The lesson: even “mature” markets have seams where real innovation can take hold.

2. The MSP Landscape Is Vast — and Misunderstood

I spoke with a VC firm considering deeper investments in the MSP ecosystem. There’s real opportunity, but also complexity that outsiders often underestimate:

  • Segmentation
  • Pricing mechanics
  • Packaged offerings
  • Integrations into broader ecosystems
  • and perhaps most importantly, helping MSPs actually sell security

Products don’t win in MSP without empathy for how MSPs operate and make money.

3. PE Roll-Ups Are Accelerating

One PE firm I talked to is running hard at the roll-up opportunity as the first generation of MSP founders, many starting in the late 90s, look to exit. Their playbook is all around optimized processes and joint buying power. Meanwhile, a European firm I am in touch with is exploring consolidation not just for scale, but under a unified security platform strategy.

Two very different visions, both valid.

4. Connecting Leaders Amplifies Outcomes

A conversation with a European PE group was refreshing — they emphasize connecting portfolio company leaders so they can cross-pollinate learnings.

Having spent the past 18 months deep in my own leadership work (attending school for the past 18 months is a conversation for another day), I’ve become even more convinced that people dynamics are the highest-leverage variable in cybersecurity execution. And it’s not just about leadership in the way it is widely discussed. It’s about the differences in people and their unique styles. Again, a conversation for another day.

5. Building for MSPs Requires Being in Their Shoes

An MSP leader reminded me of a simple truth:

If you don’t understand the day-to-day realities of MSP life, you can’t build for them.

This applies to product, packaging, GTM, support, and everything in between.

6. AI: Beyond the Hype, Toward Real Value

I caught up with a friend who recently joined an AI company, and we talked about emerging approaches that leverage data inside the model and how one can connect existing data stores to the various models. I love what they are building, and I would have thought they were one of the hockey-stick companies, but it turns out execution in a startup is hard and requires a lot of elbow grease.

The Unifying Thread

Across all these conversations, I keep coming back to one conclusion:

Security is fragmenting and converging at the same time: The biggest opportunities — for vendors, investors, and operators — are in the seams.

Ecosystems matter. Empathy matters. And clarity of execution matters more than ever.

It’s an exciting moment to be building in this industry.

September 17, 2025

On Stage in Oslo: A Conversation on Cybersecurity, Innovation, and Global Markets

Category: Investment, Security Market — @ 8:05 am

At the Summa Equity Annual Investor Meeting in Oslo, I had the privilege of joining Jacob Frandsen on stage for a conversation about the state of cybersecurity and the broader forces shaping technology companies today. The dialogue revolved around four big questions. Each one central to how investors, founders, and operators should be thinking about the future:

1. Balancing Investing in Innovation vs. Delivering Profitability

“It’s not innovation or profitability. It’s knowing when and how to balance the two engines that drive growth.”

  • Innovation as survival – At smaller scale, innovation is paramount; it creates the moat that ensures relevance. Without it, companies risk being commoditized.
  • Profitability as discipline – Operational excellence, sales efficiency, and cost control are non-negotiable as you scale.
  • Two-engine model – Run one engine for profitability, another to push the edge of innovation.
  • AI disruption – Both areas, profitability and innovation, are nicely coming together with AI: AI applied in any area of a company drives profitability, time to market, and more. On the other hand, entire cyber products are being rewritten with AI at the core. Missing the AI wave on either side kills your future relevance.

2. AI and Cyber: Opportunity and Risk

“AI is both a multiplier of capability and a source of new risks. Success comes from knowing when and how to use it.”

  • Force multiplier – AI accelerates development, marketing, sales, detection engineering, and lowers barriers for non-experts.
  • AI-led attacks – Still emerging, but attackers will adopt quickly — as defenders we must keep pace.
  • Security for AI – We are facing a number of new challenges here. This will likely grow into its own market, but the fundamentals (data protection, trust, governance) remain the same.

3. Defensible Positions for Emerging Cyber Companies

Especially in light of large security platforms like CrowdStrike, Microsoft, or SentinelOne, how can smaller companies and startups be relevant at all?

“In cybersecurity, defensibility isn’t just about tech.”

  • Wedge strategy – Start narrow, with an overlooked market or product gap. For example, the MSP / SMB segment is still significantly underserved but presents a vast opportunity.
  • Data gravity – Unique datasets become the backbone of long-term defensibility, especially with AI to mine the data and make it actionable.
  • Ecosystem first – Build API-driven integrations that make you indispensable within workflows, rather than standing alone. Modern security organizations that use one of the large platforms still run about 20 other products to fill gaps. If those products are integrated into the larger platform, it greatly reduces complexity for the operators. On the flip side, it opens up technology partnership opportunities for the security vendors.

4. Europe vs. US: Different Playbooks

“US is about speed and boldness; Europe is about trust and staying power — the opportunity for EU business is bridging both playbooks.”

  • Speed vs. trust – US rewards rapid scaling and bold claims; Europe emphasizes trust, compliance, references, and credibility. European customers are rarely early movers on new technologies.
  • Market fragmentation – Europe is highly localized; VARs and telcos dominate, with significant regional differences in regulation and go-to-market.
  • Talent edge – Europe offers strong technical talent from world-class universities. ETH anyone? 🙂
  • Opportunity – EU players can win by leaning into local strength; US entrants will struggle to replicate that quickly in all the markets. Adapting a product to local markets with different languages, different tax codes, cultures, labor laws, data privacy laws, etc. is a lot of work. That is why you see most US companies expand into UKI first and then slowly enter some of the countries in mainland Europe.

Closing Thoughts

The conversation reinforced for me that cybersecurity doesn’t exist in a vacuum. It intersects with innovation cycles, global talent pools, regulatory environments, and the transformative force of AI. Companies that thrive will be those that balance innovation with discipline, embrace ecosystems, and play the long game across diverse markets.

I left the stage energized. Not just by the challenges, but by the opportunities for European companies to seize if we approach them with clarity and conviction.

Reflections from the Summa Equity Annual Investor Meeting at the Oslo Opera

Category: Investment — @ 7:26 am

I had the pleasure of attending the Summa Equity Annual Investor Meeting today in Oslo. It was inspiring to hear about companies in the Summa portfolio that are making a real difference. Taking a step back from day-to-day cybersecurity and business conversations, it’s refreshing to dive into themes that truly matter for humanity. Summa’s four investment areas came into sharp focus, and they highlight both the scale of the challenges and the opportunities ahead.

Four Themes Shaping the Future

Here are the four themes that Summa invests in and some interesting facts that I gathered during the presentations:

Circularity

  • Desalination as a pathway to more clean water
  • How little of our waste is recycled, despite mounting pressure on resources
  • The ongoing pollution of water, air, and soil and the need to stop it at the source

Sustainable Food

  • The world will need ~55% more calories in the near future
  • Aquaculture (fish farming) is essential if we want to feed the planet sustainably – there is not enough grass to feed the cows that we’d need to feed the world
  • 26% of global greenhouse gas emissions come from the food system

Energy Transition

  • Electricity demand is projected to double by 2050
  • Outdated grids will struggle to keep up with demand, especially from data centers
  • In Europe, electricity price volatility has surged 150% in just four years

Tech-Enabled Resilience

  • Cybercrime now costs the global economy more than $10 trillion annually
  • Resilience is not optional — from cybersecurity to supply chains, it underpins progress in every other theme

Why It Matters

These themes may sound broad, but they tie directly to the choices we make today. Food, water, energy, and digital resilience are the foundation of a thriving future. Hearing how Summa is approaching them — and backing real companies solving real problems — is both sobering and energizing.

As someone deeply engaged in cybersecurity, it’s eye-opening to connect that work to the bigger picture: resilience, sustainability, and how we ensure humanity thrives well into the future.

Thanks to Summa Equity for hosting such a thought-provoking gathering and for having me speak about cybersecurity.

September 4, 2025

Go-to-Market Strategies for Small Security Companies

Category: Go To Market, Uncategorized — @ 8:29 am

Bringing a new product to market is hard—especially for small companies with limited sales resources. While large players can rely on global sales teams, most startups and scale-ups need to be smarter in how they approach their go-to-market (GTM) and route-to-market (RTM) strategies.

Recently, I walked through a set of practical approaches for some of the companies that I work with as an advisor and board member. I wanted to share these lessons more broadly, as they apply to any small technology firm looking to punch above its weight.

Start with Segmentation and ICP Clarity

The first step in any GTM journey is understanding who exactly you are selling to. Segment your market carefully and define your Ideal Customer Profile (ICP). A well-defined ICP keeps you focused and helps you avoid wasting precious time on prospects that aren’t a good fit.

Match the Route-to-Market to Each Segment

Different customer types buy differently. Some may prefer to purchase through a distributor, others via a managed service provider (MSP) or a systems integrator (SI). Aligning your RTM strategy with each ICP segment ensures you meet your buyers where they already are.

Distributors: Give Before You Get

One of the biggest misconceptions startups have is that distributors will automatically champion your product. In reality, distributors expect you to bring them demand first. Show them you can generate business and they’ll start paying attention.

Leverage Technical Partnerships

Forming technical partnerships with larger vendors is often one of the fastest ways to expand reach. These companies already have distribution networks, customer relationships, and market credibility. By integrating or aligning with them, you can ride their coattails into places you couldn’t reach alone.

Ask Your Customers How They Buy

Your existing customers are one of your best sources of intelligence. Ask them:

  • Do you prefer working with MSPs? If so, which ones?
  • Do you buy through certain distributors?
  • Do you have go-to SIs or VARs?

Not only will you learn more about your market’s buying habits, but customers can often introduce you directly to their providers, short-circuiting months of cold outreach.

Regional VARs: An Untapped Opportunity

Large VARs are tempting, but it’s hard to get their attention. Smaller, regional VARs are usually more receptive, hungry for growth, and open to building mutually beneficial offers. For many startups, these local relationships turn out to be far more productive.

Don’t Rely on Cold Calling Alone

While direct sales will always play some role, scaling purely through cold outreach is rarely sustainable for startups. Partnerships, integrations, and channel leverage amplify your reach, making each sales dollar work harder.


Closing Thoughts

For small companies, success isn’t about brute-forcing your way into the market. It’s about smart leverage. By segmenting effectively, aligning routes-to-market, and building the right partnerships, startups can create multiplier effects that would be impossible through direct selling alone.

The road to market is rarely straight, but with the right GTM strategy, even small players can carve out a strong position in highly competitive industries like cybersecurity.

August 27, 2025

Security Chat 6.0: A Night of Ideas, Innovation, and Community in Zurich

Category: Community — @ 6:43 am

Yesterday, we brought Security Chat back to Zurich for its sixth edition and it was everything I had hoped for: brilliant talks, a packed room, and the joy of reconnecting with friends old and new. What started back in 2012 as an informal gathering of security enthusiasts has grown into a tradition where community and ideas come together.

This year we had five lightning talks. Each one very different in style, but all equally thought-provoking:


Candid Wüest – Why AI-Powered Malware Won’t Kill You (Yet)

Candid cut through the hype around “AI-driven malware.” He explained the difference between AI-generated malware (just code produced by LLMs) and AI-powered malware (where AI runs inside the malicious code). While there are proofs of concept in the wild, protection stacks still hold up. Behavior-based detection and layered defenses remain effective. His takeaway: AI will eventually give attackers new tools, but defenders are not out of the game.


Joshua Rawles – The Global Impact of a Modern Phishing-as-a-Service Operation

Josh gave us an inside look at the booming phishing-as-a-service industry. For as little as $50 a month, criminals can buy turnkey kits that bypass MFA, come with 24/7 “support,” and scale to tens of thousands of victims. His case study on Storm-1167 (“FluorStorm”) showed just how industrialized this has become, with thousands of domains, Telegram bots that exfiltrate stolen credentials in real time, and devastating impact on nonprofits. His message: MFA is necessary but not sufficient; phishing-resistant authentication and faster takedowns are critical.


Barbara Dravec – Drawn to Encrypt: A Visual Trail from OTP to RSA

Barbara brought cryptography to life with a visual storytelling approach, mapping concepts like one-time pads, pseudo-random generators, and RSA to vivid imagery from the natural world (snakes, owls, octopuses, and more). It was a refreshing, creative reminder that explaining security to non-experts requires more than equations. It sometimes requires narratives that people can connect to.


Advije Rizvani – AI on Wall Street: Smart, Fast… and Surprisingly Fragile

Advije, a PhD student in Liechtenstein, showed how machine learning systems that drive algorithmic trading can be tricked with subtle, temporary data manipulations. A single manipulated data point can cause wrong trades, eroding portfolio performance over time. Her research raises a sobering question: in high-stakes financial markets, how do we know whether losses are due to bad luck, bad models… or deliberate attacks?
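To make the fragility concrete, here is a toy sketch of the idea. The `signal` function and the price series are invented for illustration (they are not from Advije’s research): a naive moving-average crossover strategy flips from “hold” to “buy” when just one data point is tampered with.

```python
# Hypothetical sketch: a single manipulated price tick flips a naive
# moving-average trading signal. Strategy and data are invented.

def signal(prices, short=3, long=5):
    """Buy when the short-term average crosses above the long-term average."""
    s = sum(prices[-short:]) / short
    l = sum(prices[-long:]) / long
    return "buy" if s > l else "hold"

clean = [100, 100, 100, 100, 100]
poisoned = clean[:-1] + [103]   # one subtly tampered data point

print(signal(clean))     # → hold
print(signal(poisoned))  # → buy
```

Real trading models are far more complex, but the principle scales: small, temporary perturbations at the right moment can steer automated decisions while remaining hard to distinguish from noise.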


Elliott – When Cookies Collide: The Overlooked Attack Vector

Elliott closed the night with a deep dive into cookie tossing, a little-known but powerful web attack. By controlling a subdomain, an attacker can “toss” malicious cookies that hijack authentication flows or manipulate transactions on the parent domain. He walked us through real-world cases and defenses, highlighting how a small misconfiguration can open the door to session hijacking and data theft.
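For readers unfamiliar with the mechanics, here is a minimal sketch of why tossed cookies win. The domain and cookie names are hypothetical; the point is that a subdomain may legitimately set a cookie scoped to the parent domain, and per RFC 6265 the browser lists cookies with the more specific path first, so the server often reads the attacker’s value.

```python
# Hypothetical cookie-tossing sketch: an attacker on sub.example.com
# plants a "session" cookie scoped to the parent domain with a more
# specific Path, shadowing the legitimate session cookie on /login.
from http.cookies import SimpleCookie

# Legitimate session cookie set by example.com
legit = SimpleCookie()
legit["session"] = "victim-session-id"
legit["session"]["domain"] = "example.com"
legit["session"]["path"] = "/"

# Malicious cookie "tossed" from attacker-controlled sub.example.com
tossed = SimpleCookie()
tossed["session"] = "attacker-session-id"
tossed["session"]["domain"] = ".example.com"   # subdomains may set this
tossed["session"]["path"] = "/login"           # more specific path

def cookie_header(cookies, request_path):
    """Order matching cookies the way browsers do: longest path first."""
    matching = [c for c in cookies if request_path.startswith(c["path"])]
    matching.sort(key=lambda c: len(c["path"]), reverse=True)
    return "; ".join(f"{c.key}={c.value}" for c in matching)

cookies = [tossed["session"], legit["session"]]
print(cookie_header(cookies, "/login"))
# → session=attacker-session-id; session=victim-session-id
```

Since many server frameworks take the first cookie with a given name, the attacker’s value wins on `/login`, which is exactly the session-fixation style hijack Elliott described. Defenses include host-locked prefixes such as `__Host-` cookies and never trusting subdomains you don’t fully control.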


More Than Talks—It’s About Community

What I loved most about Security Chat 6.0 wasn’t just the talks, but the variety of voices and the energy in the room. We had people flying in from London, driving hours through traffic, and carving out time to share ideas. We had job seekers and companies hiring. We had old friends, new connections, and plenty of wine and bagel bites to keep conversations flowing.

A big thank you to our sponsor 1Password for supporting the evening, to the speakers for sharing their insights, and to everyone who showed up to make this community vibrant.

As I said on stage: cybersecurity has given me so much over the years. Events like this are my way of giving back by fostering connection, sparking ideas, and reminding us all that innovation doesn’t happen in isolation.

See you at the next Security Chat – whenever and wherever it may be.

August 14, 2025

Mastering the Channel Ecosystem — Lessons From our BlackHat Panel

Category: Go To Market — @ 7:04 pm

Thanks to everyone who joined the panel at the BlackHat Innovators & Investors Summit — it was a fast, practical session, full of real, repeatable advice. Below I’ve distilled the conversation into the speakers and the most actionable takeaways that founders, investors, and channel leaders can use.

Who Spoke

  • Daniel “DB” Bernard — Chief Business Officer, CrowdStrike
  • Matt Berry — Global Field CTO, Cyber, World Wide Technology (WWT)
  • Chris Bisnett — Co-founder & CTO, Huntress
  • Peter Bryant — Market Analyst, Canalys
  • Moderator: Raffael Marty, Operating Advisor

Top-line Thesis

Great product is necessary but not sufficient. If you want scale and durability you must design product, GTM, pricing and operations for the channel — MSPs, VARs, MSSPs, distributors and hyperscaler marketplaces. Get those pieces aligned and the channel becomes your growth engine and a moat.

The Most Important, Actionable Insights

1) Start with real customer evidence — then bring partners in

  • Close a first few deals directly and then ask: Who do you buy through? If the customer uses a reseller or integrator, bring that partner into the next conversation.
  • A partner introduced by a customer is infinitely more effective than cold outreach.

2) Target, pilot, then scale (regional first)

  • Don’t boil the ocean. Pick a geography or vertical where a partner has influence, run an enablement-intensive pilot, close a few joint deals, and let the wins spread organically through the partner organization.
  • Grassroots wins (regional proof points) are how startup products get noticed inside large SIs and disti sales orgs.

3) Engineer the product for MSPs and scale

  • Some technical must-haves for MSPs: multi-tenancy, frictionless provisioning, usage-based billing, robust reporting, and minimal support overhead (no reboots, simple deployment).
  • Build integrations with RMM/PSA tools. Partners won’t adopt tools that don’t fit their stack.

4) Use hyperscaler marketplaces as a growth hack

  • AWS/Azure/Google marketplaces are a procurement shortcut — customers can spend cloud credits and close without long vendor approvals. CrowdStrike and others proved this: marketplace adoption accelerated scale dramatically.
  • Prioritize marketplace readiness early (billing, security/compliance, packaging).

5) Think of channel margin as external sales / commission

  • Yes, margins look worse on paper — but compare to the true CAC of building a direct sales force. That margin buys you reach and reduces acquisition risk (you only pay when a partner sells).
  • Measure partner-sourced vs partner-influenced revenue and the CAC of each.

6) Don’t assume distis/VARs will sell without support

  • Listing in a distributor catalog is not the finish line. You must: enable, co-market, provide lead flow, run joint sales plays, and sometimes front-end incentives to get sellers focused on your SKU.
  • Short-term investment in enablement and marketing is how you get long-term pull-through.

7) Build partner economics and enablement as products

  • Provide free (or low-cost) certification, sales playbooks, demo environments, one-click onboarding, and co-branded assets. These reduce time-to-first-deal and lower partner friction.
  • Consider usage-based billing to match MSP economics: partners want to align cost with consumed endpoints/services.

8) Decide and double-down on one partner type first

  • MSP vs MSSP vs VAR vs SI: each requires a different product shape and GTM. Nail one, then expand. Trying to serve all at once dilutes focus and kills momentum.

9) Invest in partner success and low-touch CSM automation

  • With thousands of SMB endpoints, you can’t scale human CSM for every account. Automate onboarding, monitoring, renewal nudges and migration tools — make it easy for MSPs to manage many customers.

10) Metrics you should be tracking from day 1

  • Time-to-first-deal with partner (by partner type)
  • Partner-sourced pipeline and partner-influenced revenue
  • Onboarding time per MSP customer (time-to-live)
  • Churn by partner / churn during partner transitions
  • Net retention for partner-sourced customers

Practical checklist for founders (do this tomorrow)

  1. Pull your top 3 customers and ask: who did you buy through?
  2. Pick one partner (regional or niche) and design a 90-day pilot with joint enablement and a measurable close objective.
  3. Audit product integration: do you have PSA/RMM connectors? If not, roadmap one.
  4. Prepare an AWS/Azure/Google marketplace package (billing, security, description, packaging).
  5. Create a partner enablement kit: demo script, short playbook, 1-page technical install guide, and a free certification.
  6. Model partner economics as commission vs. CAC — present it to your board/investors as external sales.
  7. Instrument partner metrics in your analytics and report them weekly.

Suggested questions to ask a distributor / VAR / SI when exploring partnership

  • Who in your organization will sell and who will implement our solution? (names/roles)
  • What does success look like in the first 90 days? How many joint opportunities will you target?
  • Which 3 vendors do you co-sell with today (and how do we integrate with them)?
  • What enablement will you need from us (sales motion, demo environment, pricing, rebates)?
  • How will leads/credit/margin be handled if a customer comes direct?

For investors: what to look for in a channel-first startup

  • Product designed for the channel: multi-tenancy, RMM/PSA integrations, usage billing.
  • Early partner proofs: paying partners or partner-introduced deals, not just distributor listings.
  • A go-to-market playbook for partner enablement (documented processes, enablement kits, measurable time-to-first-deal).
  • Marketplace strategy and early traction (even if small, momentum matters).

Closing takeaways (what I heard loud and clear)

  • The channel is not a shortcut — it’s a discipline. If you commit, build for it, and invest in the partner motion, channel-first companies scale faster and with lower long-term CAC.
  • Start with customers, pilot locally with partners, engineer for MSP realities, and use marketplaces to accelerate procurement.
  • Win through repeatable partner plays and measurable enablement — wins scale inside partner organizations.

Thanks again to BlackHat for having us and to the panelists for taking time out of their busy schedules to impart these very actionable insights.