Why we invest in AI for cybersecurity
From prediction and prevention to authentication and autonomy, AI is rebuilding the cybersecurity landscape.
AI has become both cybersecurity’s biggest threat and its best defense. Attackers now use it to automate phishing, generate malicious code, and create deepfakes that can fool even advanced systems. Defenders are adopting the same technology to predict breaches, trace anomalies, and verify authenticity, but offensive methods still have a decisive advantage.
This tension defines the next wave of opportunity. Every advance in generative models expands the attack surface but also increases the need for smarter defenses. At the same time, it opens the door for new ventures building predictive, adaptive, and trusted systems.
In this article, I’ll explore where those opportunities lie and how founders can turn AI’s volatility into an advantage. I’ll also share insights from the recent “Deepfakes, Scams and Cybercrime” panel at the Merantix Campus in Berlin, where engineers, researchers, and founders discussed how these threats are evolving in real time, along with what that means for the companies racing to stop them.
AI’s Opportunity for Startups
AI is changing cybersecurity from a reactive discipline into a predictive one. It enables systems that model likely attack paths, anticipate exploits, and block them before they happen. It compresses the skills gap by embedding expert capability into software and brings enterprise-grade protection within reach of smaller firms. It is also unlocking new frontiers, from content authentication and fraud detection to the protection of autonomous systems. Yet even as this defensive transformation begins, the scale of the challenge remains enormous.
A World Economic Forum report found that 42 percent of organizations were successfully targeted by social engineering attacks in 2024. Nearly half of AI-generated code contains vulnerabilities, exposing products before they even launch. Model-serving APIs and large language model endpoints have become key targets, yet security tools for them remain rudimentary. Deepfake detectors perform well in controlled environments but fail in the wild. Together, these signals point to the same truth: attack surfaces are expanding faster than legacy vendors can adapt.
Even the titans of tech are feeling the squeeze. During the discussion at the Deepfakes and Cybercrime event, a former YouTube security lead in the audience described the pace of this dynamic: any detection system could be bypassed within six weeks. Their team measured success not by perfection but by halving damage with every iteration. That cadence should be a design goal for founders: ship, measure, retrain, repeat.
For founders, this widening gap represents both a technical and a commercial opportunity. Security architecture built for static networks cannot protect self-learning systems. The next generation of cybersecurity companies will build for constant motion, predicting, learning, and neutralizing threats as they emerge. In that shift lies the foundation for an entirely new class of defensible, AI-native security ventures.
Where Founders Can Build
Each layer of the tech stack, from prediction and prevention to authentication and autonomy, is being rebuilt for a world where threats evolve faster than any human can track. The founders moving first in this shift are already defining how the next generation of security will work.
Predictive defense sits at the center of this change. Most existing tools still react after a breach, generating alerts once the damage is done. AI enables a different approach: systems that anticipate exploits, simulate attacks, and disrupt them in advance. BforeAI, based in France, is an example. Its predictive threat intelligence platform analyses billions of data points to identify malicious domains before they go live, serving clients that include Fortune 500 firms and NATO.
Revel8, from our own portfolio, uses AI to quantify exposure and simulate potential financial impact. The goal here is time compression. If attackers iterate in weeks, platforms need detection, simulation, and policy updates measured in days. Julius Muth, Co-Founder of Revel8, pointed to recent call-center breaches at large European insurers as examples of how scams now exploit workflows as much as systems, which makes predictive controls more valuable than reactive alerts. But not all companies enjoy the luxury of a robust security team.
Smaller companies are disproportionately exposed to cyber risk: one in three was attacked last year, with average breach costs of $255,000. Yet fewer than a third employ dedicated security staff. Larger companies typically employ a Chief Information Security Officer (CISO) along with dedicated security teams, but this decades-old security gap between SMEs and enterprises could soon close. AI now makes it possible to embed those functions into SaaS platforms.
Cynomi has seized this opportunity with an AI-CISO platform that automates assessments, generates policies, and provides compliance dashboards. In 2024, it signed deals with more than 100 service providers, including Deutsche Telekom, distributing its platform to thousands of SMBs: clear evidence of a real need and a huge addressable market. These platforms should embed opinionated process controls, guardrails that automatically enforce secure behaviour inside workflows, such as requiring multi-party approval for payments or verifying caller identity through a trusted channel.
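To make the idea of an opinionated process control concrete, here is a minimal sketch of a multi-party payment guardrail. All names, the threshold, and the approver count are illustrative assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical policy: payments at or above a threshold need two distinct approvers.
APPROVAL_THRESHOLD_EUR = 10_000
REQUIRED_APPROVERS = 2

@dataclass
class PaymentRequest:
    amount_eur: float
    approvals: set = field(default_factory=set)  # user IDs who approved

def is_release_allowed(req: PaymentRequest) -> bool:
    """Enforce the guardrail inside the workflow: small payments pass,
    large ones are blocked until enough distinct approvers sign off."""
    if req.amount_eur < APPROVAL_THRESHOLD_EUR:
        return True
    return len(req.approvals) >= REQUIRED_APPROVERS
```

The point is that the control lives in the workflow itself rather than in an alert queue: a scammer who talks one employee into a transfer still hits the second-approver wall.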
Trust infrastructure represents another frontier. Cheap deepfakes and even “deepfake-as-a-service” make detection alone a losing strategy, something that Hasna Najmi, Senior AI Solutions Architect at Merantix Momentum, pointed out during our panel. Najmi said that these services are so efficient that turnkey providers can now onboard users via chat and generate realistic videos in minutes. Detection also remains probabilistic. Muth explained that CISOs must still define thresholds and decision rules for false positives and negatives, while Dr. Nicolas Müller, Senior Engineer at the Fraunhofer Institute, noted that audio deepfake detectors fail to generalize to new model families. Müller added that if generative AI consolidates around a few providers, watermarking and training on real outputs could become practical, improving provenance and verification at scale.
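Muth's point that CISOs must define thresholds and decision rules can be made concrete. A detector emits a probability, and the organization chooses the operating point based on the relative cost of false positives and false negatives. The sketch below, with illustrative data and cost values, picks the cutoff that minimizes expected cost on held-out examples:

```python
def pick_threshold(scores, labels, cost_fp=1.0, cost_fn=10.0):
    """Choose the score cutoff that minimizes expected cost.

    scores: detector outputs in [0, 1]; labels: 1 = deepfake, 0 = genuine.
    cost_fn > cost_fp encodes that a missed deepfake hurts more than
    a false alarm; the CISO, not the model, sets these costs.
    """
    candidates = sorted(set(scores)) + [1.1]  # 1.1 means "flag nothing"
    best_t, best_cost = None, float("inf")
    for t in candidates:
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

Because detectors fail to generalize to new model families, this calibration has to be rerun as new generators appear, which is exactly why detection alone is a losing strategy.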
Bigger threats, bigger opportunities
AI is both the nemesis and the knight in shining armour for cybersecurity. But it's more complicated than that: AI adoption itself is creating risks deeper in the infrastructure layer. Coding assistants now write large portions of production code, often with hidden vulnerabilities that pass untested into live systems. APIs are multiplying, model-serving endpoints are exposed, and the infrastructure that supports AI models has become a new target. The OWASP 2025 LLM Top 10 and NIST's Generative AI Profile both highlight the same weak points: injection flaws, insecure outputs, and supply-chain risks linked to AI-generated code.
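The "insecure outputs" risk the OWASP list flags comes down to a simple principle: model output is untrusted input. As a minimal, assumption-laden sketch (the regexes and function name are illustrative, not a complete defense), an application that executes model-generated SQL might gate it like this:

```python
import re

# Illustrative guard for LLM-generated SQL: allow only read-only queries.
# A real deployment would use parameterized queries and a proper SQL parser;
# this sketch only shows the "treat output as untrusted" principle.
ALLOWED = re.compile(r"^\s*SELECT\b", re.IGNORECASE)
FORBIDDEN = re.compile(r"\b(DROP|DELETE|UPDATE|INSERT|ALTER|GRANT)\b", re.IGNORECASE)

def is_safe_query(sql: str) -> bool:
    """Reject anything that is not a plain read, including mixed statements."""
    return bool(ALLOWED.match(sql)) and not FORBIDDEN.search(sql)
```

The same gating logic applies wherever a model's output feeds a shell, an API call, or a deployment pipeline, which is precisely the surface that startup tooling has yet to cover.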
The consequences are already visible. Ray, an open-source framework for scaling AI and Python applications, was hacked in 2024, with its clusters hijacked for crypto-mining, and NVIDIA's Triton inference server required an emergency patch in 2025 after attackers found remote-execution flaws. These problems persist at scale, but a few European startups are beginning to respond. HarfangLab has grown quickly with its AI-powered endpoint detection platform, now surpassing €40 million in annual revenue, while Lupovis, spun out of the University of Strathclyde, uses deception-based AI honeypots to lure and trap attackers.
For founders, the opportunity runs deeper than patches or point solutions. The market still lacks foundational tools that secure AI code and infrastructure from the inside out: firewalls, continuous validation systems, and automated runtime protection built for a world where models write and deploy themselves.
Autonomy and counter-drone security are becoming the new front line of cybersecurity. As the boundary between digital and physical systems dissolves, attacks on drones, satellites, and autonomous vehicles can now trigger real-world consequences. The war in Ukraine exposed how quickly off-the-shelf drones, powered by commercial AI models, can be repurposed into weapons or surveillance tools. The UK NCSC has warned that AI will shorten exploit timelines for critical infrastructure, while NATO and EASA have elevated counter-drone initiatives as strategic priorities.
This shift turns autonomy itself into an attack surface. AI systems that once operated in isolation (navigation, perception, targeting) are increasingly networked and therefore vulnerable to interference, spoofing, and data poisoning. Keeping these systems secure demands continuous model updates, telemetry, and field learning. Europe is beginning to respond. Helsing, Quantum Systems, Dedrone and Origin Robotics represent the first generation of companies building or defending AI-powered autonomy. Their rise signals both the urgency of the challenge and the gap still left to fill. For founders, the opportunity lies in securing this frontier: embedding cybersecurity into the fabric of autonomous systems before adversaries do.
Why This Matters for Europe
Europe’s growing focus on digital sovereignty is reshaping the cybersecurity market. The AI Act sets new standards for trustworthy and transparent AI, while the NIS2 Directive raises security requirements for critical sectors. Together, they are tightening compliance expectations across the continent and creating strong demand for trusted infrastructure. Nearly half of European organizations plan to adopt sovereign cloud solutions by 2025, yet the market remains dominated by a few non-European providers. This imbalance leaves governments and enterprises exposed to both cyber threats and external jurisdiction through laws such as the U.S. Cloud Act.
Momentum is now shifting toward self-reliance. Initiatives like Gaia-X and proposals for a European Tech Sovereignty Fund reflect how sovereignty has become both an economic priority and an investment theme. Enterprises and critical-infrastructure operators are seeking EU-governed systems that guarantee control over data and operations. For founders, this environment provides a natural tailwind. Every new rule, funding program, and procurement policy strengthens the case for European-built, AI-native security platforms.
The same dynamic is playing out among smaller companies. Regulation is pushing for local control, while AI is making it possible to deliver advanced security capabilities at scale. Automated platforms can now offer CISO-level oversight at a fraction of the cost, giving SMEs access to protection once reserved for large enterprises. As these tools spread through supply chains and business networks, they will define how Europe achieves digital autonomy and long-term resilience.
Build the future of cybersecurity with us
The cybersecurity arms race is already underway, and the winners will not only protect Europe but also define new global categories. The challenge is immense, but so is the upside. Founders who build sovereign, AI-native platforms that predict and neutralize threats, democratize security leadership, and anchor trust infrastructure will create the next generation of cybersecurity leaders.
In our venture studio, we partner with founders (like Revel8) who are ready to take on this challenge. If you are thinking about building in this space and want to shape the future of AI-first security, get in touch with us.