
A 2026 Security Industry Meta-Analysis
By Victor Sowers
The short version: 30 security reports published between November 2025 and April 2026 reveal a single structural pattern. The industry has scaled detection to industrial capacity (865K alerts/year, 141M findings, 132 CVEs/day) and left remediation at artisanal capacity (252-day MTTR, 82% security debt, 38% AI vuln fix rate). The creation-remediation gap is the defining AppSec problem of 2026.
Every year, the security industry publishes dozens of reports. Each tells a version of the story:
Verizon covers breaches.
CrowdStrike covers adversaries.
Veracode covers code.
Mandiant covers incident response.
IANS covers the people running security programs.
The problem is that no single report tells the whole story, and every publisher has incentive to emphasize findings that support their product category.
So we read all of them. Thirty reports from thirty different publishers, released between November 2025 and April 2026. Nine flagship threat reports, including the Verizon DBIR (22,052 incidents, 12,195 confirmed breaches), Mandiant M-Trends (500,000+ hours of incident response), and CrowdStrike's Global Threat Report (281+ adversary groups tracked). Nine AppSec and DevSecOps reports, including Veracode's analysis of 1.6 million applications generating 141.3 million raw findings. Seven AI code security reports, including the Endor Labs/Carnegie Mellon benchmark testing 200 real-world tasks across 77 CWE classes. Five vulnerability and supply chain reports rounding out the dataset.
We extracted 155+ quantitative data points and organized them into seven thematic clusters. Where reports agreed, we treated the convergence as signal. Where they disagreed, we documented both figures and reconciled the difference.
Vendor bias doesn't disappear when you read more reports. But consistent patterns across competing vendors are more likely to reflect ground truth than any single publisher's narrative.
The reports span the full security lifecycle. On the attack side: how fast adversaries move, what vectors they use, how they monetize access. On the defense side: how much debt organizations carry, how many alerts they ignore, how fast they patch. On the emerging front: how AI-generated code introduces new vulnerability classes, how supply chain attacks are evolving beyond malicious packages, and how the people responsible for security are losing confidence in their ability to keep up.
Here's what the data actually says.
82% of organizations carry security debt unfixed for more than a year. Source: Veracode State of Software Security 2026 (n=1.6M applications).
Every organization we know has a vulnerability scanner. Almost none have a vulnerability fixer.
"82% of organizations carry security debt — flaws unfixed for more than a year. Up 11% year over year." — Veracode State of Software Security 2026 (n=1.6M applications)
The detection machine is working. Veracode's State of Software Security 2026 processed 141.3 million findings: 115.6 million from SAST, 22.1 million from SCA, and 3.6 million from DAST. The scanning infrastructure generates findings at industrial scale. Sixty percent of organizations carry critical security debt, up 20% relative to the prior year.
But findings don't equal fixes. High-risk vulnerabilities spiked 36% year over year in Veracode's dataset. The inbound rate is accelerating. The outbound rate, the rate at which teams actually close findings, is not.
Other reports converge on the same pattern from different angles:
76% discover compliance issues after deployment, not during development — GitLab's DevSecOps survey of 3,266 professionals. The costliest possible moment to find a flaw is after it's running in production.
78% are running critical vulnerabilities in production right now — Orca Security's State of AppSec 2026. Not in their backlog. In production. Serving traffic.
46% of perimeter device vulnerabilities remained unresolved — Verizon DBIR 2026.
82% suffered a container-related breach; 78% failed a compliance audit due to container CVEs; 90% still using lightly modified public container images — ActiveState's Container Security report (n=250 DevSecOps leaders).
38% fix rate for high-risk AI vulnerabilities — the worst of any asset category — Cobalt's State of Pentesting 2026 (n=16,500+ pentests). The newest attack surface has the weakest remediation capacity.
13% of organizations are fully automated in supply chain security; 17% consistently sign their SBOMs — DigiCert's supply chain security report.
The services running on end-of-life language versions compound the problem. Datadog found that services on EOL language versions have a 50% exploitable vulnerability rate, compared to 31% for supported versions. Half of organizations adopt new library versions within 24 hours, suggesting the update infrastructure exists. The bottleneck isn't the ability to upgrade. It's the confidence that an upgrade won't break something, and the capacity to validate that it won't.
"95% of DevSecOps leaders expect intelligent remediation to become standard practice." — ActiveState State of Container Security 2026
The direction of the market is clear. The gap between where organizations are (13% automated) and where they expect to be (95% expecting automation as standard) defines the market opportunity for the next three years. The question isn't whether automated remediation will become standard. The question is how many breaches happen between now and when it does.
87% of AI-generated agent code contains at least one vulnerability; the best agent+model combo produces secure code only 7.8% of the time. Source: Endor Labs/Carnegie Mellon Agent Security League 2026 (n=200 tasks across 77 CWE classes).
"The best AI agent + model combination produces secure code just 7.8% of the time. The code works. It passes tests. It's exploitable." — Endor Labs / Carnegie Mellon Agent Security League 2026
Forty-two percent of committed code is now AI-generated or assisted, according to SonarSource's developer survey of 1,100+ enterprise developers. Eighty-seven percent of that code contains at least one vulnerability.
That second number comes from the Endor Labs/Carnegie Mellon Agent Security League benchmark, which tested AI coding agents on 200 real-world tasks from 108 open-source projects across 77 CWE classes. The best-performing agent achieved 84.4% functional correctness but only 17.3% security correctness. The best combined result, Cursor paired with Claude Opus 4.6, produced secure code just 7.8% of the time.
The verification gap compounds the problem. SonarSource found that 96% of developers don't fully trust AI output, but only 48% always verify before committing. That means 52% of developers sometimes commit unverified AI-generated code to production. Thirty-five percent access AI tools via personal accounts, creating shadow AI environments where organizations have zero visibility into what code is AI-generated.
The vulnerability rates from independent studies converge on the same conclusion:
100% of companies have AI-generated code in their codebase; 81% of security teams lack visibility into which code is AI-generated — Cycode (n=400+ CISOs).
AI generates XSS-vulnerable code 86% of the time and log injection vulnerabilities 88% of the time — Cycode.
AI code introduces 15-18% more vulnerabilities than human-written code; AI-generated PRs wait 4.6× longer for review — Opsera (n=250,000+ developers).
32% of AI/LLM pentest findings are rated high risk — 2.7× the 12% baseline for other asset types — Cobalt's State of Pentesting.
Prompt injection accounts for 37.6% of all AI pentest findings — Cobalt.
The academic evidence is accelerating. Georgia Tech's Vibe Security Radar tracks CVEs attributable to AI-generated code through git history analysis:
January 2026: 6 confirmed AI-attributable CVEs.
February 2026: 15.
March 2026: 35.
Georgia Tech estimates the true count is 5-10× higher than detected, putting it at 400-700 AI-attributable vulnerabilities — and growing exponentially.
The productivity asymmetry deepens the challenge. Opsera found that senior developers get five times the productivity benefit from AI tools compared to juniors. The developers who gain the most from AI are the ones best equipped to catch security issues. The developers who gain the least are the ones most likely to produce insecure AI code. AI coding tools amplify existing skill gaps rather than closing them.
The secrets dimension makes it worse. GitGuardian's Secrets Sprawl 2026 documented 28.65 million new hardcoded secrets on public GitHub, a 34% increase year over year. Claude Code co-authored commits leak secrets at roughly twice the baseline rate. AI-service credential leaks surged 81% to 1.27 million, and 24,008 secrets were found exposed in MCP configuration files — a new attack surface that didn't exist 18 months ago.
One in five organizations has already experienced an LLM security incident, and 61% are calling for a "strategic pause" on AI adoption. But pausing isn't realistic when 97% of organizations are already using or piloting AI coding assistants and 42% of committed code is already AI-generated. The security problem is already in production.
30% of all breaches now involve third-party components — doubled YoY. The Shai-Hulud worm autonomously infected 1,000+ npm packages and exposed ~25,000 GitHub repos. Source: Verizon DBIR 2026 (n=22,052 incidents) + ReversingLabs SSCS 2026.
In early 2026, a worm named Shai-Hulud infected over 1,000 npm packages autonomously. It didn't just add malicious packages to the registry. It compromised existing legitimate packages and spread on its own, exposing approximately 25,000 GitHub repositories. ReversingLabs documented this in their fourth annual supply chain security report as the first registry-native worm malware — a new class of supply chain attack.
Third-party risk is no longer a sub-category of security risk.
30% of all breaches now involve third-party components — doubled from ~15% the prior year — Verizon DBIR 2026.
66% of the most dangerous, longest-lived vulnerabilities originate from third-party and open-source code — Veracode.
73% increase in malicious open-source package detections overall — ReversingLabs.
The ecosystem-level picture is more nuanced. PyPI malware detections dropped 43% and NuGet dropped 60%, suggesting that platform-specific security investments are working. Attackers are shifting to less-defended ecosystems: npm, AI model repositories, and agentic AI skill marketplaces.
The AI supply chain is the newest attack surface. Three campaigns documented across this dataset:
OpenClaw / ClawHavoc — 1,100+ malicious AI agent skills uploaded to ClawHub. Source: IBM X-Force.
NullifAI — attacks on Hugging Face model repositories. Source: ReversingLabs.
General malware sprawl — 192,742 malware packages detected in 2025, 4× more than CVEs. Source: Mondoo.
The majority of malicious code in package registries is invisible to traditional vulnerability scanners.
The regulatory dimension is catching up. DigiCert found that only 12% of organizations are fully prepared for incoming supply chain regulatory requirements, and 55% have extensive preparation remaining. Only 17% consistently sign their SBOMs. The gap between regulatory expectations and operational readiness means supply chain incidents will increasingly carry compliance consequences on top of the security impact.
What makes this moment different from previous supply chain scares is the convergence. Package registries are being targeted by worms (Shai-Hulud). AI model repositories are being targeted by poisoning campaigns (NullifAI on Hugging Face). AI agent skill marketplaces are being targeted by malicious uploads (ClawHavoc on ClawHub). The supply chain attack surface is expanding into every new platform where code or models are shared.
The supply chain isn't a risk category. It's the risk category.
865,398 average alerts per organization per year (up 52% YoY); only 18% of "critical" findings remain critical after runtime context applied. Source: OX Security AppSec Benchmark 2026 (n=216M findings, 250 organizations) + Datadog State of DevSecOps 2026.
"Only 18% of vulnerabilities labeled 'critical' remain critical after runtime context is applied. 82% of critical alerts are effectively false positives." — Datadog State of DevSecOps 2026
The average organization generates 865,398 security alerts per year — up 52% in twelve months. Critical findings nearly quadrupled from 202 to 795 per organization. These figures come from OX Security's AppSec Benchmark, which analyzed 216 million+ findings from 250 organizations.
Tool sprawl amplifies the problem. GitLab's DevSecOps survey of 3,266 professionals found:
60% use five or more development tools.
49% use five or more AI tools.
Seven hours per week per team member lost to inefficiency from tool-switching, duplicate findings, and conflicting priorities.
Cycode found that 97% of organizations plan to consolidate their AppSec stack within a year — a near-universal signal that the current multi-tool model is failing.
The math is straightforward. AI coding tools generate more code. More code creates more findings. More findings hit the same number of security engineers. IANS Research found that 89% of security teams are stretched thin or understaffed. OX Security identified AI-assisted development as the primary driver of the 52% alert increase. The input volume is accelerating. The processing capacity is flat.
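That arithmetic can be sketched as a toy model. The inflow figures below are the OX Security numbers from this section; the flat triage capacity is a hypothetical assumption for illustration, not a figure from any cited report:

```python
# Toy model of the creation-remediation gap: alert inflow compounds
# while triage capacity stays flat, so the untriaged backlog grows.
# TRIAGED_PER_YEAR is an invented placeholder, not report data.

ALERTS_YEAR_1 = 865_398      # OX Security: average alerts per org per year
INFLOW_GROWTH = 0.52         # OX Security: 52% YoY increase in alert volume
TRIAGED_PER_YEAR = 50_000    # assumed flat triage capacity (hypothetical)

backlog = 0.0
for year in range(1, 4):
    inflow = ALERTS_YEAR_1 * (1 + INFLOW_GROWTH) ** (year - 1)
    backlog += max(0.0, inflow - TRIAGED_PER_YEAR)
    print(f"year {year}: inflow={inflow:,.0f}, untriaged backlog={backlog:,.0f}")
```

Under these assumptions the untriaged pile passes four million findings within three years. Change the triage capacity and the slope changes, but any flat capacity against a compounding inflow produces the same shape.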
When virtually every enterprise plans the same architectural change, that's not a preference. That's a verdict on the current model.
Triage isn't optional. It's the bottleneck.
29-minute average breakout time in eCrime intrusions (65% faster than 2024); median time-to-exploit dropped from 63 days to 5 days. Source: CrowdStrike Global Threat Report 2026 + Mondoo State of Vulnerabilities 2026.
"Breakout time: 29 minutes. The fastest recorded breakout was 27 seconds. 65% faster than 2024." — CrowdStrike Global Threat Report 2026
That's the average time from initial access to lateral movement in eCrime intrusions, according to CrowdStrike's Global Threat Report.
The speed cascade across the kill chain is documented by multiple independent sources:
Attacker hand-off time: 22 seconds (down from 8 hours in 2022) — Mandiant M-Trends 2026.
Fastest exfiltration: 72 minutes from initial access (down from 285 minutes — 4× faster YoY) — Unit 42 (n=750+ engagements).
Median time from disclosure to active exploitation: 5 days (down from 63) — Mondoo.
For critical edge device vulnerabilities: a median of zero days between disclosure and mass exploitation — exploitation begins on or before the day of disclosure — Verizon DBIR.
42% of vulnerabilities are exploited before being publicly disclosed at all — CrowdStrike.
If your patch cycle is monthly, a vulnerability disclosed just after patch day and exploited at the 5-day median leaves you exposed for roughly 25 days before your next window closes it.
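The exposure-window arithmetic, using the 5-day median time-to-exploit from the list above (the 30-day cadence is an assumption for illustration):

```python
# Worst-case exposure under a fixed patch cadence: a vulnerability
# disclosed just after patch day waits a full cycle for its fix,
# while exploitation begins at the 5-day median.

PATCH_CYCLE_DAYS = 30        # assumed monthly patch cadence (illustrative)
MEDIAN_EXPLOIT_DAYS = 5      # Mondoo: median disclosure-to-exploitation

# Days during which the vulnerability is both actively exploited
# and still unpatched:
exposure_days = PATCH_CYCLE_DAYS - MEDIAN_EXPLOIT_DAYS
print(exposure_days)  # 25
```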
Vulnerability exploitation has been the number one initial access vector for six consecutive years, accounting for 32% of compromises (Mandiant). The Verizon DBIR saw a 34% increase in exploitation as an initial access vector year over year. This isn't a new trend. It's a sustained, accelerating one that defenders have failed to reverse despite six years of data showing the same pattern.
The volume problem compounds the speed problem:
~59,427 CVEs projected for 2026 — first year to exceed 50,000. Source: FIRST's Vulnerability Forecast.
48,175 CVEs in 2025 — 132 per day — Mondoo.
3.8 trillion exploitation attempts blocked; 36,000 automated reconnaissance scans per second — Fortinet.
AI is scaling the operation. CrowdStrike reported an 89% increase in AI-enabled adversary operations year over year. Microsoft's Digital Defense Report documented an 87% increase in cloud destructive campaigns. Cloud infrastructure, where most organizations now run their applications, is seeing the steepest acceleration in attacks.
Ransomware remains persistent — appearing in 44% of DBIR breaches, with Sophos observing 51 distinct ransomware brands (24 of which were new). Unit 42 found that 87% of incidents now involve multi-surface attacks — meaning attackers don't stop at one vector. They chain exploits across code, identity, cloud, and supply chain simultaneously.
The combined picture is an industrial operation. Defenders running manual processes are competing against manufacturing-grade adversary infrastructure.
89% of security teams are stretched thin or understaffed; 70% of CISOs are open to a career move in 2026. Source: IANS State of the CISO 2026 (n=662 CISOs).
"70% of CISOs are open to a career move in 2026." — IANS State of the CISO 2026 (n=662 CISOs)
IANS Research surveyed 662 CISOs and found:
89% of security teams are stretched thin or understaffed.
52% say their scope is no longer fully manageable.
70% are open to a career move in 2026.
But here's the thing. CISOs have never had more organizational authority. 47% now hold executive-level titles in large enterprises (up from 33% in 2023). 36% report outside IT entirely — to the CEO, COO, General Counsel, or CRO.
The title elevated. The resources didn't follow.
Cobalt's survey of 450 security leaders measured the psychological cost: security team confidence dropped 13 points in a single year, from 64% to 51%. The people responsible for protecting organizations are below the halfway mark in confidence for the first time on record.
The AI dimension adds a new layer. Arkose Labs surveyed 300 enterprise leaders and found:
97% expect a material AI-agent security incident within 12 months. 49% expect it within six.
Only 6% of security budget is allocated to AI agent risk.
47% of organizations have delayed production releases specifically due to AI API security concerns — Salt Security.
The identity dimension makes the pressure worse. Unit 42 found that 89% of their investigations involved identity weakness as a contributing factor, with 65% of initial access being identity-driven.
CISOs are accountable for a security surface that now spans code, cloud, identity, AI agents, and supply chain. The scope expanded. The budget didn't follow. The staff didn't follow. And 70% are open to walking away.
For the organizations those CISOs leave behind, the institutional knowledge goes with them. The security program's effectiveness becomes dependent on whoever comes next — starting from scratch with the same understaffed team and the same overwhelming backlog.
Endor Labs reports 87% AI vulnerability rate; Cycode reports 62%; Opsera reports a 15-18% increment over human code. All three are correct — they measure different things. Source: Section 8 reconciliation across Endor Labs, Cycode, and Opsera 2026 datasets.
One report says AI hasn't changed attacker behavior. Another says AI-enabled attacks are up 89%. Both are right.
Contradiction 1 — AI's impact on attackers. CrowdStrike measures volume (89% more adversary operations used AI tools). Sophos measures technique (their incident response cases showed no fundamentally new attack methods attributable to AI). GenAI made phishing faster and more polished, but didn't introduce novel exploitation techniques. The distinction matters: AI is scaling existing attacks, not creating new categories of attacks. Yet.
Contradiction 2 — Dwell time. Mandiant reports 14 days globally (up from 11). Sophos reports 3 days. The difference is sample composition. Mandiant handles complex APT and espionage engagements where attackers maintain persistence longer. Sophos handles more commodity ransomware where the attacker's goal is speed to encryption, not stealth.
Contradiction 3 — AI vulnerability rates. This is where the spread is widest:
Endor Labs: 87% of AI agent code has at least one vulnerability (full agentic workflows — most complex scenario).
Cycode: 62% from the latest LLMs (standalone code generation).
Opsera: 15-18% more than human code (incremental difference vs. human baseline).
All three confirm the same direction: AI code is measurably less secure.
Contradiction 4 — Open-source ecosystem trajectory. ReversingLabs shows 73% overall increase in malicious open-source packages, but PyPI malware dropped 43% and NuGet dropped 60%. Platform-level security investments work. The aggregate gets worse because attackers shift to less-defended ecosystems. The implication: security investment in specific platforms yields measurable results, even as the overall threat picture deteriorates.
Contradiction 5 — Ransomware prevalence. Verizon DBIR reports ransomware in 44% of breaches. Mandiant M-Trends says only 13% of their investigations. The gap is dataset composition. The DBIR aggregates broadly across many contributors with a sample skewed toward commodity incidents. Mandiant handles high-end incident response where ransomware may be delegated to separate teams or insurers. Both are accurate for their populations.
Contradiction 6 — Microsoft vulnerability "improvement." BeyondTrust's Microsoft Vulnerability Report reveals a trap. Total Microsoft vulnerabilities dropped 6% — sounds like progress. But:
Critical vulnerabilities doubled.
Azure and Dynamics 365 criticals went up 9×.
Office vulnerabilities tripled to 157, with critical bugs increasing 10×.
Organizations tracking headline vulnerability counts would see improvement. Organizations tracking exploitable severity would see an acceleration.
These contradictions aren't weaknesses in the data. They're the most analytically valuable findings in this entire analysis. Single-report narratives flatten complexity. Thirty reports reveal it.
The creation-remediation gap is the structural pattern across all 30 reports: detection capacity is industrial, remediation capacity is artisanal, and the gap widens every year. Source: synthesis across all 30 reports cited above.
Thirty reports. Seven clusters. One consistent pattern: the security industry has scaled detection to industrial capacity and left remediation at artisanal capacity. Every metric that measures finding vulnerabilities is going up. Every metric that measures fixing them is flat or declining. This isn't a tool gap. It's an architectural mismatch.
Three structural conclusions emerge from the data.
The creation-remediation gap is permanent. This is not a temporary imbalance that will correct as tools mature. AI code generation is accelerating vulnerability creation while security team headcount is flat or declining. The gap will widen every year unless the architecture of defense changes.
The industry must shift from find-and-prioritize to triage-and-fix. Detection-only models cannot close a gap where time-to-exploit is 5 days and mean time to remediate is 252. Prioritization alone doesn't reduce the backlog; it just sorts it. The 95% of DevSecOps leaders who expect automated remediation to become standard aren't expressing a preference. They're recognizing a mathematical necessity.
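The point about prioritization can be made concrete in a few lines. The findings and severity scores below are invented for illustration:

```python
# Prioritization reorders the backlog; only remediation shrinks it.
findings = [
    {"id": "F-1", "cvss": 9.8},   # hypothetical findings with
    {"id": "F-2", "cvss": 4.3},   # invented CVSS scores
    {"id": "F-3", "cvss": 7.5},
]

# Sorting by severity changes the order, not the size.
prioritized = sorted(findings, key=lambda f: f["cvss"], reverse=True)
assert len(prioritized) == len(findings)   # same backlog, different order

# Only actually fixing items reduces it.
remaining = [f for f in prioritized if f["cvss"] < 7.0]  # after fixing high/critical
assert len(remaining) < len(findings)
```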
Three predictions, grounded in the data:
AI-attributable CVEs will exceed human-attributable CVEs for certain vulnerability classes by end of 2027. Particularly XSS and injection flaws. Georgia Tech's 6→15→35 monthly trajectory and Cycode's 86-88% vulnerability generation rates make this directionally inevitable.
At least one major breach will be publicly traced to AI-generated code in 2026. With 42% of committed code AI-generated, 52% of developers not consistently verifying it, and Georgia Tech already tracking 74 confirmed AI-attributed CVEs, the conditions are set.
Automated remediation will move from emerging category to standard line-item budget consideration in a majority of enterprise security programs by 2027. The projection rests on three converging signals: 95% of DevSecOps leaders expect automation as standard practice (ActiveState), 97% plan to consolidate their AppSec stack within a year (Cycode), and 82% carry growing security debt (Veracode). When 95% of leaders agree on the destination and 97% are actively dismantling the current stack to get there, the budget reallocation follows within 18-24 months.
The data doesn't tell organizations what to buy. It tells them that the current approach — more scanners, more alerts, more manual review — is structurally incapable of keeping pace.
The organizations that figure out how to close the creation-remediation gap first will compound their advantage. The rest will compound their debt.
This analysis synthesizes quantitative findings from 30 security industry reports published between November 2025 and April 2026, organized into four categories:
Flagship Threat Reports (9): Verizon DBIR, Mandiant M-Trends, CrowdStrike Global Threat Report, IBM X-Force, Microsoft Digital Defense, Sophos Active Adversary, Unit 42 IR Report, Fortinet Threat Landscape, FIRST Vulnerability Forecast.
AppSec & DevSecOps (9): Veracode SoSS, Datadog State of DevSecOps, GitLab DevSecOps, Orca State of AppSec, ActiveState Container Security, Cobalt State of Pentesting, OX Security AppSec Benchmark, GitGuardian Secrets Sprawl, BeyondTrust Microsoft Vulnerability Report.
AI Code Security (7): Endor Labs/Carnegie Mellon, SonarSource Developer Survey, Cycode AI-Era Product Security, Opsera AI Coding Impact, Georgia Tech Vibe Security Radar, Arkose Labs Agentic AI, Salt Security AI/API.
Vulnerability & Supply Chain (5): Mondoo State of Vulnerabilities, ReversingLabs SSCS, DigiCert Software Supply Chain, IANS State of the CISO, plus cross-cutting data from the categories above.
We extracted data points from each report's key findings, statistical tables, and detailed analysis sections. Stats are attributed to their source report by name, publisher, and date throughout. Where reports present conflicting data, both figures are included with reconciliation analysis (Section 8).
Limitations:
Every report in this dataset was produced by a vendor or institution with potential bias toward their product category or research focus.
Dataset sizes, methodologies, and time periods vary across reports.
Survey sample sizes range from 250 (ActiveState) to 3,266 (GitLab).
Telemetry datasets range from 661 cases (Sophos) to 22,052 incidents (Verizon).
Academic research (Georgia Tech, Carnegie Mellon) uses controlled benchmarks with different validity constraints than industry surveys.
This meta-analysis identifies cross-report patterns and flags contradictions; it does not independently validate individual findings.
How to use the source links: Every statistic in this analysis is attributed to its source report by publisher and date, with an inline link to the original publication. For readers who want to verify a specific claim, click the publisher name in the relevant section to access the original report.

If the patterns above resonated, these Pixee posts go deeper on specific threads in the data:
Pixee: The 274x AI Code Security Problem — why AI code volume × security debt × triage capacity creates a 274× backlog multiplier.
Pixee: The Hidden Cost of AppSec Team Time on Triage — quantifying the 80% of AppSec team time burned on false-positive triage.
Pixee: Your Security Backlog Is a Solvable Problem — a four-step plan to close the creation-remediation gap.