Your shadow AI problem is coming from the executive floor.
BlackFog research this week found 69% of C-suite executives prioritize AI efficiency over privacy concerns. That percentage makes leadership the primary driver of unsanctioned AI adoption.
Do as we say, not as we do, or something like that.
When you add 1.5 million unmonitored but presumably sanctioned AI agents operating across large enterprises, you've got an invisible workforce rife with potential security vulnerabilities.
Meanwhile, the exploitation of supply chain vulnerabilities continues. This week's flavor saw the reveal of how several nation-state actors compromised the tools developers already trust.
Specifically, Chinese APT Lotus Blossom hijacked Notepad++ updates for six months. Meanwhile, OpenVSX publisher accounts got compromised to distribute malware through VS Code extensions. One successful insertion, thousands of environments.
Last, we wrote last week about a research study showing automated pentesting having a moment (cheaper, better). This week brings the rebuttal.
BlackFog's research reframes shadow AI as a governance failure, not a user education problem. 49% of employees use unapproved AI tools. The finding that 69% of C-suite executives and 66% of directors prioritize speed over privacy means leadership drives this adoption rather than preventing it.
The data exfiltration pathways multiply. 34% of employees use free versions of company-approved tools, bypassing enterprise security controls entirely. Separately, Gravitee.io research identified 1.5 million unmonitored AI agents across large enterprises—an "invisible workforce" operating outside IT oversight. Gartner projects the Agent Management Platform (AMP) market will reach $15 billion by 2029, with current estimates putting total non-human identities at 8-50 million per enterprise, projected to double by year-end.
The types of data flowing into shadow tools: research and datasets (33%), employee PII (27%), financial and sales data (23%). Gravitee.io reports 88% of firms have experienced or suspected a security incident related to AI agents in the last 12 months. Meanwhile, researchers at Varonis disclosed the "Reprompt" attack and Checkmarx identified "EchoLeak" (CVE-2025-32711)—both demonstrating how attackers bypass AI tool guardrails to exfiltrate sensitive data through prompt manipulation.
Traditional prohibition approaches can't work when executives override security policies for productivity gains. This isn't a training problem—it's structural incentive misalignment.
When the compliance risk sits in the C-suite, technical controls won't solve it. What's your governance model when the people approving AI policy are the ones violating it?
Last week we covered the ARTEMIS study showing AI pentesting costs $18/hour vs $60/hour for humans. This week brings the counterweight: cheaper doesn't mean comprehensive.
Doyensec's 60-person-day audit of Outline OSS compared three AI security platforms against human researchers testing the same codebase. The AI tools missed critical vulnerabilities that manual testing caught, while generating noise that diverted attention from real issues. The $18/hour AI agent may generate significant triage work for every hour it runs if not properly supervised.
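The triage-overhead point is worth making concrete. A quick break-even sketch, using the $18 and $60 hourly rates from the cited research; the triage ratio is a hypothetical variable, not a measured figure:

```python
# Break-even sketch: blended cost of AI pentesting once human triage of its
# findings is priced in. Rates come from the cited study; the triage ratio
# is an illustrative assumption.
AI_RATE = 18.0     # $ per AI-agent hour
HUMAN_RATE = 60.0  # $ per human pentester hour

def effective_rate(triage_hours_per_ai_hour: float) -> float:
    """Cost of one AI hour plus the human triage it generates."""
    return AI_RATE + HUMAN_RATE * triage_hours_per_ai_hour

# The AI agent stops being cheaper once each of its hours demands this much
# human triage: (60 - 18) / 60 = 0.7 hours, i.e. 42 minutes.
break_even = (HUMAN_RATE - AI_RATE) / HUMAN_RATE
print(effective_rate(break_even))  # → 60.0
```

If every AI hour produces 42 minutes of noise for a human to sift, the cost advantage evaporates entirely, before accounting for the vulnerabilities the AI missed.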
Conversely, Help Net Security's evaluation of BugTrace-AI, Shannon, and CAI showed these tools now "mimic human tester workflows" for certain attack classes. AI agents demonstrate superiority in pattern-matching and systematic enumeration—outperforming humans in specific CLI tasks—but exhibit critical failures in complex business logic and GUI-based interactions.
AI security tooling effectiveness varies dramatically by use case and attack sophistication. Some tools excel at pattern-based vulnerabilities. Complex business logic? Still requires human judgment.
AI pentesting is cheap. AI pentesting that finds what humans find is not the same thing. The question isn't whether to use AI security tools—it's understanding which attack classes they handle versus where they create blind spots.
Kaspersky's investigation into the Notepad++ supply chain compromise running June through December 2025 revealed deliberate infrastructure targeting. Chinese state-sponsored Lotus Blossom group hijacked the update mechanism at the hosting provider level, selectively intercepting and redirecting update traffic for six months. The campaign targeted specific IP ranges associated with government and financial institutions.
The strategic logic: attackers prize distribution points because one successful insertion delivers access to thousands of environments simultaneously. Socket.dev's analysis of the GlassWorm attack showed how compromised OpenVSX publisher accounts distributed malware through VS Code extensions to macOS users. eScan Antivirus joined the list of security tools weaponized against their users.
Forrester's analysis calls this the "developer tool blind spot"—tools without enterprise licensing operate outside IT visibility. You can't verify update integrity for software you don't know is installed.
Meanwhile, Russia's APT28 weaponized CVE-2026-21509 (Microsoft Office OLE bypass) within 72 hours of patch release, demonstrating that state actors move fast when they want to. But the Notepad++ campaign shows they're equally willing to play the long game when the target is worth it.
Your developer tool inventory has gaps. The tools without licensing—the ones developers install themselves—are exactly what nation-states target. Can you enumerate every auto-updating tool on your developers' machines?
JFrog Security Research discovered Metro4Shell (CVE-2025-11953), actively exploited since December 21, 2025, achieving remote code execution on both Windows and Linux developer systems. The vulnerability stems from improper input validation in the Metro bundler's HTTP endpoint: crafted HTTP requests to the Metro development server yield unauthenticated remote code execution whenever the server binds to public interfaces.
Despite patches being available, thousands of internet-accessible instances remain vulnerable. This isn't a sophisticated supply chain attack—it's development servers improperly exposed to external networks. VulnCheck's honeypot network detected the first operational exploitation, and confirmation from multiple security firms validates this as a widespread campaign.
The target profile: development environments where the Metro server binds to 0.0.0.0 rather than localhost. A dev server that "shouldn't" be reachable from the internet often is. Attackers actively scan for these misconfigurations.
If you run React Native development environments, check network exposure now. The patch exists. The exploitation is active. The scan is probably already happening against your infrastructure.