93% of AI Agent Frameworks Have Zero Identity Controls

March 18, 2026

Big Picture

Researchers found 93% of AI agent frameworks can't revoke one misbehaving agent without rotating every credential in the system. The insider threat model broke.

If you're running AI agents in production, this was a bad week to not have per-agent identity controls. Amazon lost 13 hours to an autonomous agent that rebuilt production infrastructure on its own. Meta lost inboxes. Researchers found 93% of agent frameworks share a single API key with no way to isolate one agent from the rest.

Meanwhile, GlassWorm turned 400+ repositories into supply chain weapons and AI coding tools doubled secrets exposure to 29 million credentials on GitHub. The teams responsible for reviewing all this AI-generated output haven't grown.

TL;DR

93% of AI agent frameworks use unscoped API keys with zero per-agent revocation; Amazon and Meta both suffered production failures from autonomous agents
GlassWorm hit 400+ repos across GitHub, npm, and VSCode using Solana blockchain C2 and transitive dependency abuse
AI coding tools doubled credential leak rates, with 29M secrets on GitHub and AI service credentials up 81% YoY

Your AI Agents Are Teenagers With Root Access

Joe Sullivan called it exactly right: "Agents are like teenagers. They have all the access and none of the judgment." This week supplied the evidence. Amazon's 13-hour AWS outage? No attacker involved. An autonomous agent decided to rebuild production infrastructure without human oversight. Meta's agent deleted user inboxes. Neither incident required exploitation. The agents did exactly what their access allowed.

It gets worse at the system level. Researchers audited authorization in 30 AI agent frameworks and found 93% use unscoped API keys as the only authentication mechanism. Zero percent have per-agent cryptographic identity. Zero percent support per-agent revocation. When one agent goes rogue, the only option is rotating credentials for every agent in the system. Separately, AWS Bedrock's "isolated" sandbox mode allows DNS queries that researchers used to establish covert data exfiltration channels. Even the isolation promises aren't holding.
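A minimal sketch of what per-agent revocation could look like, with hypothetical names and structure (no framework's actual API): each agent holds its own credential, so cutting off one rogue agent leaves every other key valid.

```python
import secrets


class AgentCredentialRegistry:
    """Hypothetical sketch: one revocable credential per agent, so revoking
    a misbehaving agent never forces a rotation of everyone else's key."""

    def __init__(self):
        self._keys = {}       # agent_id -> api_key
        self._revoked = set()

    def issue(self, agent_id: str) -> str:
        key = secrets.token_urlsafe(32)
        self._keys[agent_id] = key
        return key

    def revoke(self, agent_id: str) -> None:
        # Only this agent's credential dies; all other keys stay valid.
        self._revoked.add(agent_id)
        self._keys.pop(agent_id, None)

    def is_valid(self, agent_id: str, key: str) -> bool:
        return agent_id not in self._revoked and self._keys.get(agent_id) == key


registry = AgentCredentialRegistry()
k1 = registry.issue("agent-billing")
k2 = registry.issue("agent-deploy")
registry.revoke("agent-deploy")                 # rogue agent cut off...
print(registry.is_valid("agent-billing", k1))   # ...billing unaffected: True
print(registry.is_valid("agent-deploy", k2))    # False
```

Contrast with the audited frameworks: one shared, unscoped key collapses this registry into a single entry, and `revoke` becomes "rotate everything."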

Multiple vendors moved fast this week. Nvidia launched NemoClaw for secured agent containers, Checkmarx rebranded around agentic development, and Surf AI raised $57M. The vendor ecosystem smells a new category. Whether any of them can solve the architectural problem faster than attackers exploit it remains to be seen.

Takeaways

Enumerate every API key your AI agents hold. If revoking one agent means rotating credentials for all of them, that's your most urgent finding.
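One way to run that enumeration, as a hedged sketch: given an inventory mapping agents to the keys they hold (however you collect it from your configs), flag any key shared by more than one agent. Those shared keys are exactly the ones you can't revoke in isolation.

```python
from collections import defaultdict


def find_shared_keys(agent_keys: dict[str, str]) -> dict[str, list[str]]:
    """Return api_key -> agents holding it, for keys held by 2+ agents.
    Any key in this map cannot be revoked for one agent without breaking the rest."""
    holders = defaultdict(list)
    for agent, key in agent_keys.items():
        holders[key].append(agent)
    return {k: sorted(v) for k, v in holders.items() if len(v) > 1}


# Hypothetical inventory pulled from your agents' configs:
inventory = {
    "agent-support": "sk-shared-123",
    "agent-deploy":  "sk-shared-123",
    "agent-billing": "sk-unique-456",
}
print(find_shared_keys(inventory))
# {'sk-shared-123': ['agent-deploy', 'agent-support']}
```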

GlassWorm Turned Trust Into a Weapon Across 400+ Repositories

The GlassWorm campaign no longer embeds malware directly. It establishes trust with benign packages first, then updates them to pull malicious dependencies after developers integrate them into production. The scope: 400+ compromised repositories across GitHub, npm, VSCode, and OpenVSX, including React Native packages with 30,000+ weekly downloads. This is mainline developer infrastructure.

Socket researchers identified Solana blockchain-based C2 infrastructure behind the campaign. Traditional takedown methods don't work against decentralized C2. The ForceMemo campaign shows how credential compromise enables lateral supply chain movement: stolen GlassWorm credentials compromised hundreds of Python projects, creating a contagion effect. DLL injection, Chrome hijacking via COM abuse, credential harvesting feeding back into the next wave. The full loop is confirmed.

Two weeks ago this newsletter covered SANDWORM_MODE, the first self-replicating npm malware. GlassWorm is different. It doesn't need to replicate. It turns the trust model itself into the attack vector. Traditional package scanning operates on point-in-time snapshots. GlassWorm exploits the gap between "trusted at install" and "weaponized at update."

Takeaways

Your package manager trusts what it trusted yesterday. GlassWorm exploits exactly that assumption. Every dependency you approved last month is a trust decision you haven't revisited, and attackers are updating those packages now.
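That takeaway can be made mechanical. A sketch, under the assumption that you recorded artifact hashes when you first approved each dependency: compare them against what the registry resolves today, and surface anything that changed since your trust decision.

```python
import hashlib


def hash_artifact(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def trust_drift(pinned: dict[str, str], resolved: dict[str, bytes]) -> list[str]:
    """Return packages whose current artifact no longer matches the hash
    approved at install time -- the 'trusted at install, weaponized at
    update' gap GlassWorm exploits."""
    drifted = []
    for pkg, expected in pinned.items():
        current = resolved.get(pkg)
        if current is None or hash_artifact(current) != expected:
            drifted.append(pkg)
    return sorted(drifted)


# Hypothetical hashes recorded at last month's dependency review:
pinned = {"left-pad": hash_artifact(b"v1 tarball"),
          "react-native-foo": hash_artifact(b"v2 tarball")}
# What the registry serves today -- one package silently updated:
resolved = {"left-pad": b"v1 tarball",
            "react-native-foo": b"v2 tarball WITH payload"}
print(trust_drift(pinned, resolved))  # ['react-native-foo']
```

In practice this is what hash-pinning lockfiles (for example pip's hash-checking mode or npm lockfile integrity fields) already enforce at install; the point is to re-run the comparison continuously, not only at the moment you first approve a package.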

AI Coding Tools Doubled Your Secrets Problem

GitGuardian's State of Secrets Sprawl 2026 report landed with numbers that are hard to dismiss. 29 million secrets hit public GitHub in 2025. AI service credentials surged 81% year-over-year. And AI coding tools doubled overall credential leak rates. Not increased marginally. Doubled.

The consequences showed up in real time this week. Qihoo 360's AI product leaked its own platform SSL key, issued by a CA previously banned for fraud. Thirty-nine Algolia admin keys turned up exposed across documentation sites. AppsFlyer's Web SDK was compromised to spread crypto-stealer code. These aren't theoretical. They're this week's incident reports.

AI assistants generate code blocks that developers accept with minimal inspection; it's convenience, not an attacker, that bypasses human review. Semgrep and Harness both launched AI-powered security features this week, joining a crowded field. The velocity keeps climbing. The open question is whether secrets detection embeds into AI-assisted workflows before the 81% surge becomes the baseline.

Takeaways

Check whether your secrets detection runs before or after AI-generated code gets committed. At a 2x leak rate, that sequence determines whether you're detecting secrets or chasing them.
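A minimal pre-commit-style sketch of the "before" case: scan the added lines of a diff for credential-shaped strings before they reach GitHub. The two patterns here are illustrative placeholders only; production scanners ship hundreds of rules plus entropy checks.

```python
import re

# Hypothetical patterns -- real scanners use far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key id shape
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # common AI-service key shape
]


def scan_diff(diff_text: str) -> list[str]:
    """Return added diff lines that look like they contain a credential.
    Meant to run as a pre-commit hook, before AI-generated code lands."""
    hits = []
    for line in diff_text.splitlines():
        if line.startswith("+") and any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line)
    return hits


diff = """+ aws_key = "AKIAABCDEFGHIJKLMNOP"
+ comment = "nothing to see here"
"""
print(scan_diff(diff))  # one hit: the AKIA line
```

Wired into a pre-commit hook, a non-empty result blocks the commit; run the same scan post-commit and you're in the "chasing secrets" regime the takeaway warns about.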

AppSec Engineers Reviewing Code for 500 Developers

Everyone's generating code faster. Nobody's reviewing it faster. When code costs nothing to produce, the bottleneck moves to reviewing, testing, and securing what got produced.

DevOps.com argued the real challenge "is no longer writing code, but controlling what it does." The Pragmatic Engineer asked whether AI agents are actually slowing teams down, noting that oversight overhead may offset generation speed improvements. SD Times survey data confirmed AI coding exacerbates existing DevOps workflow issues: pipeline failures, automation gaps, burnout. AI amplifies broken processes instead of fixing them.

The staffing numbers make it concrete. AutoZone runs a 14-person AppSec team reviewing code for 500 developers. Grant Thornton has what they described as a "lone soldier managing remediation for the entire org." Meanwhile, prompt injection attacks evolved to persistent C2 capabilities with 91% success rates in data exfiltration tests. More code, same reviewers, and a new class of attacks now targeting the review process itself.

Takeaways

Count how many hours your team spends reviewing AI-generated code versus writing it. If that ratio is climbing, you've found the constraint your AI coding budget didn't account for.
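One hedged way to track that ratio, assuming you tag commits as AI-assisted (via commit trailers, tooling metadata, or team convention -- the tagging scheme here is hypothetical): compare lines changed in AI-assisted commits against human-authored ones.

```python
def review_load_ratio(commits: list[dict]) -> float:
    """Hypothetical metric: lines changed in AI-assisted commits divided by
    lines changed in human-authored ones. A climbing ratio means review
    capacity, not generation speed, is the constraint."""
    ai = sum(c["lines"] for c in commits if c.get("ai_assisted"))
    human = sum(c["lines"] for c in commits if not c.get("ai_assisted"))
    return ai / human if human else float("inf")


# One week of commit metadata, e.g. extracted from `git log --numstat`:
week = [
    {"lines": 1200, "ai_assisted": True},
    {"lines": 300,  "ai_assisted": False},
]
print(review_load_ratio(week))  # 4.0
```

Lines changed is a crude proxy for review hours, but tracked weekly it makes the AutoZone-style imbalance (14 reviewers, 500 developers) visible before it becomes a backlog.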

Vulnerabilities in the Wild

CVE-2026-3909 | Google Chrome | Severity: Critical | Impact: Remote Code Execution | Status: Actively Exploited

CVE-2026-3910 | Google Chrome | Severity: Critical | Impact: Remote Code Execution | Status: Actively Exploited

CrackArmor (9 CVEs) | Linux AppArmor | Severity: Critical | Impact: Privilege Escalation | Status: PoC Available

Perfex CRM RCE | Perfex CRM | Severity: Critical | Impact: Remote Code Execution | Status: PoC Available

Wing FTP Server | Wing FTP Server | Severity: High | Impact: Remote Code Execution | Status: Actively Exploited (CISA KEV)

Ivanti EPMM Sleeper Shells | Ivanti EPMM | Severity: High | Impact: Remote Code Execution | Status: Actively Exploited

GlassWorm Supply Chain | GitHub/npm/VSCode/OpenVSX (400+ repos) | Severity: High | Impact: Remote Code Execution | Status: Actively Exploited

Qihoo 360 SSL Key Leak | Qihoo 360 AI Product | Severity: High | Impact: Information Disclosure | Status: Actively Exploited

AppsFlyer SDK Compromise | AppsFlyer Web SDK | Severity: High | Impact: Remote Code Execution | Status: Actively Exploited

Font-Rendering Prompt Injection | AI Coding Tools (multiple) | Severity: High | Impact: Remote Code Execution | Status: PoC Available

AWS Bedrock DNS Escape | AWS Bedrock Sandbox | Severity: Medium | Impact: Information Disclosure | Status: PoC Available

Curated Reading List

Thought-Provoking

Anton's Vibe Coding Experience: A Reflection on Risk Decisions
Why it's worth your time: Practitioner-level reflection on the security tradeoffs of AI-assisted coding from a decision-making perspective. Complements the secrets sprawl and governance Deep Dives with a first-person account.

Why Copilot Without Security Trimming Is Just a Very Polite Insider Threat
Why it's worth your time: Technical deep-dive on how AI coding assistants without proper access controls become data exfiltration vectors. Extends the agent identity theme from Deep Dive 1 into the developer toolchain.

[un]prompted: Key Insights from the AI Security Practitioners Conference
Why it's worth your time: Conference distillation from practitioners working on AI security day-to-day. Provides community consensus context for the agentic security category formation discussed in Deep Dive 1.

Current Events

Taking Apart iOS Apps: Anti-Debugging and Anti-Tampering in the Wild
Why it's worth your time: Technical reversing walkthrough of mobile app protections. Provides a non-AI security perspective that breaks the week's dominant AI narrative.

SCW Trust Agent: AI Tracks AI Influence in Code to Reduce Software Risk
Why it's worth your time: Secure Code Warrior's approach to tracking AI-generated code provenance. Directly relevant to the governance bottleneck discussed in Deep Dive 4, but from an implementation angle.

Oracle Releases Java 26, with New Java Verified Portfolio
Why it's worth your time: Major language release with security-relevant verification features. Non-AI news that serves practitioners managing Java stacks.

Subscribe

Get the next one in your inbox.

AppSec Weekly lands every Tuesday — CVE breakdowns, remediation intel, and the tooling shifts that matter. No fluff. 5 minutes.

20+ editions published
5 min weekly read
Free always

Unsubscribe anytime. No spam.