AppSec Weekly Content Briefing
December 3, 2025

98% of Companies Deploy AI Agents, 79% Have No Security Policy | Nov 27 - Dec 3

11 min

Big Picture

EMA study reveals enterprises racing to deploy autonomous AI agents without the governance frameworks to secure them.

This week's AI-centric security insight comes in the form of data around how AI agent deployment is outpacing internal guardrails for security infrastructure and the compliance policies designed to manage it.

The study highlights that 78% of companies are deploying AI agents with zero AI security policies in place (details below).

Combined with all the coverage over the last few weeks on developer/IDE attacks and prompt injection success this is quickly becoming ground zero for CISOs scrambling to overcome their blind spots.

TL;DR

98% of enterprises deploy AI agents but 79% without written policies deployed anyway; EMA warns of "agency abuse" attacks in 2026
Fragmented tooling creates 4-week MTTR for critical vulnerabilities; teams juggle 4 detection tools average while 56% report staff strain
Google patches two actively exploited Android zero-days (CVE-2025-48633, CVE-2025-48572) among 107 total December bulletin vulnerabilities
This week: 120 vulnerabilities disclosed | 3 actively exploited | 2 zero-days

The AI Agent Governance Gap

The EMA study data paints a stark picture of AI deployment vs. security policies:

98% adoption among companies with 500+ employees

79% policy gap: Organizations without written policies deployed anyway

41% IAM dissatisfaction: Enterprises report security concerns with current identity providers

56-45% cost concerns: Mid-size (56%) and large enterprises (45%) cite unpredictable pricing

One of the core issues here is identity and authentication. Current IAM infrastructure simply wasn't designed to authenticate software agents operating without human oversight, yet it seems like many organizations assume existing identity management can absorb agent identity requirements.

So far that does not seem true.

Takeaways

Experts warn 2026 brings "agency abuse" attacks where threat actors manipulate AI agents with excessive permissions into destructive actions. If you're deploying agents, the question isn't "do we have a policy" but "does our identity infrastructure actually support non-human autonomous actors."

Fragmented Tooling Persists

We've covered (and have partially built Pixee to fix) how tool sprawl hinders security teams' ability to remediate backlogs and keep pace with new AI copilots. This showed up two weeks ago in the CISO burnout report coverage.

This week a Hackuity vulnerability management report puts specific data to what so many of us face internally:

4 detection tools average per organization (cloud/container audits most common at 85%)

4-week MTTR for critical issues

97% have remediation SLAs but half still miss them

56% automation gap: Only 56% report automated vulnerability management

56% staff strain from rising vulnerability volume

43% operational limitations and 41% budget pressures as primary constraints

Takeaways

We all know this fragmented approach creates visibility challenges and inconsistent prioritization. This data shows that tool sprawl creates measurable remediation delays when vulnerability volume is accelerating. The 4-week MTTR for critical issues means most teams fix last month's vulnerabilities while this month's pile up. Organizations with automation demonstrably outperform those without.

AI Coding Tools as Attack Surfaces

The tools developers use to build AI agents are becoming security liabilities. This week had coverage of a few new examples:

OpenAI Codex CLI vulnerability enables malicious code injection through trusted configuration files. Attackers can steal data, gain control, and spread attacks through software projects. It's a prompt injection variant where the AI tool ingests poisoned inputs from repository files developers assume are safe.

Google Antigravity drew security researcher warnings about risks in the agentic development platform related to their ability to handle adversarial inputs.

Security researcher Adam Chester published detailed Claude Code analysis showing how AI coding tools can be manipulated through context windows.

Takeaways

AI tools accelerating code production simultaneously expand the attack surface. Instead of racing ahead on deployment without infrastructure and compliance policies we need to continue to treat AI coding assistants like what they are: powerful automation with expanding attack vectors. Audit data access, implement context isolation where possible, and apply heightened review for AI-touched repositories.

Android Zero-Days and Active Funding in the Space

Google's December 2025 Android bulletin patched 107 vulnerabilities including two actively exploited zero-days (CVE-2025-48633, CVE-2025-48572).

The patch volume is the new normal. What's notable: competitive funding activity validating automated remediation as category imperative.

Zafran Security raised $60M Series C for AI-powered automated fixing ($130M total)

AWS Transform launched for automated code modernization

Clover Security raised $36M for design flaw detection

Takeaways

There's a lot of approaches and needs to cover security across the SDLC. What's not in question is how important it is.

Vulnerabilities in the Wild

Actively Exploited:

CVE-2025-48633 (Google Android) Kernel vulnerability with targeted exploitation confirmed Status: Actively Exploited, Patch Available (December 2025 Bulletin) Source

CVE-2025-48572 (Google Android) Kernel vulnerability with targeted exploitation confirmed Status: Actively Exploited, Patch Available (December 2025 Bulletin) Source

OpenAI Codex CLI Vulnerability (OpenAI Codex CLI) Prompt injection enabling arbitrary code execution via configuration file poisoning Status: Disclosed, Mitigation Available Source

Critical/High Severity:

Google Android December 2025 Bulletin (Multiple CVEs) 107 total vulnerabilities including kernel, framework, and component flaws Source

Google Antigravity Security Issues (Google Antigravity) Security researchers warn of risks in AI coding tool trust model Source

Firefox WebAssembly Flaw (Mozilla Firefox) WebAssembly implementation flaw put 180 million users at risk, undetected for 6 months Source

Glassworm Malware Third Wave (VS Code Extensions) Third wave of malicious VS Code packages deploying Glassworm malware Source

GitHub Actions Supply Chain Attack (GitHub Actions) tj-actions/changed-files compromise exposed CI/CD secrets across 23,000+ repositories Source

PluckCMS 4.7.10 File Upload (PluckCMS) Unrestricted File Upload enabling remote code execution Source

Piwigo 13.6.0 SQL Injection (Piwigo) SQL Injection vulnerability in photo gallery software Source

phpIPAM Multiple Vulnerabilities (phpIPAM 1.6/1.5.1) SQL Injection and Reflected XSS vulnerabilities Source

openSIS Community Edition 8.0 SQL Injection (openSIS) SQL Injection in student information system Source

Your Curated Weekly Reading List

Thought-Provoking:

OAuth Isn't Enough For Agents Why it's worth your time: Deep technical analysis of why current authentication frameworks fail for AI agents. Directly extends the governance gap theme with architectural specifics most briefings miss.

Treating MCP like an API creates security blind spots Why it's worth your time: Model Context Protocol security analysis from MCP Manager creator. Connects AI agent governance to the specific tooling developers actually use.

The Era of the Zombie Tool Why it's worth your time: Caleb Sima on security tool sprawl creating "zombie tools" that consume budget without delivering value. Pairs with Hackuity's 4-tool fragmentation data.

Current Events:

GitHub Actions Supply Chain Attack: tj-actions/changed-files Incident Why it's worth your time: Unit42's technical breakdown of how attackers compromised the tj-actions/changed-files action, exposing CI/CD secrets across 23,000+ repositories.

Undetected Firefox WebAssembly Flaw Put 180 Million Users at Risk Why it's worth your time: Mozilla's purpose-built regression testing missed this flaw for 6 months. Demonstrates testing limitations even at mature organizations.

An Evening with Claude (Code) Why it's worth your time: Security researcher Adam Chester's hands-on analysis of Claude Code attack surfaces. Original research showing how AI coding tools can be manipulated.


Looking to Stay Up to Date with All Things AppSec?

Subscribe to the Weekly AppSec Briefing and never miss a thing.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.