98% of Companies Deploy AI Agents, 79% Have No Security Policy | Nov 27 - Dec 3

December 3, 2025

Big Picture

EMA study reveals enterprises racing to deploy autonomous AI agents without the governance frameworks to secure them.

This week's AI-centric security insight is data showing that AI agent deployment is outpacing both the security guardrails and the compliance policies designed to manage it.

The study highlights that 79% of companies are deploying AI agents with zero AI security policies in place (details below).

Combined with the last few weeks of coverage on developer/IDE attacks and successful prompt injections, this is quickly becoming ground zero for CISOs scrambling to close their blind spots.

The AI Agent Governance Gap

The EMA study data paints a stark picture of AI deployment vs. security policies:

98% adoption: nearly all companies with 500+ employees are deploying AI agents

79% policy gap: Organizations deployed agents without any written AI security policy

41% IAM dissatisfaction: Enterprises report security concerns with current identity providers

56%/45% cost concerns: Mid-size (56%) and large enterprises (45%) cite unpredictable pricing

One of the core issues here is identity and authentication. Current IAM infrastructure simply wasn't designed to authenticate software agents operating without human oversight, yet many organizations seem to assume existing identity management can absorb agent identity requirements.

So far that does not seem true.

Takeaways

Experts warn that 2026 will bring "agency abuse" attacks, where threat actors manipulate AI agents with excessive permissions into destructive actions. If you're deploying agents, the question isn't "do we have a policy?" but "does our identity infrastructure actually support non-human autonomous actors?"
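To make "non-human autonomous actors" concrete, here is a minimal, hypothetical sketch (not from the EMA study and not tied to any particular IAM product) of treating an agent as a first-class identity: short-lived, narrowly scoped credentials, plus an explicit gate on destructive actions.

```python
import secrets
import time
from dataclasses import dataclass, field

# Hypothetical sketch: treat each AI agent as a first-class identity with
# short-lived, narrowly scoped credentials, rather than letting it borrow a
# human session or a long-lived service-account key.

DESTRUCTIVE_ACTIONS = {"delete_repo", "drop_table", "rotate_all_keys"}

@dataclass
class AgentToken:
    agent_id: str
    scopes: frozenset        # explicit allowlist of actions for this agent
    expires_at: float        # short TTL limits the blast radius of theft or misuse
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue_agent_token(agent_id, scopes, ttl_seconds=900):
    """Mint a token scoped to exactly the actions this agent needs, expiring quickly."""
    return AgentToken(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def authorize(token, action, human_approved=False):
    """Deny expired tokens and out-of-scope actions; require explicit human
    sign-off for destructive actions (the 'agency abuse' failure mode)."""
    if time.time() >= token.expires_at:
        return False
    if action not in token.scopes:
        return False
    if action in DESTRUCTIVE_ACTIONS and not human_approved:
        return False
    return True

# A triage agent may read tickets and open pull requests, nothing more.
token = issue_agent_token("triage-agent-01", {"read_tickets", "open_pull_request"})
assert authorize(token, "open_pull_request")
assert not authorize(token, "delete_repo")        # never granted
assert not authorize(token, "drop_table", True)   # approval cannot widen scope
```

The point isn't this specific API; it's that agent credentials should be mintable, expirable, and auditable separately from human identities.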

Fragmented Tooling Persists

We've covered (and have partially built Pixee to fix) how tool sprawl hinders security teams' ability to remediate backlogs and keep pace with new AI copilots. It also showed up two weeks ago in our coverage of the CISO burnout report.

This week, a Hackuity vulnerability management report puts specific numbers to what so many of us face internally:

An average of 4 detection tools per organization (cloud/container audits most common, at 85%)

4-week mean time to remediate (MTTR) for critical issues

97% have remediation SLAs, but half still miss them

Automation gap: only 56% report automated vulnerability management

56% report staff strain from rising vulnerability volume

43% cite operational limitations and 41% cite budget pressures as primary constraints

Takeaways

We all know this fragmented approach creates visibility challenges and inconsistent prioritization. The data shows that tool sprawl creates measurable remediation delays at a time when vulnerability volume is accelerating. A 4-week MTTR for critical issues means most teams are fixing last month's vulnerabilities while this month's pile up. Organizations with automation demonstrably outperform those without.
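To show why consolidating that tooling matters, here is a small, hypothetical sketch (the record shape and SLA thresholds are assumptions, not figures from the Hackuity report) that merges findings from multiple scanners, deduplicates them, and computes MTTR and SLA misses in one place.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical finding records as they might arrive from several scanners.
# Field names and SLA thresholds are illustrative assumptions, not vendor data.
findings = [
    {"tool": "cloud_audit", "cve": "CVE-2025-0001", "asset": "api-gw", "severity": "critical",
     "detected": datetime(2025, 11, 1), "fixed": datetime(2025, 11, 30)},
    {"tool": "container_scan", "cve": "CVE-2025-0001", "asset": "api-gw", "severity": "critical",
     "detected": datetime(2025, 11, 3), "fixed": datetime(2025, 11, 30)},   # duplicate of the above
    {"tool": "sast", "cve": "CVE-2025-0002", "asset": "billing", "severity": "high",
     "detected": datetime(2025, 11, 10), "fixed": None},                    # still open
]

SLA = {"critical": timedelta(days=14), "high": timedelta(days=30)}

# Deduplicate: the same CVE on the same asset reported by two tools is one issue.
unique = {}
for f in findings:
    key = (f["cve"], f["asset"])
    if key not in unique or f["detected"] < unique[key]["detected"]:
        unique[key] = f

closed = [f for f in unique.values() if f["fixed"] is not None]
mttr_days = mean((f["fixed"] - f["detected"]).days for f in closed) if closed else None
sla_misses = [f for f in closed if f["fixed"] - f["detected"] > SLA[f["severity"]]]

print(f"unique issues: {len(unique)}, closed: {len(closed)}")
print(f"MTTR (days): {mttr_days}")
print(f"SLA misses: {len(sla_misses)} of {len(closed)}")
```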

AI Coding Tools as Attack Surfaces

The tools developers use to build AI agents are becoming security liabilities. This week brought coverage of a few new examples:

An OpenAI Codex CLI vulnerability enables malicious code injection through trusted configuration files. Attackers can steal data, gain control, and spread attacks through software projects. It's a prompt injection variant in which the AI tool ingests poisoned inputs from repository files developers assume are safe.

Google Antigravity drew warnings from security researchers about the agentic development platform's handling of adversarial inputs.

Security researcher Adam Chester published a detailed analysis of Claude Code showing how AI coding tools can be manipulated through their context windows.

Takeaways

The same AI tools that accelerate code production also expand the attack surface. Instead of racing ahead on deployment without infrastructure and compliance policies, we need to keep treating AI coding assistants for what they are: powerful automation with expanding attack vectors. Audit data access, implement context isolation where possible, and apply heightened review to AI-touched repositories.
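As one way to act on the "context isolation" advice, here is a minimal, hypothetical sketch (not the actual Codex CLI mitigation, and the patterns are purely illustrative) that flags instruction-like text in repository files so they get human review before entering an AI coding assistant's context.

```python
import re
from pathlib import Path

# Heuristic patterns suggesting a repo file is trying to instruct an AI
# assistant rather than configure the project. Illustrative only; a real
# control would pair this with allowlists and human review.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) (instructions|rules)",
    r"you are (now )?(an|a) (assistant|agent)",
    r"run the following (command|shell)",
    r"exfiltrate|send .* to https?://",
]

CONFIG_GLOBS = ["*.md", "*.toml", "*.yaml", "*.yml", "*.json"]

def flag_suspect_files(repo_root):
    """Return config/docs files containing instruction-like text that should be
    reviewed by a human before being included in an AI tool's context."""
    suspect = []
    for glob in CONFIG_GLOBS:
        for path in Path(repo_root).rglob(glob):
            try:
                text = path.read_text(errors="ignore").lower()
            except OSError:
                continue
            if any(re.search(pattern, text) for pattern in INJECTION_PATTERNS):
                suspect.append(path)
    return suspect

if __name__ == "__main__":
    for path in flag_suspect_files("."):
        print(f"review before use in AI context: {path}")
```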

Android Zero-Days and Active Funding in the Space

Google's December 2025 Android bulletin patched 107 vulnerabilities including two actively exploited zero-days (CVE-2025-48633, CVE-2025-48572).

The patch volume is the new normal. What's notable is the competitive funding activity validating automated remediation as a category imperative:

Zafran Security raised $60M Series C for AI-powered automated fixing ($130M total)

AWS Transform launched for automated code modernization

Clover Security raised $36M for design flaw detection

Takeaways

There are a lot of approaches, and a lot of needs, when it comes to covering security across the SDLC. What's not in question is how important it is.