Happy New Year. 2025. What an AI-fueled ride.
We published our top 10 stories from the 10 weeks we've been running this AppSec Weekly.
If you scan the list, there's a lot: China using Claude to automate attacks, MCPs and dev tools becoming a new attack vector, and massive vulnerabilities like React2Shell.
The Christmas-to-New-Year's week continued the torrid pace, with MongoBleed coming to light on Christmas Day across 87,000 exposed MongoDB servers.
In other news, OWASP published its first Agentic AI Top 10 framework, formally acknowledging that AI systems operating inside infrastructure break traditional perimeter assumptions. And multiple surveys and new data showed what we probably all already know: AI coding tools still struggle with production engineering, especially when it comes to generating secure code.
MongoBleed (CVE-2025-14847) hit on Christmas Day, and attackers started exploiting it within hours. The memory leak in MongoDB's zlib compression lets unauthenticated attackers exfiltrate credentials, session tokens, and API keys from uninitialized memory.
Security researcher Kevin Beaumont confirmed the severity: "You can just supply an IP address of a MongoDB instance and it'll start ferreting out in memory things such as database passwords."
The scale: 87,000+ internet-exposed MongoDB instances vulnerable, with a public proof-of-concept making exploitation accessible to anyone paying attention. CISA added it to the KEV catalog and mandated federal agencies remediate by January 19, 2026.
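If you're triaging your own footprint, a reasonable first pass is checking whether anything answers unauthenticated connections at all. Here's a minimal sketch using pymongo; the host address is hypothetical, and this tests only whether a server accepts anonymous connections, not the memory-leak bug itself.

```python
"""Exposure triage sketch: does a MongoDB instance accept unauthenticated
connections? The host address below is hypothetical; point it at a host you own."""
from pymongo import MongoClient
from pymongo.errors import PyMongoError

def accepts_anonymous(host: str, port: int = 27017, timeout_ms: int = 3000) -> bool:
    try:
        # No credentials supplied; serverSelectionTimeoutMS bounds the wait.
        client = MongoClient(host, port, serverSelectionTimeoutMS=timeout_ms)
        client.admin.command("ping")  # reachable at all?
        client.list_database_names()  # only succeeds if auth is effectively off
        return True
    except PyMongoError:
        return False

if __name__ == "__main__":
    host = "203.0.113.10"  # hypothetical; replace with a host you own
    verdict = "accepts unauthenticated access" if accepts_anonymous(host) else "not openly reachable"
    print(f"{host}: {verdict}")
```

Anything that comes back true belongs behind a firewall or authentication regardless of patch status.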
The pattern is familiar now. React2Shell exploited within hours. IngressNightmare hitting Kubernetes at scale. MongoBleed on Christmas morning. The disclosure-to-exploitation window has compressed beyond what traditional patching workflows can handle.
OWASP released its first Agentic AI Top 10 framework this week. The core message: AI agents operating autonomously inside your infrastructure break perimeter security assumptions.
The architectural analysis is clear: "AI systems don't just generate responses, they take action. Agents trigger workflows, call APIs, update records, fan out across services." Traditional edge security was designed for threats from outside. Agentic AI operates inside the mesh where those controls have limited visibility.
This isn't theoretical. The 39C3 presentation on exploiting AI coding agents showed practical exploits against AI systems that execute code and interact with infrastructure autonomously.
When AI agents trigger internal API calls and execute code without human approval, the attack surface moves to wherever those agents have access. Taken together with our previous coverage of how orgs don't have AI policies in place, it's clear we're still grappling with new security surfaces.
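The control OWASP points toward is gating, not perimeter filtering. Here's a sketch of what that can look like; the tool names, registry, and approve() flow are hypothetical, not from any specific agent framework.

```python
"""Human-in-the-loop gate for agent tool calls. Tool names and the
approve() flow are illustrative, not from any particular framework."""
from typing import Any, Callable

READ_ONLY_TOOLS = {"search_docs", "get_ticket"}            # safe to auto-run
PRIVILEGED_TOOLS = {"update_record", "call_internal_api"}  # require a human

def approve(tool: str, args: dict[str, Any]) -> bool:
    # Stand-in for a real review step (Slack approval, ticketing, etc.).
    return input(f"Agent wants {tool}({args}). Allow? [y/N] ").strip().lower() == "y"

def gated_call(tool: str, args: dict[str, Any],
               registry: dict[str, Callable[..., Any]]) -> Any:
    if tool in READ_ONLY_TOOLS:
        return registry[tool](**args)
    if tool in PRIVILEGED_TOOLS and approve(tool, args):
        return registry[tool](**args)
    # Default deny: unknown tools and unapproved privileged calls both fail.
    raise PermissionError(f"tool {tool!r} denied")

if __name__ == "__main__":
    demo = {"search_docs": lambda query: f"results for {query!r}",
            "update_record": lambda record_id, value: "updated"}
    print(gated_call("search_docs", {"query": "rotation policy"}, demo))
```

The design choice that matters is default deny: an agent that can discover new tools shouldn't be able to invoke them just because they exist.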
Three reports this week highlighted that (for now) AI coding tools may be creating more work than they eliminate, even if the nature of that work shifts.
Survey findings show AI tools are "increasing the blast radius of bad code that needs to be debugged." Multiple sources validated the pattern: AI accelerates code output while quality and security remain questionable.
OpenAI's own SWE-Lancer benchmark demonstrated that even frontier models struggle with real-world engineering tasks. The benchmark tested practical work rather than isolated coding exercises. The gap between "can generate code" and "can ship production software" remains significant.
Analysis of 153 million lines of code reinforced the pattern: AI tools may accelerate development "at the expense of maintainable, quality code."
The tools generating code faster aren't generating code that's easier to maintain, secure, or debug.
Eight cybersecurity acquisitions exceeded $1 billion in 2025, totaling over $84 billion. The headline deals: Google's $32B Wiz purchase and Palo Alto Networks' $25B CyberArk acquisition. The Palo Alto-Google Cloud strategic alliance signals how platform vendors are positioning around AI security.
When $84B changes hands in a year and the largest deals involve platform plays, expect the vendors you evaluate to look different in 12 months.