VulnOps · Mythos-Ready

Automated remediation pipelines, in production since 2024.

The window between discovery and weaponization has collapsed into hours. A permanent acceleration, not a temporary spike.

The disclosure-to-weaponization window has collapsed from 2.3 years in 2019 to 56 days in 2024 to less than 1 day in 2026. Source: Zero Day Clock (Sergej Epp / Sysdig); Cloud Security Alliance, SANS, [un]prompted, OWASP joint paper, April 2026.

Cited framework: The AI Vulnerability Storm — Building a Mythos-Ready Security Program. CSA × SANS × [un]prompted × OWASP GenAI joint paper, April 2026. 17 contributing authors. 70+ named CISO reviewers from Google, Sysdig, Cloudflare, Sophos, Wells Fargo, Rivian, NFL, TransUnion, Justworks, lululemon, GitLab, Atlassian, Salesloft, FDIC.

Trusted by security teams at regulated enterprises

76%

Pull-request merge rate

Up to 95%

False-positive reduction

12

Native scanner integrations

2024

In production since

Risk → Priority Action → Pixee receipt

What the paper says is on fire. What Pixee already operationalizes.

Pixee operationalizes the remediation half of the CSA's Mythos-Ready plan — Priority Actions 1, 5, 6, 10, and 11 — running automated remediation pipelines in production at named customers since 2024.

The CSA / SANS / [un]prompted / OWASP joint paper publishes a 13-item risk register. Four of those risks are CRITICAL or HIGH severity and map cleanly to operational categories Pixee has been shipping since 2024. The cross-walk below reads paper risk → mapped Priority Action → Pixee operational receipt → footnote anchor.

"Detection and response capabilities have not yet been upgraded to match. Alert triage volumes, SIEM correlation speed, and containment authorization latency were designed for human-paced threats."

Risk 4 · p. 16

PA 10 — HIGH

Pre-authorized remediation playbooks generate context-aware pull requests in the same window the scanner flags the finding. The fix lands as a reviewable PR — not an alert in a queue — with codebase-conforming patterns, dependency-aware imports, and existing test-suite compatibility. Machine-speed fix execution as the paper describes it.

"Stakeholder decisions based on pre-AI risk models. Security reporting metrics built on pre-AI assumptions about exploit timelines and attack complexity may no longer reflect actual exposure."

Risk 5 · p. 16

PA 6 — CRITICAL

Exploitability-based triage replaces severity-only scoring. Each finding is evaluated for reachability from untrusted input, deployment context, and the presence or absence of compensating controls — then ranked by what an attacker can actually weaponize, not by a static CVSS string. The output is a risk model that survives a board-level question about real exposure.
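The ranking logic described above can be sketched in a few lines. This is an illustrative model only: the field names, weights, and damping function are made-up assumptions for the sketch, not Pixee's actual scoring logic.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    cvss: float                      # static severity score
    reachable_from_untrusted: bool   # data flow reaches untrusted input
    internet_exposed: bool           # deployed service is externally reachable
    compensating_controls: int       # e.g. WAF rules, segmentation boundaries

def exploitability_rank(f: Finding) -> float:
    """Rank by what an attacker can actually weaponize, not CVSS alone.
    Hypothetical weights for illustration; a real model would be
    calibrated against deployment context."""
    if not f.reachable_from_untrusted:
        return 0.0                            # unreachable code path: triage out
    score = f.cvss
    score *= 2.0 if f.internet_exposed else 1.0
    score /= (1 + f.compensating_controls)    # each control damps priority
    return score

findings = [
    Finding("CVE-A", cvss=9.8, reachable_from_untrusted=False,
            internet_exposed=True, compensating_controls=0),
    Finding("CVE-B", cvss=6.5, reachable_from_untrusted=True,
            internet_exposed=True, compensating_controls=0),
]
ranked = sorted(findings, key=exploitability_rank, reverse=True)
# The "critical on paper" CVE-A drops below the genuinely reachable CVE-B.
```

The point of the sketch is the shape of the decision, not the numbers: a finding that is unreachable from untrusted input falls out entirely, and everything else is ranked by deployment context rather than by a static severity string.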

"Code produced by both humans and AI agents ships without consistent security review. Without LLM-driven review integrated into the pipeline, exploitable flaws reach production before defenders can find them."

Risk 7 · p. 17

PA 1 — CRITICAL

LLM-driven security review runs at PR-merge time across human-written and AI-generated code, ingesting from 12 native scanner integrations and any SARIF v2.1.0 producer. Findings are triaged before they reach a developer; remediable findings ship back to the same PR as a fix proposal. The pipeline never stops moving.

"Quarterly pen tests and reactive patching cycles cannot keep pace with continuous AI-driven discovery. Existing CVE/NVD infrastructure and patch prioritization workflows were built for dozens of critical CVEs per month, not hundreds."

Risk 9 · p. 17

PA 11 — CRITICAL

Pixee is the operational substrate the paper's VulnOps definition explicitly requires — "continuous discovery of zero-day vulnerabilities across your entire software estate, and automated remediation pipelines." Discovery feeds the pipeline; the pipeline produces merge-ready fixes; the fixes ship as PRs against your repository. This is the function VulnOps describes, running today, at named customers, since 2024.

Honest scope

The credibility lever: what we claim and what we don't.

The paper lists 11 Priority Actions. Pixee operationalizes 5 of them. The other 6 belong to other categories — agent security, network architecture, organizational governance, deception, asset management, AI workforce adoption — and we plug into them rather than competing with them. CISOs and Heads of AppSec told us directly that hearing a vendor claim "we cover everything in the Mythos-Ready plan" is the fastest way to lose the room. So we're going to say what we cover and what we don't, on the same page, before you ever scroll to a form.

What we operationalize

PA 1 — Point Agents at Your Code and Pipelines.

LLM-driven security review at PR-merge time, ingesting from 12 native scanner integrations plus universal SARIF fallback. In production at customer deployments since 2024.

PA 5 — Prepare for Continuous Patching.

Automated remediation generates context-aware pull requests that match codebase conventions, pass existing test suites, and respect dependency constraints. Built for the patch-flood the paper anticipates as Project Glasswing's 40-plus vendors begin disclosing.

PA 6 — Update Risk Models and Reporting.

Exploitability-based triage replaces severity-only scoring, surfacing the findings that actually map to a reachable attack path versus the findings that look critical on paper but live behind two segmentation boundaries.

PA 10 — Build an Automated Response Capability.

Pre-authorized remediation playbooks execute at machine speed without waiting for human triage to complete. The fix arrives in the developer's PR queue, ranked, contextualized, and ready to merge.

PA 11 — Stand Up VulnOps.

Pixee is the operational substrate the paper's VulnOps definition requires: continuous discovery feeding automated remediation pipelines, scoped to your specific code, deployments, and conventions.

What we don't claim

PA 2 — Require AI Agent Adoption

is workforce-wide AI enablement. Broader than AppSec; not our category.

PA 3 — Defend Your Agents

is the agent security category. Vendors in that space focus on prompt injection, agent identity, and runtime guardrails. Pixee plugs into the discovery layer; we don't compete on agent runtime defense.

PA 4 — Establish Innovation, Acceleration Governance

is a process discipline, not a tooling category. The right answer is a cross-functional security/legal/engineering committee, not a SaaS subscription.

PA 7 — Inventory and Reduce Attack Surface

is the attack-surface management category. We integrate with ASM tools as discovery sources; we don't replace them.

PA 8 — Harden Your Environment

is network segmentation, egress filtering, phishing-resistant MFA, and Zero Trust. Foundational and out-of-scope for an AppSec automation platform.

PA 9 — Build a Deception Capability

is the deception / canary / honey-token category. Different control class, different vendor space, complementary to remediation but not what Pixee does.

One slice of the PA 1 vendor space — autonomous vulnerability-discovery models — belongs to the upstream discovery category. We plug into that discovery layer; we don't compete on novel-vulnerability research. That's a feature of how we scope ourselves, not a hedge. The remediation pipeline is the bottleneck the paper's VulnOps definition was written to address, and that's the territory we operationalize.

Receipts

Verifiable proof. Six tiles. Click through to the source.

Every claim below links to a primary source — a Pixee blog post, a public scanner integration manifest, a CSA paper page, or a public GitHub PR. Nothing self-attested without a verifiable artifact behind it.

2024 In production since

Operating in production since 2024

Pixee has been running automated remediation in production at named customer deployments since 2024. The deployments precede the CSA paper by more than a year and the Mythos disclosure by sixteen months. This is not a roadmap claim.

Pixee customer story library
12 Native scanner integrations

Plus universal SARIF v2.1.0 fallback

GitHub CodeQL, Snyk (SAST + SCA), Semgrep, Checkmarx, Veracode, SonarQube, GitLab SAST, HCL AppScan, Synopsys Polaris, Aqua Trivy, DefectDojo, Datadog SAST. Each backed by a dedicated scanner-aware handler with first-class metadata extraction. Plus universal SARIF v2.1.0 ingestion for everything else (including Arnica SAST, Black Duck, Mend, and Anchore).

Full scanner integration matrix
76% Pull-request merge rate

Context-aware fixes that pass your tests

Context-aware fixes match your code's conventions, pass your existing tests, and respect your dependency constraints. The result is a merge rate categorically higher than what generic AI fix generators have published — and a merged fix is the only kind of fix that closes a finding.

Pixee blog: 76% Merge Rate — Why Purpose-Built Security Fixes Work
Up to 95% False-positive reduction

Three-tier exploitability triage

Three-tier triage (rule-based structured analysis, agentic analysis, and adaptive on-the-fly analyzer generation) eliminates findings that aren't reachable from untrusted input, aren't exposed past existing compensating controls, or aren't exploitable in the deployed configuration. The output is the small set of findings a human actually needs to look at.

Pixee blog: Triage Automation Playbook — From 2,000 Alerts to 50 Fixes
4 of 13 CRITICAL / HIGH risks mapped

Cited risk-register language, mapped to product

Four of the paper's CRITICAL and HIGH risks (Risks 4, 5, 7, 9) map directly to Pixee's PA 1 / 5 / 6 / 10 / 11 operational coverage. The risk-anchored coverage map above (Section 2) shows the cross-walk with paper-quote citations.

The AI Vulnerability Storm — full paper
Public GitHub PR history since 2023

Reviewable artifacts. No anonymization.

Pixee has been opening security pull requests against public open-source repositories since 2023. Every PR is a public artifact: bug class, language, scanner that flagged it, fix code, merge timeline. No legal redaction required. The PR list lives in Section 7 of this page.

Section 7 — See it in code

Mythos shift, in eight slides

The Mythos shift, in eight slides.

The eight-slide carousel below distills the operational reality CISOs are describing in private conversations: the velocity collapse, the volume increase, the asymmetry between attacker tooling and defender tooling, and the specific pieces of the Mythos-Ready Security Program that are bottlenecked on remediation rather than discovery. The deck is built from quote-walls (paper authors and named CISO reviewers) plus the operational receipts from Section 4. Swipe through it the way you'd swipe through a LinkedIn document post — it's identical to the version we deploy organically there.

How Pixee plugs into your stack

How Pixee plugs into your stack.

Six rows below; full matrix at pixee.ai/integrations. Pixee's "12 native + universal SARIF" claim isn't marketing language wrapped around generic ingestion — each native scanner has a dedicated handler that knows its rule taxonomy, severity scheme, and finding metadata, and normalizes them without losing scanner-specific context. SARIF-fallback covers everything else.

Scanner | Ingest method | Triage | Auto-remediation | Integration depth
GitHub CodeQL (GitHub Advanced Security) | Native API + SARIF | Yes — dedicated handler, rule-aware classification | Yes — full codemod + AI fix coverage | Native
Snyk (SAST + SCA) | Native API + SARIF | Yes — severity normalization + CVE-level analysis | Yes — full codemod + dependency-bump fixes | Native
Semgrep (Community + Pro) | Native API + SARIF | Yes — dedicated handler | Yes — full codemod + AI fix coverage | Native
Checkmarx | SARIF | Yes — dedicated handler | Yes | Native
Veracode | SARIF | Yes — dedicated handler | Yes | Native
GitHub-imported SARIF (any v2.1.0 producer) | SARIF v2.1.0 | Yes — universal triage path | Yes — coverage depends on bug class + language | SARIF-fallback
See full matrix → pixee.ai/integrations

If your scanner produces SARIF v2.1.0 — and most modern SAST and SCA tools do, including most internal scanners — Pixee can ingest it on the universal path today, no new code on your side. Send a sample SARIF file and we'll confirm ingestion within 24 hours.
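For teams checking whether their output qualifies, the universal path amounts to walking the standard SARIF v2.1.0 structure. The sketch below parses a fabricated sample document using only the schema's standard field paths (runs, tool.driver.name, results, message.text, physicalLocation); it is illustrative, not Pixee's ingestion code.

```python
import json

# Fabricated single-finding SARIF v2.1.0 document for illustration.
SAMPLE_SARIF = json.dumps({
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "example-scanner"}},
        "results": [{
            "ruleId": "py/hardcoded-credentials",
            "message": {"text": "Hardcoded credential detected."},
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "app/settings.py"},
                "region": {"startLine": 42},
            }}],
        }],
    }],
})

def ingest(sarif_text: str) -> list[dict]:
    """Normalize SARIF results into flat finding records:
    tool, rule id, message, file, and line."""
    doc = json.loads(sarif_text)
    findings = []
    for run in doc.get("runs", []):
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            loc = result["locations"][0]["physicalLocation"]
            findings.append({
                "tool": tool,
                "rule_id": result.get("ruleId"),
                "message": result["message"]["text"],
                "file": loc["artifactLocation"]["uri"],
                "line": loc["region"]["startLine"],
            })
    return findings

findings = ingest(SAMPLE_SARIF)
```

If your scanner emits documents shaped like this sample, the universal ingestion path applies.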

See it in code

Real PRs. Real repositories. Reviewable today.

The shortest path to verifying what Pixee actually does is to read the code we've shipped. Below are five to ten public pull requests Pixee has opened against open-source repositories — every one of them publicly reviewable, every one of them tagged by language, bug class, and merge status. No anonymization. No legal redaction. No customer dependency. If a PR claim doesn't hold up under inspection, the link breaks the claim.

Browse the public PR footprint: While the curated PR audit is being assembled, the live Pixee public GitHub footprint is browseable today at github.com/pixee. Every Pixee security PR opened against a public open-source repository is an OSS-licensed artifact: bug class, language, scanner that flagged it, fix code, merge timeline, reviewer comments. No anonymization. No legal redaction. No customer dependency.

See Pixee's public GitHub → github.com/pixee

The curated audit — a structured walk-through of 30 to 50 PRs across multiple languages, scanners, and bug classes, with merge timelines, reviewer comments, and side-by-side before/after diffs — ships as the Public PR Audit artifact in the Mythos Bundle later this quarter. The artifact stands without any anchor-paper citation; it's product proof, not authority borrow. Until then, the live GitHub footprint above is the primary verification path.

How we measure these numbers.

Our merge-rate and false-positive-reduction numbers are reported across 12+ active customer deployments. Specifically, the 76% merge rate is realized on native single-language deployments (CodeQL + Java/Spring), with a 70% floor on polyglot multi-scanner deployments. The "up to 95%" false-positive-reduction figure reflects the upper bound seen in CodeQL + Java/Spring cohorts; the median customer experience lands in the 70–95% range depending on scanner mix, language mix, and bug-class distribution. The full denominator definition, customer count, time window, and bug-class stratification publish with our Q3 2026 customer ROI study.

Walk the playbook

Walk the playbook in 30 minutes.

Bring your scanner stack and your top three vulnerability backlog pains. We'll walk through how Pixee triages and remediates against your specific tools, your specific bug classes, and your specific deployment shape. No slideware. No generic demo. Thirty minutes, your scanner output, mappable to action by the end of the call.

What you'll get on the walk-through

  • A live look at how Pixee triages findings from your specific scanner stack — CodeQL, Snyk, Semgrep, Checkmarx, Veracode, or universal SARIF.
  • A walk-through of remediation against your top three bug classes — and the merge-rate we'd realistically expect against them.
  • A concrete next step mapped to your deployment shape — repository count, scanner count, developer seat-license. No generic demo.

30 min · Bring scanner output · No slideware

Cited framework: The AI Vulnerability Storm — Building a Mythos-Ready Security Program. Cloud Security Alliance × SANS Institute × [un]prompted × OWASP GenAI Security Project. Original release April 12 2026. Last updated May 1 2026. Version 1.0. 29 pages. CC BY-NC 4.0. Read the full paper at the CSA artifact page.

Authors: Gadi Evron (CEO Knostic; CISO-in-Residence for AI, Cloud Security Alliance) and Robert T. Lee (Chief AI Officer / Chief of Research, SANS Institute). 17 contributing authors and 70+ named CISO reviewers including Heather Adkins (CISO Google), Phil Venables (Ballistic Ventures, ex-Google Cloud CISO), Jen Easterly (CEO RSAC, former Director CISA), Bruce Schneier (Inrupt; Harvard Kennedy School), Sergej Epp (CISO Sysdig), Joshua Saxe (Security Superintelligence Labs, ex-Meta AI Lead), Mike Johnson (CISO Rivian), Ross McKerchar (CISO Sophos), Tomas Maldonado (CISO NFL), Jason Woloz (CISO TransUnion), Yabing Wang (CISO & CIO Justworks), Josh Lemos (CISO lululemon), Julie Davila (VP Product Security GitLab), David B. Cross (CISO Atlassian), Peter Liebert V (CISO Salesloft), Zachary N. Brown (CISO FDIC).