AI in Cybersecurity Moves from Promise to Proof as WEF and KPMG Track Defender Gains

Artificial intelligence is moving from promise to proof in cybersecurity. Organizations using AI extensively in security have cut average breach costs by $1.9 million and shortened breach lifecycles by about 80 days, according to data anchoring a new World Economic Forum and KPMG white paper released Monday.

The report, “Empowering Defenders: AI for Cybersecurity,” lands at a moment when 94 percent of cyber leaders identify AI as the defining force in their field and 77 percent of organizations already deploy it operationally, figures drawn from the Forum’s Global Cybersecurity Outlook 2026. The shift it documents is concrete: AI is moving from pilot programs and pitch decks into measurable defensive performance across vulnerability detection, threat intelligence triage, phishing analysis, and incident response.

From Pilot to Production

“AI has the potential to shift the balance towards defenders,” said Akshay Joshi, head of the Centre for Cybersecurity at the World Economic Forum. “Organizations that treat it as a strategic capability, rather than a standalone tool, will be better placed to turn growing cyber risk into resilience and competitive advantage.”

The paper, produced in collaboration with KPMG and built on contributions from 105 representatives across 84 organizations and 15 industries, follows the Forum’s 2025 publication that warned about AI cybersecurity risks. The 2026 edition pivots from risk to deployment, examining 20 partner-submitted case studies from companies including Allianz, Aramco, Google, IBM, ING, Microsoft, Santander Group, and Standard Chartered. The metrics in those case studies are self-reported by the submitting organizations; KPMG itself also contributed a case study describing efficiency gains in its own threat intelligence operations.

Numbers Worth Knowing

The reported case-study metrics are notable, but they should be read as organization-submitted performance indicators rather than independently audited industry benchmarks.

IBM said its Autonomous Threat Operations Machine, or ATOM, launched in April 2025, handles about 95 percent of daily security investigations at the company’s managed security services arm, automating over 850 analyst hours each month and cutting end-to-end investigation time by 37 percent. Accenture deployed an AI capability called Agent Oliver across over 100,000 internet-facing sites; analysis time per site dropped from about 15 minutes to under one minute, a 93 percent reduction in manual effort. KPMG’s threat intelligence team reported a 25 percent increase in operational efficiency after introducing a custom AI model trained on its threat repository. Check Point Software’s Universe research platform compressed investigation cycles from about three weeks of manual effort to roughly one hour. Dream Group cut malware remediation guidance time by up to 95 percent.

Adversaries Are Applying Similar AI Capabilities

Attackers are using AI to conduct reconnaissance, generate malware, evade detection, and launch attacks at scale, compressing what once took weeks into minutes and lowering the technical barrier for entry-level operators, the report said. The defenders’ edge, the Forum argues, lies in proprietary internal data that attackers cannot match — context the AI can use to prioritize the risks that actually matter to a specific environment.

Those reported gains help explain why chief information security officer budgets are tilting toward AI even as governance uncertainty grows. According to ISACA’s State of Cybersecurity 2025, cited in the Forum’s January 2026 outlook, 53 percent of cybersecurity teams reported underfunding and 55 percent reported understaffing. AI is being positioned as the force multiplier that closes the operational gap.

“Attackers are moving faster and at greater scale than ever before. This report is a call to action for organizations to match that pace, with AI as a force multiplier for cyber defence,” said Laurent Gobbi, partner and global head of cyber and tech risk at KPMG.

What It Means for IG and eDiscovery

Adoption is uneven. Larger enterprises with greater technical maturity report higher AI-in-security usage, while small and medium businesses, governments, and non-governmental organizations lag because of financial constraints, skills gaps, and data immaturity, the report said. That split has direct consequences for mid-market law firms, regional managed service providers, and government cyber units that feed into legal-discovery and information-governance workflows. Firms that build AI-augmented incident response into their service catalog will compete differently for breach work; those that do not will face client and request-for-proposal pressure as enterprises raise the bar for managed cyber services.

For information governance and eDiscovery professionals, the deployment patterns documented in the paper map directly onto compliance obligations. ING said its machine learning data leakage prevention pipeline has processed 5 million alerts and lifted analyst precision by 20 percent across over 60,000 employees, throughput that helps teams meet the timelines of the General Data Protection Regulation, the EU’s Digital Operational Resilience Act, and the U.S. Securities and Exchange Commission’s Item 1.05 four-business-day breach disclosure rule without expanding headcount. Cybervergent’s agentic AI monitors source code exfiltration and “shadow AI” leaks of proprietary content into public model training sets, a vector that touches trade-secret protection and litigation-hold integrity. Across multiple case studies, AI tools generate audit-ready documentation, traceable evidence trails, and standardized reporting, outputs that information governance and legal-operations teams need for chain-of-custody integrity in regulatory investigations.

Reliance Risk and the Agentic AI Horizon

The report is candid about reliance risk. “Heavy reliance on AI can undermine cyber resilience,” the authors said, recommending that security teams combine AI with human judgment, simulate AI failures, and design fail-safes that keep operations functional during AI outages. Talent gaps remain a structural drag: 54 percent of organizations identify a shortage of skilled talent as the primary barrier to AI adoption, and Sophos research cited in the report found that 76 percent of cybersecurity professionals reported exhaustion in 2025.

About 88 percent of enterprises are actively investing in AI agents, the report said, citing KPMG’s Global Tech Report 2026, and Gartner forecasts that by 2028, about 15 percent of day-to-day work decisions will be made autonomously by AI agents. The Forum sketches a four-level autonomy spectrum — from AI that summarizes alerts under full human oversight to “human-out-of-the-loop” agents that autonomously coordinate distributed denial-of-service mitigation, with supervisor agents validating actions against security policy. The choice between levels, the report said, hinges on the reversibility and risk of the action: high autonomy for low-stakes reversible decisions, human-in-the-loop for actions with lasting consequences.
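The reversibility-and-risk rule the Forum describes can be sketched as a simple policy gate. The level names, risk thresholds, and function below are illustrative assumptions for discussion, not the Forum’s specification:

```python
from enum import Enum

class Autonomy(Enum):
    """Illustrative four-level autonomy spectrum (names are assumptions)."""
    SUMMARIZE_ONLY = 1         # AI summarizes alerts; humans decide everything
    HUMAN_IN_THE_LOOP = 2      # AI proposes actions; a human approves each one
    HUMAN_ON_THE_LOOP = 3      # AI acts; a human monitors and can intervene
    HUMAN_OUT_OF_THE_LOOP = 4  # AI acts autonomously, e.g. DDoS mitigation

def autonomy_level(reversible: bool, risk: float) -> Autonomy:
    """Gate agent autonomy on reversibility and a 0.0-1.0 risk score.

    High autonomy is reserved for low-stakes, reversible actions; anything
    with lasting consequences keeps a human in the loop.
    """
    if not reversible:
        return Autonomy.HUMAN_IN_THE_LOOP
    if risk < 0.2:
        return Autonomy.HUMAN_OUT_OF_THE_LOOP
    if risk < 0.6:
        return Autonomy.HUMAN_ON_THE_LOOP
    return Autonomy.HUMAN_IN_THE_LOOP

# Blocking one suspicious IP is reversible and low risk
print(autonomy_level(reversible=True, risk=0.1).name)   # HUMAN_OUT_OF_THE_LOOP
# Wiping a production host is irreversible
print(autonomy_level(reversible=False, risk=0.9).name)  # HUMAN_IN_THE_LOOP
```

In a real deployment, a supervisor agent of the kind the report describes would sit on top of a gate like this, validating each proposed action against security policy before it executes.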

That governance bar is where information governance, privacy, and legal operations teams enter the picture directly. Agentic AI introduces an expanded attack surface, the potential for unintended cascading behaviors across multi-agent environments, and governance gaps where agents are deployed without approval, the authors said. The Forum points to its companion paper, “AI Agents in Action: Foundations for Evaluation and Governance,” as a controls reference.

The Takeaway

For cyber, information governance, and eDiscovery leaders, the next steps are concrete. Build a clear AI strategy, validate use cases through structured pilots with go/no-go criteria, and choose a build, buy, or hybrid model based on whether the capability is a strategic differentiator or a commodity utility. Scale only what demonstrates measurable benefit, and ensure the governance perimeter — including human-in-the-loop checkpoints — keeps pace with the autonomy granted to the system.

How will your organization decide where AI takes the wheel — and where a human stays in the loop?

Assisted by GAI and LLM Technologies

Source: HaystackID published with permission from ComplexDiscovery OÜ
