The Audit AI Arms Race: Who Keeps Watch While the Robots Work?
- Yiwang Lim
- Jun 14, 2025
- Updated: Apr 28

- The Big Four are deploying generative and agentic AI into live audit workflows in 2026, with EY and KPMG both updating core platforms this month.
- The UK's FRC became the first audit regulator globally to publish guidance on generative and agentic AI in audit, in late March 2026. But guidance is not rules, and codification is still nascent.
- The PCAOB, its US counterpart, has had its 2026 budget cut by 9.4%, its enforcement output has collapsed, and there is live legislative risk of outright abolition, at precisely the moment oversight matters most.
What happened
In April 2026, the Financial Times reported that EY is rolling out a revamped version of its Canvas audit platform featuring AI-powered risk assessment, real-time accounting guidance prompts, and automated work-paper pre-population. KPMG simultaneously disclosed it is piloting "orchestration" AI agents on its Clara platform — systems designed to co-ordinate other AI tools so human auditors can concentrate on judgement-intensive work. The timing is not a coincidence: both announcements follow the FRC publishing, on 27 March 2026, the world's first regulatory guidance specifically covering generative and agentic AI in audit engagements.
Context & data
- The AI-in-accounting market was valued at $6.68 billion in 2025, with growth running at approximately 70% year-on-year.
- The Big Four's stated investment commitments include Deloitte targeting $3 billion by 2030, KPMG $5 billion, and PwC $1.5 billion.
- The FRC's March 2026 guidance is the first published by any audit regulator globally covering generative and agentic AI, and is the FRC's second piece of AI-in-audit guidance overall. Crucially, it introduces no new regulatory requirements; it codifies good practice and provides a framework for future rulemaking.
- The SEC approved a $362 million budget for the PCAOB for 2026, a 9.4% reduction from the prior year, alongside a 52% cut to the chair's salary and an 18.4% reduction in the accounting support fee levied on public companies.
- PCAOB and SEC enforcement actions against auditors fell 33% in 2025 to 39 actions, with monetary sanctions declining 66% to $17.9 million; the PCAOB's 2026 budget also reflects a 15% reduction in funding for its enforcement division compared to 2025.
My take
Through a private-equity lens, the investable angle is clear enough: audit technology is compressing the labour-intensity of a highly recurring, contractually sticky professional services workflow. Full-population testing replacing random sampling, AI pre-filling work papers, and agentic orchestration eating the most commoditised junior tasks all point in the same direction: labour costs come down, throughput per partner goes up, and gross margins eventually expand. PwC cut 5,600 staff in the first half of 2025 and pushed its total workforce below 365,000, explicitly linking the reduction to AI-driven productivity rather than demand weakness. That is not anecdote; it is a structural margin story playing out in real time. The vendors supplying the underlying platforms, whether proprietary Big Four builds or third-party audit-tech providers, sit in an increasingly high-switching-cost position, which is exactly what you want to own.
The regulatory dimension is where I get more cautious. The FRC's principles-based framework is deliberately light-touch, which buys firms room to innovate but leaves the question of what constitutes adequate AI oversight largely unanswered in practice. The US situation is more troubling. The House Financial Services Committee voted in April 2025 to advance a bill that would abolish the PCAOB entirely, transferring its responsibilities to the SEC. That provision was ultimately stripped out, but the signal is clear: the watchdog is weakening precisely as the technology it needs to understand is accelerating fastest. For investors in companies whose accounts are audited by AI-assisted processes, the risk isn't that AI makes audits worse; the evidence so far suggests the opposite. The risk is that a defanged regulator won't catch it when something goes wrong, and audit failures at scale are expensive, as Enron and Wirecard remind us.
Risks & watch-list
- Regulatory vacuum in the US. With PCAOB enforcement already down two-thirds by sanctions value and its budget shrinking, the gap between what AI can do in audit and what a regulator can meaningfully scrutinise is widening fast. Any high-profile AI-assisted audit failure lands squarely in that gap.
- Hallucination and accountability diffusion. The FRC's three-risk framework (wrong output, misinterpreted output, insufficient work) is sensible, but the mitigations remain qualitative. Until standards specify what "sufficient checking" looks like for an LLM-generated work paper, firms face audit-on-audit risk.
- Cross-border regulatory divergence. The EU AI Act's requirements around data, transparency, and risk management create divergent compliance obligations for firms operating globally, adding opex drag for multinationals and potential competitive asymmetry between US and European audit teams.
- Talent displacement without quality replacement. If the Big Four are cutting graduate intakes while agentic AI is still error-prone on edge cases, there is a real risk that the human judgement layer thins out before the AI is reliable enough to compensate. That is a slow-build quality risk, not a near-term earnings one.