Designing Trustworthy Public-Private Verification Frameworks for AI Governance
Analysis from a Full-Day Workshop on Independent Verification Organizations (IVOs)

Key Summary
In February 2026, Fathom convened a full-day workshop at the International Association of Safe and Ethical AI (IASEAI) Conference in Paris to advance the practical development of Independent Verification Organizations (IVOs), an outcomes- and market-based approach to AI governance.
The morning workshop opened with a keynote from Dr. Gillian Hadfield, a key architect of regulatory markets for artificial intelligence governance, followed by two expert panels discussing the state of technical AI assurance and market incentives to grow the ecosystem. All conversations explored a central question:
How do we build a governance infrastructure for AI that actually works—that keeps pace with the technology, produces meaningful safety outcomes, and scales beyond what governments can do alone?
The full report includes findings from a tabletop simulation, phase-by-phase analysis, and four workstreams for bringing IVOs from framework to reality.
“The tools available today are the best they have ever been and the worst they will ever be.”
-Dr. Dylan Hadfield-Menell, MIT
AI has disrupted traditional approaches to technology regulation. Governments lack the speed and technical expertise to adequately regulate this technology alone; further, command-and-control governance tends to focus on procedural compliance, rather than actual outcomes. Meanwhile, industry self-regulation fails to achieve desired societal outcomes, as industry is not incentivized to consistently put the public interest first. This gap in governance becomes increasingly problematic as the pace of AI capabilities, and uncertainty around their risks, surges.
The morning’s events surfaced a consensus among participants that a new approach, one centered around competitive regulatory markets of independent verifiers, is needed. Under the IVO framework, governments set outcome-based requirements around AI safety and security, and then license independent, expert-led organizations (IVOs) that verify that AI products meet those outcomes. Incentives for participation by the developers could include competitive market advantage, legal certainty and tort protection, and insurance coverage.
“Impact should be the only layer we care about [in evaluations].”
-Dr. Gemma Galdon-Clavell, Eticas
Crucially, because IVOs compete to provide assurance services, the system drives a race to the top on evaluations. As multiple panelists noted, current AI evaluation practices are incomplete. Typical AI evaluations test systems on a handful of prompts, while real-world failures emerge over long interactions, sometimes hundreds of turns, where extended context can condition model behavior in ways that diverge significantly from how the system performed during development testing. Moreover, evaluations tend to test for the harms and problems we know exist (and that we know how to measure), rather than real-world impacts we don't fully understand. The IVO system would help catalyze the development and use of evaluations tied to the real-world outcomes that democratically accountable governments want to see around AI.
However, the path to a functioning IVO ecosystem is still being paved. The technical tools for meaningful AI assurance are advancing rapidly: panelists noted that the pipeline from qualitative assessment to quantitative measurement is better than it has ever been, and that AI-assisted evaluation will further accelerate those capabilities. But the market, while growing, has not yet materialized at scale. As panelists observed, industry demand for independent assurance services is not yet sufficiently strong, largely due to lack of visibility. At the same time, venture capital is hesitant to invest in a market that hasn't demonstrated scalability. Realizing the potential of the evaluations ecosystem, a potentially multi-hundred-billion-dollar industry, requires building new institutions and markets.
“AI may be the most interesting governance technology ever invented.”
-Dean Ball, Former White House Office of Science & Technology Policy
This challenge, bringing IVOs from framework to governing reality, was the focus of the afternoon session. How will IVOs actually operate? What will the interaction between regulators, developers, deployers, and verifiers look like in practice? What happens when something goes wrong? As legislation to authorize the IVO system advances in the states, and work to grow the evaluators ecosystem and bring it together with deployers gains momentum, the afternoon's tabletop simulation was designed to dig into the realm of the practical, surfacing the tensions and design choices that will determine whether, and when, this vision can become a functioning system.
Read the full report for detailed findings on licensing requirements, modes of engagement, crisis simulation results, and next steps for the IVO ecosystem.