Congressional Briefing: Innovation and the AI Assurance Marketplace

Rayburn House Office Building, Washington, D.C.

Overview

AI raises fundamentally new governance questions. Unlike static, deterministic systems, AI models change with updates and fine-tuning, and their outputs vary across prompts, contexts, and real-world deployment. Overseeing them requires continuous, adaptive monitoring rather than fixed, one-time compliance checks.

Today’s oversight landscape is still emerging. Across jurisdictions, awareness of independent AI assurance remains limited, and the ecosystem needed to support it at scale is still developing. This challenge is compounded by a significant resource gap: the AI assurance field remains dramatically under-invested relative to analogous industries.

Effective governance will require both the right incentives and the institutional infrastructure to support AI assurance as a core mechanism. Assurance providers would benefit from operating in a marketplace that ties their work to real-world, democratically rooted outcomes, rewards rigor and continuous improvement in assurance methods and practices, and connects them to AI deployers, developers, and other customers who rely on trusted assessments.

We cannot wait for failure. AI policy has the opportunity to build assurance capacity now, before a crisis forces reactive measures.

The Briefing

In December 2025, an AI assurance coalition and working group, organized by Fathom, brought together leading voices in AI safety, governance, and policy for a Congressional briefing on the future of independent AI evaluation. Hill staff from multiple Congressional committees joined federal agency representatives and private sector leaders to explore how an AI assurance marketplace can support both innovation and public trust.

The briefing demonstrated strong bipartisan interest in practical, market-driven approaches to AI governance—and reinforced that the ecosystem the AI assurance coalition and Fathom are helping to shape not only exists, but has real momentum.

“Independent evaluation is perhaps the most efficient intervention we’ve identified. Companies move very quickly, and they respond very quickly to incentives. If you can clearly identify what it means to be safe, they’re very likely to be able to achieve that.”

— Conrad Stosz, Head of Governance, Transluce

Key Takeaways

  • Safety and innovation are not a tradeoff. Panelists emphasized that independent evaluation strengthens AI systems and improves their reliability, clearing the way for faster development and broader adoption without sacrificing trust or security. 

  • Independent assessors already exist. A growing ecosystem of organizations is already conducting real-world evaluations of deployed AI systems, demonstrating that external assessment is both feasible and underway.

  • The AI assurance ecosystem is dramatically under-resourced. As Brundage noted, the PCAOB alone employs approximately 800 staff to oversee financial auditing—likely more than the entire AI assurance ecosystem today.

  • Open methods—not open models—offer a practical path to accountability. Transparency in how AI systems are evaluated allows for oversight without requiring companies to release sensitive models or capabilities.

  • Incentive design remains the central challenge. Evaluations are widely seen as the best available tool for AI governance, but existing incentives discourage participation, transparency, and investment. 

  • The long-term goal is predictable, shared evaluation standards. Developers should know in advance the set of trusted evaluations their systems face, and deployers should know which assurance providers they can trust to tell them what is safe, reliable, and secure.

“The leading companies often score the strongest on both capability benchmarks and safety benchmarks. Trust unlocks adoption.”

— Miles Brundage, AI Researcher & Founder, AVERI

What This Accomplished

The briefing strengthened Congress’s understanding of practical AI assurance by:

  • Elevating independent AI assurance as a credible policy option for Congress, demonstrating that it is a practical, market-driven solution rather than an abstract concept.

  • Demonstrating bipartisan appetite for approaches that align safety, innovation, and enterprise leadership — rather than defaulting to heavy-handed mandates.

  • Bringing enterprises, assurance providers, civil society, and policymakers into direct dialogue, building early alignment before policy positions harden.

  • Establishing a clear opportunity for industry leaders to help shape emerging governance frameworks rather than reacting after they are set.

“We are seeing more demand now than a year ago. The industry has realized: no one’s going to come tell us what to do, but we need to do something. This is a risk, and we don’t do well with risk.”

— Dr. Gemma Galdon Clavell, Founder & CEO, Eticas

The Path Forward

The December Hill briefing was one step in Fathom’s broader effort to build the infrastructure for trustworthy AI in key policy corridors. We’re convening enterprises, nonprofits, and assurance providers who share our belief that independent AI assurance can transform safety from a compliance burden into a competitive advantage.

If you’re building, deploying, or evaluating AI systems—and want to learn more about how independent assurance can strengthen safety and trust in practice—we’d like to connect.

Who Participated

Panelists:

Miles Brundage
AI Researcher & Founder, AVERI
(formerly Head of Policy Research, OpenAI)

Dr. Gemma Galdon Clavell
Founder & CEO, Eticas

Conrad Stosz
Head of Governance, Transluce
(formerly Head of Policy, U.S. AI Safety Institute at NIST)

Moderated by Bri Treece
Co-Founder & President, Fathom

Congressional Committees Represented:

House Judiciary; House Science, Space & Technology;
House Select Committee on China; House Veterans Affairs;
House Financial Services; Senate HELP;
Senate Commerce, Science & Transportation

Agencies & Organizations

Learn more: fathom.org

Get in touch: info@fathom.org

Fathom is a 501(c)(3) nonprofit building the global architecture for the AI century.



Independent.
Nonpartisan.
Nonprofit.

Fathom is a 501(c)(3) organization funded by philanthropists. We do not take donations from corporations, including frontier labs and the FAANG companies, or foreign entities associated with countries of concern.
