Approach

Fathom is an independent nonprofit whose mission is to find, build, and scale solutions that foster public trust in AI, while unleashing the power of American innovation.

Who We Are

Fathom is a new type of organization that exists to build the global architecture for the AI century: to help societies in America and around the world navigate the transition to a world where AI is fully integrated into how we live, work, and govern ourselves.

We do this by developing and scaling governance, technical, and policy solutions that increase AI trust, accountability, and beneficial adoption. And we work to expand the table where decisions about AI are made: ensuring that those most affected by this technology have a voice in shaping it. While we conduct and support research, our focus is on action.

The new bargains we help forge – between governments and citizens, between markets and workers, between innovation and security – are how we will reach a future that benefits everyone, not just those building or deploying the technology. That means developing the policies, institutions, and frameworks that can rewrite the social contract for the AI age, just as previous generations built labor protections, social insurance, and public education to stabilize earlier technological transitions.

Our Work

Fathom’s first major initiative targets AI governance – the most urgent and underdeveloped piece of the puzzle.

AI has broken the traditional approach to technology regulation. The pace of development far outpaces the speed of oversight, and the complexity of the technology exceeds the technical capacity of most government regulators. Meanwhile, the private sector is not sufficiently incentivized to fully align its interests with those of the public. In short, neither government nor industry, acting alone or together, is equipped to meet this moment.

In our first year, Fathom has begun building and scaling a system of Independent Verification Organizations, or IVOs: a practical, adaptable model of AI governance designed to make AI products safer and more trustworthy. Under this framework, governments set outcomes for safety, privacy, security, accuracy, and other public-interest goals. A marketplace of independent, accredited evaluators then verifies whether AI products meet those goals. Products that pass earn a competitive market advantage and legal certainty: a rebuttable presumption of compliance or evidentiary support in litigation.

This approach can address a wide range of harms: self-harm and suicide risk, data privacy and security, content safety, accessibility, regulatory compliance, controllability, interpretability, and more. It can also help prevent the tort nightmare that is building as AI products cause harm and courts struggle to assign liability. But beyond mitigating specific risks, independent verification creates something more fundamental: a foundation of safety that enables trust – and trust enables adoption.

Fathom’s proposed model is gaining traction. Legislation has been introduced in California, Ohio, and Virginia, with additional states, red and blue, preparing to follow. We are working with partners in the UK, EU, and other jurisdictions to scale the model internationally. And while legislation is necessary to fully catalyze this market, we are simultaneously working with evaluation providers and demand-side organizations to build the infrastructure and capacity this system requires.

Where We Go From Here

Governance is foundational – but it is only the beginning.

As Fathom scales our governance work, we are also studying longer-horizon challenges. What does the advent of highly capable AI actually mean for labor markets, for education, and for democratic participation? Where are the gaps in current thinking? What interventions could make a difference, and at what scale? What principles should guide us as we pursue policy and technical interventions? What trade-offs are we, as a society, comfortable making?

We study and operationalize principles for a good AI transition, and we lead research and action workstreams on the AI-related priorities and concerns of Americans and people around the world, including the future of work and education, national security, and civic and institutional trust.

Independent.
Nonpartisan.
Nonprofit.

Fathom is a 501(c)(3) organization funded by philanthropists. We do not take donations from corporations, including frontier labs and the FAANG companies, or foreign entities associated with countries of concern.
