What the EU AI Code of Practice Gets Wrong About AI Governance
The Merits of Outcomes-Based Regulations
When the European Commission published the final version of its voluntary EU AI Code of Practice in July, many policymakers and commentators in America panned the effort as yet another example of burdensome European regulatory overreach.
Related Insights
Research: Designing Trustworthy Public-Private Verification Frameworks for AI Governance (Analysis from a Full-Day Workshop on Independent Verification Organizations (IVOs))
Events: IASEAI Annual Conference 2026, Feb 24-26, 2026
Research: The Ashby Workshops: 2026 Report (Designing Our Collective AI Future: Insights from the leaders shaping the future of AI across sectors and disciplines)
Events: The Ashby Workshops 2026, Feb 2-4, 2026
News: Economic Times: AI safety hangs in balance as India rushes in
Events: India AI Impact Summit 2026, Feb 16-20, 2026
In the past two years, new nonprofit groups have emerged to address AI risks. Organisations like Fathom, Current AI, and IASEAI work with governments and industry to create safeguards. As AI spreads, safety is becoming a core concern, with researchers and policymakers joining efforts to manage its impact.
Independent. Nonpartisan. Nonprofit.
Fathom is a 501(c)(3) organization funded by philanthropists. We do not take donations from corporations, including frontier labs and the FAANG companies, or foreign entities associated with countries of concern.