The Governance Gap in Companion Chatbots
How do you put guardrails on a system that isn’t failing—but working exactly as intended?
In late March, OpenAI indefinitely shelved its planned "adult mode" for ChatGPT. Sam Altman had pitched the feature in October as "erotica for verified adults." By the time it was pulled, internal advisers had reportedly warned that the company risked building a "sexy suicide coach," testing had failed to filter out illegal content categories, and age-verification systems were showing error rates above ten percent, which at ChatGPT's scale would expose millions of minors.
OpenAI framed the move as a business decision, part of its strategic refocus on enterprise and coding. But nothing, no regulatory infrastructure or policy, would prevent OpenAI from revisiting that decision when priorities shift, or prevent another company from launching something similar. That is a structural governance gap, and it points to a harder question than the one most safety conversations focus on.