An 18-month New Yorker investigation has uncovered significant contradictions between OpenAI CEO Sam Altman's public advocacy for AI regulation and his private lobbying efforts to weaken safety measures. The investigation reveals that Altman called for stricter AI oversight even as he worked behind closed doors to loosen regulatory constraints on his own company.
Key Takeaways
- 18-month investigation documents Altman's contradictory public-private positions on AI safety regulation
- OpenAI pursued $7 billion in funding from Gulf states while publicly supporting democratic AI governance
- Findings of the external governance probe tied to the November 2023 board crisis remain sealed, leaving key questions unresolved
The Public-Private Contradiction
The investigation documents a pattern of Altman presenting himself as a responsible AI advocate in public forums while privately working to undermine the very regulations he endorsed. According to sources familiar with OpenAI's lobbying activities, Altman testified before Congress in May 2023 calling for AI licensing requirements, then promptly lobbied against specific implementation measures. This dual approach allowed OpenAI to maintain its reputation as a safety-conscious leader while preserving maximum operational flexibility.
Internal documents reviewed by The New Yorker show Altman's team actively worked to water down proposed AI safety standards that would have required more rigorous testing of large language models. The lobbying efforts specifically targeted provisions that would mandate third-party audits of AI systems before public release, a requirement Altman had publicly supported just months earlier.
Gulf Funding and Democratic Values
Perhaps most striking is OpenAI's pursuit of funding from Gulf sovereign wealth funds while Altman publicly championed AI development aligned with democratic principles. The investigation reveals OpenAI sought $7 billion from Middle Eastern investors, holding discussions with Saudi Arabia's Public Investment Fund and sovereign wealth entities in Abu Dhabi. These conversations occurred as Altman testified about the importance of keeping AI development within democratic frameworks and preventing authoritarian influence over AI systems.
The apparent contradiction extends beyond mere funding sources. Sources close to the negotiations indicate OpenAI was prepared to offer Gulf investors significant input into the company's AI safety policies and deployment strategies. This would have given authoritarian governments indirect influence over systems Altman described as crucial to preserving democratic values and human rights globally.
"Sam has become incredibly sophisticated at saying exactly what different audiences want to hear, often within the same week." — Former OpenAI executive familiar with the lobbying strategy
The Gulf funding ultimately failed to materialize, though not because of concerns about democratic governance. Instead, investors pulled back after OpenAI's board crisis in November 2023 raised questions about the company's internal stability and Altman's leadership style.
The Mysterious Post-Firing Investigation
The investigation also sheds new light on the circumstances surrounding Altman's brief firing and reinstatement in November 2023. While the board publicly stated that Altman had not been "consistently candid" in his communications, The New Yorker reveals these concerns extended far beyond communication style. The board had commissioned an external investigation into Altman's business practices, including potential conflicts of interest related to his outside investments and his management of OpenAI's partnership negotiations.
This external probe, conducted by WilmerHale, examined whether Altman had properly disclosed his financial interests in companies that stood to benefit from OpenAI's technology development. The probe also looked into his role in structuring deals that might have personally benefited him while potentially disadvantaging OpenAI shareholders. Key findings remain sealed, contributing to ongoing uncertainty about OpenAI's corporate governance.
The board's decision to fire Altman appears to have been triggered not by a single incident but by a pattern of behavior that members felt compromised their ability to meet their fiduciary responsibilities. However, the rapid employee revolt that led to Altman's reinstatement meant these concerns were never publicly addressed or resolved.
Regulatory and Industry Implications
The revelations come at a critical moment for AI regulation globally. European regulators are implementing the EU AI Act, while US lawmakers debate comprehensive AI oversight frameworks. Altman's contradictory positions complicate efforts to develop coherent regulatory approaches, as policymakers struggle to distinguish between genuine industry input and strategic misdirection.
Industry analysts note that OpenAI's approach reflects broader tensions within the AI sector between the need for safety oversight and competitive pressures. As we explored in our recent analysis of OpenAI's IPO challenges, the company faces mounting pressure to demonstrate sustainable business models while maintaining its safety-first public image.
The investigation also raises questions about the effectiveness of current corporate governance structures in the AI industry. OpenAI's unique structure, with its nonprofit board overseeing a for-profit subsidiary, was designed to prioritize safety over profits. However, the events surrounding Altman's firing suggest this structure may be inadequate for managing the complex conflicts inherent in leading AI development.
What Comes Next
The New Yorker investigation is likely to intensify regulatory scrutiny of OpenAI's business practices and internal governance. Congressional committees are reportedly considering new hearings focused specifically on AI industry lobbying practices and potential conflicts between public statements and private actions. The Federal Trade Commission has also signaled interest in examining whether AI companies' public safety commitments amount to enforceable representations to consumers.
For OpenAI, the immediate challenge is rebuilding trust with stakeholders who feel misled by the gap between Altman's public positions and private actions. The company's planned IPO, already complicated by questions about its corporate structure, faces additional uncertainty as investors weigh reputational risks against growth potential.
The broader AI industry must also grapple with the precedent set by OpenAI's approach. As other companies develop their own regulatory strategies, they face pressure to avoid the appearance of duplicity while still protecting their competitive interests. The investigation serves as a warning that the gap between public advocacy and private lobbying is unlikely to stay hidden for long.