
OpenAI is not a typical company. It was founded as a nonprofit in 2015, premised on the prediction that AI would eventually match human intelligence in almost every domain. That prospect motivated an organization devoted to benefiting everyone, not just shareholders.

As AI progressed rapidly over the following years, OpenAI adopted a “capped-profit” structure in 2019, allowing it to attract investment while remaining bound to its original nonprofit mission.

Around this time, I joined OpenAI as a junior researcher. My team focused on reinforcement learning, in which AI systems learn and improve through trial and error in simulated environments. We first applied the method to video games and later to large language models, using human feedback to develop early versions of ChatGPT. The same techniques are still used today to train AI systems relied on by millions.
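For readers curious about the mechanics, here is a minimal, purely illustrative Python sketch of the trial-and-error idea: a one-parameter policy chooses between two canned replies and is nudged toward the one a simulated “human” prefers. The candidate replies, the feedback function, and the update rule are hypothetical simplifications for exposition, not OpenAI’s training code; production systems use large neural networks, learned reward models, and policy-gradient algorithms at vastly greater scale.

```python
import math
import random

CANDIDATES = ["helpful reply", "unhelpful reply"]

def human_feedback(reply: str) -> float:
    """Simulated preference signal. In real RLHF this comes from a
    reward model trained on comparisons collected from human labelers."""
    return 1.0 if reply == "helpful reply" else -1.0

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def train(steps: int = 2000, lr: float = 0.1) -> float:
    """Nudge a one-parameter policy toward the preferred reply."""
    theta = 0.0  # policy parameter; P(helpful reply) = sigmoid(theta)
    for _ in range(steps):
        p = sigmoid(theta)
        picked_helpful = random.random() < p  # trial: sample a reply
        reply = CANDIDATES[0] if picked_helpful else CANDIDATES[1]
        reward = human_feedback(reply)        # feedback on the attempt
        # REINFORCE-style update: raise the log-probability of actions
        # that earned positive feedback, lower it otherwise.
        grad_log_p = (1.0 - p) if picked_helpful else -p
        theta += lr * reward * grad_log_p
    return sigmoid(theta)

if __name__ == "__main__":
    print(f"P(helpful reply) after training: {train():.3f}")  # approaches 1.0
```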

My team’s work, and my own, relied on a promise OpenAI made in the 2019 transition: a legal obligation to prioritize the public’s interests over those of investors. That commitment is now at risk, as OpenAI considers a restructuring plan that could eliminate the limits on investor profits and weaken its obligations to its charitable mission.

It’s easy to be cynical and assume that the shift from mission-driven nonprofit to large tech company was inevitable once substantial money was involved. But that cynicism lets the company off the hook for going back on its promises to the public. Moreover, OpenAI’s restructuring plans are facing scrutiny from government officials, which gives the public both a reason and the right to defend its interests.

I still have stock in OpenAI, but even with this financial stake, I believe the public’s interests must be protected. Before I left in 2023, product releases were already happening quickly, and the pace appears only to have increased; employees have voiced concerns about the potential consequences. This hurried approach has led to releases being flagged for “fueling anger, urging impulsive actions, or reinforcing negative emotions,” while decision-makers face financial incentives that do not fully account for such harms.

The responsibility for ensuring OpenAI sticks to its charitable mission falls to its nonprofit board of directors. Unfortunately, many argue that the current board lacks the necessary independence and resources to effectively do its job. Since 2019, OpenAI’s commercial operations have grown from nothing to billions in annual revenue. However, the nonprofit still lacks its own independent staff, and its board members are too busy with their own companies or academic work to provide adequate oversight. To make matters worse, the proposed restructuring threatens to weaken the board’s authority when it needs to be strengthened.

But there’s a better path forward. Before any restructuring happens, the board should immediately hire a nonprofit CEO to build an independent team that is free from financial conflicts and accountable only to the board. This team would support the board’s oversight duties and could eventually take on several important functions.

First, the nonprofit could evaluate executive performance based on the organization’s charitable mission. The board could use these reviews to set executive compensation, helping to align incentives at the top of the company.

Second, the nonprofit could give the board independent expertise on safety and security. It could review internal safety tests conducted under the company’s “preparedness” framework, as well as external safety tests and audits of the company’s security measures. Informed by summaries of these reviews, the board could then approve frontier model deployments.

Third, the nonprofit could improve transparency. By maintaining its own communication channel with the public, it could keep people informed about the company’s safety and security practices, significant changes to internal policies or model specifications, and new features that raise public concerns. It could also conduct and publish its own analyses of safety incidents and manage the internal whistleblower hotline.

Finally, the nonprofit could oversee activities where profit motives are likely to conflict with the public interest, such as the company’s global affairs work. It could also use its substantial financial resources (derived from its majority stake in the company) to award grants that support both beneficial AI applications and risk mitigation efforts.

As AI continues to advance rapidly, and with no significant federal regulation on the horizon, an empowered nonprofit board is more crucial than ever. The nonprofit could not only oversee OpenAI but also serve as a model for others: transparency standards and third-party reviews piloted at OpenAI, for example, could become the basis for future regulation.

OpenAI’s upcoming decisions will shape the company’s direction for years to come. Rather than abandoning its commitment to the public interest, the company could step back and reaffirm that commitment by strengthening the nonprofit board’s ability to oversee it.

OpenAI’s nonprofit soul can still be saved, but it may require the public to voice its concerns as the organization’s rightful beneficiary.
