OpenAI does about-face, mulls change to AGI provision meant to curb big tech dominance

OpenAI is reportedly considering removing a clause in its agreement with Microsoft that would limit the tech giant’s control over the company if it achieves artificial general intelligence (AGI).

When Microsoft invested in OpenAI in 2019, the two parties agreed that should OpenAI succeed in creating AGI, Microsoft’s access to OpenAI’s technology, data, and model weights would be revoked. This provision was included to reflect OpenAI’s original vision of developing AGI in a way that wasn’t dominated by big tech corporations.

However, the company’s direction has shifted over the past few years. OpenAI transitioned from a non-profit to a for-profit entity, and CEO Sam Altman, who initially didn’t take a stake in the business, is now poised to own as much as 7% of a company valued at $150 billion. Microsoft has invested about $13 billion in OpenAI, primarily in the form of cloud credits, which have been used to train and run OpenAI’s models.

As OpenAI’s computational needs continue to grow, so do its financial requirements. The company has said it needs 5 GW data centers to train its future models, a project that could cost upwards of $100 billion. Raising such funds, whether from Microsoft or other tech giants, could be challenging with the AGI safety clause still in place, prompting OpenAI to consider removing it, according to the Financial Times.

Altman acknowledged the shift in the company’s priorities during a recent conference. “When we started, we had no idea we were going to be a product company or that the capital we needed would turn out to be so huge,” he said. “If we knew those things, we would have picked a different structure.”

He also emphasized the company’s evolving perspective on AGI: “We’ve also said that our intention is to treat AGI as a mile marker along the way. We’ve left ourselves some flexibility because we don’t know what will happen.”

Altman downplayed the significance of achieving AGI, a milestone once viewed by OpenAI as an existential threat to humanity. “My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” he said. “And a lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”

In recent months, OpenAI has seen a string of high-profile departures, including AI governance researcher Richard Ngo, who left over concerns that the company was straying from its original mission of ensuring AGI’s safe development. Others include co-founder Ilya Sutskever, who left to start his own company; co-founder John Schulman, who joined Anthropic; and former safety leader Jan Leike.