OpenAI's mad weekend

Sam Altman was fired by the board of OpenAI on Friday – over a Google Meet call. Yep, those things happen. Before I delve into all the wild speculation about “what? why?” – and some of it is really out of this world – I should say that as of this writing (Sunday, November 19th, morning CET) it looks like the board is now in discussions with Sam Altman about rehiring him.

Will he accept or not?

That will be the cliffhanger … but what happened, happened, and cannot be undone – unless we can rewind the simulation we live in, and I don’t have that command 😊

OpenAI Structure and the Need for Fresh Capital

OpenAI is not a regular company. It’s a non-profit (not supposed to make money), but when they realized how much capital their research required – training large language models takes enormous amounts of expensive GPU time, can cost billions for a single model, and depends on scarce, in-demand hardware – they built a for-profit entity embedded in the non-profit, and that is where Microsoft poured its billions.

Reportedly, Sam Altman is in talks with deep-pocketed individuals and institutions (Masayoshi Son, Middle Eastern sovereign wealth funds), discussing, amongst other topics, funding to build their own chips and become less reliant on Nvidia GPUs.

Altman holds zero shares in OpenAI, and he has always stated that the board should be independent and that the stuff they are working on (namely Artificial General Intelligence, more on this later) should be insulated from profit pressures and could not be handled by a publicly listed company, because of the inevitable greed of shareholders.

So Microsoft is pouring billions into OpenAI but has no say in the company’s governance.

The board is six people – three from OpenAI, all cofounders: Altman, Brockman (chairman), and Sutskever – and three “relative nobodies”: Adam D’Angelo, Tasha McCauley, and Helen Toner. None of them has a financial stake in the company. (All of this is pretty well explained on their website here, if you’re interested.)

OpenAI News from the Last Couple of Weeks, Fuel for Various Hypotheses:

  1. OpenAI held their developer conference; for lots of people (including me) it was an astounding success, but voices in the “theoretical AI community” dismissed it as “commercial crap” and too far removed from the “noble goals of building AGI”.
  2. OpenAI changed the values listed on their website (source) – they are now strongly focused on AGI. It’s kind of weird to change your values all of a sudden. Maybe it’s as simple as making a stronger appeal to top-notch scientists, who are the key resource for developing the algorithms and are very scarce. Or not.
  3. OpenAI was working on a financing round valuing the for-profit company at $86Bn (ouch) and using it as leverage to recruit top talent – giving them shares at the current valuation, which is $29Bn.
  4. There was a security issue last week, flagged by Microsoft, who restricted their employees’ access to ChatGPT for a couple of hours; however, the whole story seems irrelevant (source).
  5. New subscriptions are not available at the moment – officially because of high demand, but some rumors say the custom GPT tech is not secure enough and was rolled out too fast.
  6. Sam Altman has hinted at AGI multiple times lately, in somewhat veiled terms.
  7. Sam Altman is an investor in Humane (the company behind the AI Pin) and is reportedly in talks with Jony Ive to create something similar.
  8. OpenAI very recently filed for trademarks on the brands “GPT5”, “GPT6”, and “GPT7”.

Everyone interested in the matter understands that there is internal tension between “let’s go fast and release stuff” (Altman, supposedly) and “let’s go slowly and be careful not to unleash a malevolent genie from the machine” (Sutskever, supposedly). Sutskever has a couple of quotes like “the future looks good for AI, it would be great if it looks good for humans too” that are a bit chilling, coming from a guy who’s been working on this stuff forever.

For additional context, we need to remember that Anthropic was founded a couple of years ago by former OpenAI researchers who disagreed with Altman’s vision, and they are now a significant player in the LLM space.

We should also note that Altman’s position has an intellectual justification: “this stuff is so important and disruptive that we need to expose the world to it, even if it’s not perfect, and we need to start a global dialogue about it”.

Altman is already worth $500M and insists that what he does at OpenAI should not be profit-driven … it’s difficult to believe he is driven by financial greed.

What happened Friday?

  • Sam Altman was fired by the board. The stated reason was, basically, that he had been lying to the board. About what, we don’t know.
  • Greg Brockman, the board’s chairman, was removed from the board. He was asked to stay at the company, but resigned after learning that Altman had been fired.
  • Mira Murati was named interim CEO.
  • Other important researchers resigned.

So … why would the board do that?

Many ideas are floating around on X, in articles, and on YouTube. Lots of them don’t make any sense to me, but the speculation is wild!

What seems the most logical to me:

Ilya Sutskever got upset that Altman was pushing too hard in a commercial direction and focusing on raising billions in cash from investors. He wanted to go back to deeper research and forget about CustomGPT-type products. Removing Altman required four of the six board members; since Altman and Brockman would obviously not vote for it, all four remaining members – including Sutskever himself – had to vote against Altman, which means he is definitely involved in the decision.

At the developer day, lots of the new announcements were geared towards the enterprise world – like covering customers against copyright lawsuits, or offering full customization of the model for $2M. And CustomGPTs. Still, Altman could not have pulled all of this out of his sleeve without the rest of the company (tech and product) knowing about it.

The tension between the “pure geek” and the “entrepreneur” is a very old story in Silicon Valley. Sutskever has spent his life in labs, coding and thinking, and he’s obviously some type of genius. Altman has supervised hundreds of startups at Y Combinator but does not have a post-doc in machine learning. With such different backgrounds – and maybe different forms of intelligence – it may become difficult to align their visions for the organization.

Now let’s be a bit more speculative, with ideas from the wild corners of the X-space:

  1. OpenAI has actually found a way to achieve AGI, and Altman wants to push it while the others don’t, for safety reasons. Highly speculative – the consensus among experts is that LLMs are great but limited, and AGI will require additional algorithmic breakthroughs.
  2. Musk hates Altman, is playing behind the scenes, and manipulated the board and Sutskever (whom he recruited to OpenAI from Google) into kicking him out.
  3. Microsoft is behind it. This does not make sense; if anything, the affair makes them look stupid for having poured billions into a partner they cannot control. The announcement caused Microsoft’s stock to drop by a couple of billion more than their entire investment!
  4. OK, and also: “AGI has been achieved and has taken control of the company”. Well, if that’s the case, this artificial intelligence is pretty similar to human stupidity, isn’t it?

Who else would benefit from a blow to OpenAI? Google, obviously, as the two are at war to recruit top talent, and the code is developed by that top talent. We don’t know who the board members have ties to. Sutskever did spend some time at Google, and his mentor was Geoffrey Hinton, who worked on deep learning back when nobody cared or found it interesting, received late recognition for it while at Google, and is now very much on the cautious side of things.

What next?

Whatever the reason, there is a lot of downside for OpenAI.

  • Altman will start his own venture that competes with OpenAI. When you see that Mistral was founded earlier this year, raised tons of money, and has already released an LLM that is earning lots of accolades, it’s clear this does not take years if you have the right financing and connections. And he will poach lots of employees.
  • Hiring talent will become a tad more difficult.
  • Not sure the $86Bn valuation is still … valuable.
  • Customer trust erosion. If I’m a large enterprise customer, I’m not sure I’m very comfortable with the company’s future direction, or lack thereof.
  • Microsoft is probably quite pissed, which is not very attractive to new investors.

So either Altman did something really, really bad, or this is a really, really stupid decision. The fact that the board has reopened talks to reinstate him reinforces the second hypothesis.

Temporary conclusions

  • Governance and alignment are always key, whether you’re in a regular company or the most visible startup in the world.
  • Humans are still humans, even in this AI age, and are quite prone to irrational behavior. Fame, recognition, power, fear, and money are still strong drivers. Until GPT-X becomes the CEO 😊.

ChatGPT conclusion (I did write the prompt, however)

Embracing Generative AI Amidst Industry Turbulence

The unfolding events at OpenAI serve as a stark reminder of the volatility inherent in the rapidly evolving AI sector. However, this instability should not deter companies from exploring and integrating generative AI into their operations. In an era where technological advancements are leapfrogging at an unprecedented pace, waiting on the sidelines could mean missing out on transformative opportunities.

Generative AI, despite its infancy, is already demonstrating profound impacts across various domains, from content creation to complex problem-solving. The current situation at OpenAI, while tumultuous, also highlights the intense innovation and fierce competition driving this field forward. It’s a clear indicator that generative AI is not just a fleeting trend but a foundational shift in how technology will shape our future.

For businesses, the key is to approach AI with a strategy that balances caution with experimentation. It’s about being agile enough to harness the potential of AI while being vigilant about the ethical and practical implications. In doing so, companies can not only navigate the uncertainties of today's AI landscape but also lay the groundwork for leveraging more advanced AI capabilities that emerge tomorrow.

In conclusion, while the AI industry may be in a state of flux, the potential rewards for early adopters are significant. Companies should view the current developments not as a deterrent, but as a clarion call to thoughtfully engage with generative AI, shaping its evolution and reaping its benefits in the process.


