But that success raised tensions inside the company. Ilya Sutskever, a respected A.I. researcher who co-founded OpenAI with Mr. Altman and nine other people, was increasingly worried that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Mr. Sutskever, a member of the company’s board of directors, also objected to what he saw as his diminished role inside the company, according to two of the people.
That conflict between fast growth and A.I. safety came into focus on Friday afternoon, when Mr. Altman was pushed out of his job by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders were saying the split was as significant as when Steve Jobs was forced out of Apple in 1985.
But on Saturday, in a head-spinning turn, Mr. Altman was said to be in discussions with OpenAI’s board about returning to the company.
The ouster on Friday of Mr. Altman, 38, drew attention to a longtime rift in the A.I. community between people who believe A.I. is the biggest business opportunity in a generation and others who worry that moving too fast could be dangerous. And the vote to remove him showed how a philosophical movement devoted to the fear of A.I. had become an unavoidable part of tech culture.
Since ChatGPT was released almost a year ago, artificial intelligence has captured the public’s imagination, with hopes that it could be used for important work like drug research or to help teach children. But some A.I. scientists and political leaders worry about its risks, such as jobs getting automated out of existence or autonomous warfare that grows beyond human control.
Fears that A.I. researchers were building a dangerous thing have been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build it. (...)
In recent weeks, Jakub Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to director of research at the company. After previously occupying a position below Mr. Sutskever, he was elevated to a position alongside Mr. Sutskever, according to two people familiar with the matter.
Mr. Pachocki quit the company late on Friday, the people said, soon after Mr. Brockman. Earlier in the day, OpenAI said Mr. Brockman had been removed as chairman of the board and would report to the new interim chief executive, Mira Murati. Other allies of Mr. Altman — including two senior researchers, Szymon Sidor and Aleksander Madry — have also left the company.
Mr. Brockman said in a post on X, formerly Twitter, that even though he was the chairman of the board, he was not part of the board meeting where Mr. Altman was ousted. That left Mr. Sutskever and three other board members: Adam D’Angelo, chief executive of the question-and-answer site Quora; Tasha McCauley, an adjunct senior management scientist at the RAND Corporation; and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.
They could not be reached for comment on Saturday.
Ms. McCauley and Ms. Toner have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.
In 2021, a researcher named Dario Amodei, who also has ties to this community, and about 15 other OpenAI employees left the company to form a new A.I. company called Anthropic.
Mr. Sutskever was increasingly aligned with those beliefs. Born in the Soviet Union, he spent his formative years in Israel and emigrated to Canada as a teenager. As an undergraduate at the University of Toronto, he helped create a breakthrough in an A.I. technology called neural networks.
In 2015, Mr. Sutskever left a job at Google and helped found OpenAI alongside Mr. Altman, Mr. Brockman and Tesla’s chief executive, Elon Musk. They built the lab as a nonprofit, saying that unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build what is called artificial general intelligence, or A.G.I., a machine that can do anything the brain can do.
Mr. Altman transformed OpenAI into a for-profit company in 2018 and negotiated a $1 billion investment from Microsoft. Such enormous sums of money are essential to building technologies like GPT-4, which was released earlier this year. Since its initial investment, Microsoft has put another $12 billion into the company.
The company was still governed by the nonprofit board. Investors like Microsoft do receive profits from OpenAI, but their profits are capped. Any money over the cap is funneled back into the nonprofit.
As he saw the power of GPT-4, Mr. Sutskever helped create a new Super Alignment team inside the company that would explore ways of ensuring that future versions of the technology would not do harm.
Mr. Altman was open to those concerns, but he also wanted OpenAI to stay ahead of its much larger competitors.
by Cade Metz, NY Times | Read more:
Image: From left, Mira Murati, interim chief executive; Sam Altman, ousted chief executive; Greg Brockman, OpenAI's president and board member; Ilya Sutskever, also on the company's board. Credit: Jim Wilson/The New York Times
[ed. Wow. For readers not familiar with the Rationalist/Effective Altruist community (Sam Bankman-Fried was an advocate), visit Less Wrong. There's an open Sam Altman thread: Sam Altman fired from OpenAI, and others, like: Altman firing retaliation incoming?, and this shortform, with details I was unaware of:]
***
"What's the situation? In the USA: Musk's xAI announced Grok to the world two weeks ago, after two months of training. Meta disbanded its Responsible AI team. Google's Gemini is reportedly to be released in early 2024. OpenAI has confused the world with its dramatic leadership spasm, but GPT-5 is on the way. Google and Amazon have promised billions to Anthropic.
In Europe, France's Mistral and Germany's Aleph Alpha are trying to keep the most powerful AI models unregulated. China has had regulations for generative AI since August, but is definitely aiming to catch up to America. Russia has GigaChat and SistemmaGPT, the UAE has Falcon. I think none of these are at GPT-4's level, but surely some of them can get there in a year or two.
Very few players in this competitive landscape talk about AI as something that might rule or replace the human race. Despite the regulatory diplomacy that also came to life this year, the political and economic elites of the world are on track to push AI across the threshold of superintelligence, without any realistic sense of the consequences."
"OpenAI’s investors are making efforts to bring back Sam Altman, the chief executive who was ousted Friday, said people familiar with the matter, the latest development in a fast-moving chain of events at the artificial-intelligence company behind ChatGPT.
Altman is considering returning but has told investors that he wants a new board, the people said. He has also discussed starting a company that would bring on former OpenAI employees, and is deciding between the two options, the people said.
Altman is expected to decide between the two options soon, the people said. Leading shareholders in OpenAI, including Microsoft and venture firm Thrive Capital, are helping orchestrate the efforts to reinstate Altman. Microsoft invested $13 billion into OpenAI and is its primary financial backer. Thrive Capital is the second-largest shareholder in the company."
***
"It may be that the race is not always to the swift, nor the battle to the strong — but that’s the way to bet."
So far as we know, it seems to be: the pro-alignment faction in OpenAI was concerned and managed to oust Altman; alarums and excursions ensued; Microsoft nearly lost their lives; Altman is now getting (it's fair to imagine) whatever the hell setup he wants at Microsoft - all in order to stay on track to get AI out in a commercial product that will give Microsoft a market monopoly as the first to get there and turn on the eternal money fountain.
And that's reality, folks. (ACX)