Is the Altruistic OpenAI Gone? – Slashdot

“The altruistic OpenAI is gone, if it ever existed,” argues a new article in The Atlantic, based on interviews with more than 90 current and former employees, including executives. It notes that shortly before Sam Altman’s ouster (and rehiring), he was “seemingly trying to circumvent safety processes for expediency,” with OpenAI co-founder and chief scientist Ilya Sutskever telling three board members, “I don’t think Sam is the guy who should have the finger on the button for AGI.” (The board had already discovered that Altman “had not been forthcoming with them about a range of issues,” including a breach in the Deployment Safety Board’s protocols.)

Adapted from the upcoming book, Empire of AI, the article first revisits the summer of 2023, when Sutskever (“the brain behind the large language models that helped build ChatGPT”) met with a group of new researchers:

Sutskever had long believed that artificial general intelligence, or AGI, was inevitable — now, as things accelerated in the generative-AI industry, he believed AGI’s arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever’s thinking…. To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan. “Once we all get into the bunker — ” he began, according to a researcher who was present.

“I’m sorry,” the researcher interrupted, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.” Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. “There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture,” the researcher told me. “Literally, a rapture….”

But by the middle of 2023 — around the time he began speaking more regularly about the idea of a bunker — Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman’s pattern of behavior was undermining the two pillars of OpenAI’s mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

“For a brief moment, OpenAI’s future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened,” the article concludes. Instead, there was “a lack of clarity from the board about their reasons for firing Altman,” and fear among employees that the company would fail to realize its potential (some also feared losing the chance to sell millions of dollars’ worth of their equity).

“Faced with the possibility of OpenAI falling apart, Sutskever’s resolve immediately started to crack… He began to plead with his fellow board members to reconsider their position on Altman.” And in the end “Altman would come back; there was no other way to save OpenAI.”

The article continues:

To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we’ll make our future better, not worse? The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be….

The author believes OpenAI “has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models…”

“At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it’s also eroding their critical thinking.”
