Report: OpenAI Researchers Warned Board of Significant AI Breakthrough Ahead of CEO’s Firing

Last week, I reported that OpenAI CEO Sam Altman was reinstated, successfully reversing his ouster by the Artificial Intelligence (AI) firm’s board after a campaign waged by his allies. The move concluded five days of high-stakes corporate drama that had Microsoft offering to hire Altman and most of the OpenAI staff threatening to quit.

The motivation behind the firing may now be coming to light. According to a Reuters report, the move may have been based on concerns over rapid commercialization . . . as well as a significant breakthrough in AI capabilities.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks….

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

The news about a breakthrough aligns with events that preceded the dramatic week of firing and rehiring.

Many experts are concerned that companies such as OpenAI are moving too fast towards developing artificial general intelligence (AGI), the term for a system that can perform a wide variety of tasks at human or above human levels of intelligence – and which could, in theory, evade human control.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said the existence of a maths-solving large language model (LLM) would be a breakthrough. He said: “The intrinsic ability of LLMs to do maths is a major step forward, allowing AIs to offer a whole new swathe of analytical capabilities.”

Speaking on Thursday last week, the day before his surprise sacking, Altman indicated that the company behind ChatGPT had made another breakthrough.

In an appearance at the Asia-Pacific Economic Cooperation (Apec) summit, he said: “Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime.”

