Jan Leike, a key OpenAI researcher who resigned earlier this week following the departure of co-founder Ilya Sutskever, posted on X Friday morning that "safety culture and processes have taken a backseat to shiny products" at the company.
Leike's statements came after Wired reported that OpenAI had disbanded the team dedicated to addressing long-term AI risks (called the "Superalignment team") altogether. Leike had been running the Superalignment team, which formed last July to "solve the core technical challenges" of implementing safety protocols as OpenAI developed AI that can reason like a human.
The original idea for OpenAI was to openly share its models with the public, hence the organization's name, but the models have since become proprietary, with the company arguing that allowing anyone to access such powerful models could be potentially damaging.
"We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can," Leike said in follow-up posts about his resignation Friday morning. "Only then can we ensure AGI benefits all of humanity."
The Verge reported earlier this week that John Schulman, another OpenAI co-founder who supported Altman during last year's unsuccessful board coup, will assume Leike's responsibilities. Sutskever, who played a key role in the infamous failed coup against Sam Altman, announced his departure on Tuesday.
"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike posted.
Leike's posts highlight a growing tension within OpenAI. As researchers race to develop artificial general intelligence while managing consumer AI products like ChatGPT and DALL-E, employees like Leike are raising concerns about the potential dangers of creating super-intelligent AI models. Leike said his team was deprioritized and couldn't get compute and other resources to perform "crucial" work.
"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."