The biggest risk is not taking any risk. In a world that is changing very quickly, the only strategy that is guaranteed to fail is not taking risks.

OpenAI creates risk prevention team

OpenAI assesses AI risks

By Dwain Ross

OpenAI has assembled a team dedicated to identifying and preventing risks associated with artificial intelligence. This could lead to the suspension of the launch of an AI model if it is deemed too dangerous.

The announcement comes just a month after the firing of Sam Altman, CEO of OpenAI, the company behind the ChatGPT conversational interface; he was reinstated a few days later.

According to several U.S. media outlets, board members had criticized him for pushing the accelerated development of OpenAI's technology, even if it meant sidestepping certain questions about the potential dangers of AI.

The preparedness team will be led by computer scientist Aleksander Madry, an MIT professor currently on a leave of absence, according to a post the academic published Monday on X (formerly Twitter).

The group's work will follow a framework set out in a document published Monday, which also spells out the team's scope of activities and procedures.

The new group will focus on so-called "frontier models": systems still under development that have the potential to outperform the most advanced artificial intelligence programs available today.

"We believe the scientific study of catastrophic risks from AI has fallen far short of where it needs to be," OpenAI representatives explain in the document.

The creation of the framework should "help address this gap," they said. The team will evaluate each new model and assign it a risk level in four main categories.

The first relates to cybersecurity and the model's ability to carry out large-scale cyberattacks.

The second is the software's propensity to help create chemical or biological agents (such as a virus), or nuclear weapons, that could harm humans.

The third category concerns the model's power of persuasion, that is, the extent to which it can influence human behavior.

The last category concerns the model's potential autonomy, in particular whether it could exfiltrate itself, that is, escape the control of the programmers who created it.

Once identified, the risks will be submitted to the Safety Advisory Group (SAG), a new body that will make recommendations to Sam Altman or a person he designates. The head of OpenAI will then decide what changes must be made to a model to mitigate the associated risks. The board of directors will be kept informed and will be able to overrule a management decision.
