OpenAI publishes guidelines to assess potential 'catastrophic' AI risks
OpenAI, the company behind ChatGPT, has released new guidelines for evaluating the "catastrophic risks" that artificial intelligence could pose in models currently under development, according to Barron's.
In the newly published "Preparedness Framework," OpenAI argues that the scientific study of catastrophic AI risks has so far been inadequate and says the framework is intended to bridge that gap.
"We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be," the statement says.
A monitoring team will assess "frontier models," those whose capabilities exceed today's most advanced AI software. Each model will be evaluated and assigned a risk level, ranging from "low" to "critical," across four key categories.
The first category covers cybersecurity, specifically the model's capacity to enable or carry out large-scale cyberattacks.
The second assesses the model's potential to assist in creating harmful agents, whether chemical mixtures, organisms such as viruses, or nuclear weapons.
The third category addresses the persuasive influence of the model, measuring its ability to impact human behavior.
The final category analyzes the model's autonomy, particularly its potential to escape the control of its creators.
Under the framework, models with a risk score exceeding "medium" cannot be deployed. The assessments will be submitted to OpenAI's newly established Safety Advisory Group, and the company's leadership will ultimately decide on any adjustments needed to mitigate the risks.
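For illustration only, the gating rule described above can be read as a simple decision function. The Python sketch below is a hypothetical rendering of that reading: the category names, the function names, and the assumption that a model's overall score is the highest score across the four categories are our own, not taken from OpenAI's framework.

from enum import IntEnum

class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four categories described in the article: cybersecurity, chemical/biological/nuclear
# threats, persuasion, and autonomy.
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "autonomy")

def overall_risk(scores):
    # Assumption: the model's overall risk is its highest category score.
    return max(scores[c] for c in CATEGORIES)

def may_deploy(scores):
    # Per the article, a model may be deployed only if its risk does not exceed "medium".
    return overall_risk(scores) <= RiskLevel.MEDIUM

example = {"cybersecurity": RiskLevel.LOW, "cbrn": RiskLevel.MEDIUM,
           "persuasion": RiskLevel.LOW, "autonomy": RiskLevel.HIGH}
print(may_deploy(example))  # False: the "high" autonomy score blocks deployment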
Risks AI poses
This framework follows the temporary ouster and subsequent reinstatement of CEO Sam Altman. During the turmoil, 514 of the roughly 700 employees at OpenAI, the company that created ChatGPT, threatened to resign, demanding Altman's reinstatement and the board members' resignation.
Photo: Sam Altman (Getty Images)
Media reports suggest that board members criticized Altman for prioritizing the rapid development of OpenAI's technology while downplaying concerns about the risks it poses.
Read more about the ChatGPT phenomenon in the article on RBC-Ukraine.