AI developers obligated to inform U.S. government about safety test results
The U.S. government under President Joe Biden is pressing tech companies that build large artificial intelligence systems to report to the authorities on how safe those systems are. The White House has set a rule that AI companies must share important safety test results with the Commerce Department, according to AP News and The Messenger Tech.
This is part of a plan President Biden set in motion three months ago to handle the fast-growing technology. The executive order he signed at the time requires any company "developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety" to inform the government.
The White House AI Council is set to review the plan's progress. One of the main goals for the first 90 days was to get AI companies to share vital information, including safety test results.
Ben Buchanan, the White House special adviser on AI, says the government aims to ensure that AI systems meet safety standards before they are released to the public. “The president has been very clear that companies need to meet that bar,” he said.
As part of the plan, the companies have agreed on certain categories of safety tests, but there is as yet no common standard for the tests themselves.
Federal agencies, including the Departments of Defense, Transportation, Treasury, and Health and Human Services, have assessed the risks of using AI in critical national infrastructure, such as the electric grid. They have also stepped up the hiring of data scientists and AI experts to better manage this transformative technology.
“We know that AI has transformative effects and potential,” Buchanan said. “We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”
Safety concerns around fast-growing AI technology
After Microsoft-backed OpenAI introduced ChatGPT, an AI-powered chatbot, a wave of similar products followed, including Google Bard, Google Gemini, and Samsung Gauss.
In December 2023, OpenAI, the creator of ChatGPT, released new guidelines for evaluating "catastrophic risks" posed by the artificial intelligence models it is developing. The framework followed the brief removal and subsequent reinstatement of CEO Sam Altman, who had faced criticism from board members for prioritizing OpenAI's rapid advancement while potentially ignoring concerns about the risks the technology poses.
Artificial intelligence has become a significant concern for governments worldwide, both economically and for national security. The International Monetary Fund has predicted that almost 40% of jobs worldwide could be affected by artificial intelligence. In advanced economies, AI could affect 60% of jobs, while in emerging-market and low-income countries it is expected to affect 40% and 26% of jobs, respectively.