OpenAI blocks Kremlin-linked propaganda network using ChatGPT
Illustrative photo: OpenAI blocks large-scale Russian propaganda network (Getty Images)
OpenAI has blocked accounts linked to Russia’s propaganda network Rybar as part of an operation codenamed "Fish Food," according to an OpenAI report.
According to the company, propagandists used ChatGPT to draft posts and comments for publication on social media on behalf of Rybar, as well as texts purporting to come from different parts of the world. In addition, the AI video tool Sora was reportedly used to develop plans for information and psychological operations (PSYOPs) and to produce related video materials.
"We banned a set of ChatGPT accounts that were linked to the Rybar network on Telegram and X. At least some of the accounts likely originated in Russia. The network generated content that was posted across the internet, sometimes by Rybar-branded accounts, and sometimes by social media accounts that bore no declared relationship to Rybar," the report says.
Praise for Russia and criticism of Ukraine
OpenAI noted that while the blocked users communicated in Russian, they produced content in multiple languages, including English and Spanish. The propaganda materials praised Russia and its allies, including Belarus, while criticizing Ukraine and Western countries.
One of the banned accounts attempted to design information and psychological operations aimed at interfering in political processes and elections in Africa.
According to the findings, the operation worked on recruiting a network of agents and on planning large-scale events and protests in the Democratic Republic of the Congo, Cameroon, Burundi, and Madagascar.
The reported budget for propaganda activities in Africa reached $600,000. The report also recalled that the United States had previously announced a reward of up to $10 million for information related to the Rybar network.
Russian propaganda tactics
Ukraine’s Center for Countering Disinformation has repeatedly warned that Russian propaganda systematically attempts to discredit Ukraine and its army by spreading fabricated claims linking Ukrainian forces to criminal incidents.
Propagandists use artificial intelligence tools to create fake videos that allegedly feature Ukrainian soldiers. AI-generated videos have appeared on social media in which individuals posing as military personnel emotionally describe shortages of food and equipment.