Taylor Swift explicit images raise concerns over AI-generated deepfakes

Taylor Swift (Getty Images)

Fake nude images of Taylor Swift, likely made with AI tools, spread quickly across social media this week. Fans were upset, and lawmakers are now urging action to protect women and to hold accountable both the technology and the platforms responsible for spreading such content, according to The New York Times and CNN.

The images originally appeared on the messaging app Telegram but gained far more attention once posted on X and other social networks. One image posted on X reached 47 million views before the account was suspended. X suspended several accounts that shared the fakes, but the images continued to spread on other platforms.

X says it maintains a strict policy against such content and is actively removing the images. Critics counter that problematic content has grown on X since Elon Musk bought the platform in 2022 and its moderation rules have been relaxed.

The cybersecurity company Reality Defender says the images were likely created with a type of artificial intelligence called a diffusion model, a technology now available through more than 100,000 apps and public models.

AI-generated deepfakes

Taylor Swift's case has drawn public attention to a broader and increasingly troubling problem. As artificial intelligence has grown more popular, tools for creating fake images and videos have become widespread, making it easier and cheaper to produce deepfakes: fabricated videos or images of people doing or saying things they never did.

Researchers now worry that deepfakes could be used to spread misinformation, such as fake nude images or embarrassing portrayals of political figures. The problem is worsening, especially given how easily the public can be misled by fabricated populist or hate speech delivered through deepfakes of prominent politicians.

Deepfakes can also serve disinformation campaigns during wartime. Russia, for example, has spread fake videos of Ukraine's military and political leadership online in an attempt to undermine the nation's trust in its authorities.

Nine U.S. states have tried to restrict fake explicit and political content made by AI, but these rules have so far proved largely ineffective, and there are no federal regulations on the issue. Lawmakers are now renewing their calls for action, arguing that AI can be used to create inappropriate images without consent, violating fundamental human rights.