Elon Musk’s X has blocked searches for Taylor Swift after sexually explicit images of the pop star created using artificial intelligence spread widely on the platform.
The incident is the latest example of how social media groups are scrambling to tackle so-called deepfakes: realistic images and audio, generated using AI, that can be abused to portray prominent individuals in compromising or misleading situations without their consent.
Searches for terms such as “Taylor Swift” or “Taylor AI” on X returned an error message for several hours over the weekend, after AI-generated pornographic images of the singer proliferated online in the past few days. The change means that even legitimate content about one of the world’s most popular stars is harder to view on the site.
“This is a temporary action and done with an abundance of caution as we prioritise safety on this issue,” Joe Benarroch, head of business operations at X, said.
Swift has not publicly commented on the matter.
X was bought for $44bn in October 2022 by billionaire entrepreneur Musk, who has cut back on resources dedicated to policing content and loosened its moderation policies, citing his free speech ideals.
Its use of the blunt moderation mechanism this weekend comes as X and its rivals Meta, TikTok and Google’s YouTube face mounting pressure to tackle abuse of increasingly realistic and easy-to-access deepfake technology. A brisk market of tools has emerged that allows anyone to use generative AI to create a video or image in the likeness of a celebrity or politician with a few clicks.
Though deepfake technology has been available for several years, recent advances in generative AI have made the images easier to create and more realistic. Experts warn that fake pornographic imagery is one of the most common emerging abuses of deepfake technology, and also point to its increasing use in political disinformation campaigns during a year of elections around the world.
In response to a question about the Swift images on Friday, White House press secretary Karine Jean-Pierre said the circulation of the false images was “alarming”, adding: “While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules.” She urged Congress to legislate on the matter.
On Wednesday, social media executives including X’s Linda Yaccarino, Meta’s Mark Zuckerberg and TikTok’s Shou Zi Chew will face questioning at a US Senate judiciary committee hearing on child sexual exploitation online, following mounting concerns that their platforms do not do enough to protect children.
On Friday, X’s official safety account said in a statement that posting “Non-Consensual Nudity (NCN) images” was “strictly prohibited” on the platform, which has a “zero-tolerance policy towards such content”.
It added: “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed.”
However, X’s depleted content moderation resources were unable to stop the faked Swift images from being viewed millions of times before removal, forcing the company to resort to blocking searches for one of the world’s biggest stars.
A report by technology news site 404 Media found that the images appeared to originate on the anonymous bulletin board 4chan and in a group on messaging app Telegram dedicated to sharing abusive AI-generated images of women, often made with a Microsoft tool.
Microsoft said it was still investigating the images, but had “strengthened our existing safety systems to prevent our services from being used to help generate images like them”.
Telegram did not immediately respond to requests for comment.