
Taylor Swift's Privacy Violation: X Blocks Searches After Explicit AI Images Go Viral


Taylor Swift (Getty Images)

Social media platform X has disabled searches for Taylor Swift following the circulation of explicit AI-generated images of the singer.


Joe Benarroch, X's head of business operations, told the BBC the move was a "temporary action" to put safety first.

Attempts to search for Swift on the site now return an error message stating "Something went wrong. Try reloading."


Earlier this week, fabricated, explicit images of the singer spread across the platform. Some reached millions of views, alarming both US officials and the singer's legion of fans.


Using the phrase "Protect Taylor Swift," her supporters flooded the platform with authentic photos and videos of her, while flagging posts and accounts that shared the fake images. Posting non-consensual nudity on the site is "strictly prohibited," according to a statement released on Friday by X (formerly Twitter).


"We have a zero-tolerance policy towards such content," the statement said. "Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them." X has not made clear when it began blocking searches for Swift, or whether it has ever done so for other public figures or topics.


The move is being made "with an abundance of caution as we prioritize safety on this issue," Mr. Benarroch said in an email to the BBC. The White House took notice of the situation on Friday, calling the spread of the AI-generated images "alarming."


During a briefing, White House press secretary Karine Jean-Pierre said, "We know that lax enforcement disproportionately impacts women, and it also impacts girls, sadly, who are the overwhelming targets." She added that platforms should act to prohibit such content on their sites, and that legislation should be enacted to address the misuse of AI on social media.


"We believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people," Jean-Pierre stated.


Legislators in the United States have also called for new laws to criminalize the creation of deepfake images. A deepfake is a video or image in which a person's face or body has been digitally altered using AI. Research conducted in 2023 found that the production of doctored images has increased by 550% since 2019, driven by the rise of artificial intelligence tools. While some US states have taken action against the spread of fake images, no such legislation exists at the federal level.


In the UK, the Online Safety Act 2023 made sharing deepfake pornography unlawful.








Source: BBC
