Searches for Taylor Swift on X seem to be blocked following the circulation of explicit AI-generated images of the star online. 

When fans attempted to look up the popular songwriter’s name on the social media platform on Saturday, all they encountered was an error message that simply read, ‘Posts aren’t loading right now. Try again.’ 

The cause of the issue remains unclear, but the timing of the block may be linked to a series of graphic AI-generated images depicting Swift in sexual acts while dressed in Kansas City Chiefs memorabilia inside the team’s stadium. 

Swift has been a regular at Chiefs games since going public with her romance with star player Travis Kelce.  

As of Saturday afternoon, users can still bypass the issue by enclosing the superstar’s name in quotation marks or searching for it in the media section of the platform. 

Ironically, the media tab is where all the highly explicit AI-generated pictures initially surfaced before X started suspending accounts that had reshared them last week. 

Casey Newton, founder and editor of the technology newsletter Platformer, noticed the block and shared it on Threads. 

He wrote: ‘X is currently not showing *any* search results for “taylor swift,” which I guess was the only real option left after it got rid of almost its entire trust and safety team.’ 

DailyMail.com has reached out to X for comment and further information.  

The singer has been furious about the images and is considering legal action against the sick deepfake porn site hosting them, DailyMail.com revealed exclusively earlier this week. 

The pictures were initially uploaded to Celeb Jihad, a site that flouts state porn laws and continues to outrun cybercrime squads. 

They were soon spread on X, Facebook, Instagram and Reddit. X and Reddit started removing the posts on Thursday morning after DailyMail.com alerted them to some of the accounts. 

A source close to Swift said: ‘Whether or not legal action will be taken is being decided but there is one thing that is clear: these fake AI generated images are abusive, offensive, exploitative, and done without Taylor’s consent and knowledge. 

The source continued: ‘The Twitter account that posted them does not exist anymore. It is shocking that the social media platform even let them be up to begin with. 

‘These images must be removed from everywhere they exist and should not be promoted by anyone. 

‘Taylor’s circle of family and friends are furious, as are her fans obviously. They have the right to be, and every woman should be. 

‘The door needs to be shut on this. Legislation needs to be passed to prevent this and laws must be enacted.’

The abhorrent sites hide in plain sight, seemingly cloaked by proxy IP addresses. 

X posted a statement nearly a day after the images started being posted, saying: ‘Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.’

A Meta spokesman told DailyMail.com: ‘This content violates our policies and we’re removing it from our platforms and taking action against accounts that posted it. 

‘We’re continuing to monitor and if we identify any additional violating content we’ll remove it and take appropriate action.’ 

Explicit AI-generated material, which overwhelmingly harms women and children, is booming online at an unprecedented rate. 

According to an analysis by independent researcher Genevieve Oh that was shared with The Associated Press in December, more than 143,000 new deepfake videos were posted online that year, surpassing every other year combined. 

Desperate for solutions, affected families are pushing lawmakers to implement robust safeguards for victims whose images are manipulated using new AI models, or the plethora of apps and websites that openly advertise their services. 

Advocates and some legal experts are also calling for federal regulation that can provide uniform protections across the country and send a strong message to current and would-be perpetrators. 

The problem of deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more available and easier to use. 

Researchers have been sounding the alarm this year on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters.