The renewed attention around the Epstein Files was never going to stay confined to courtrooms or legal briefs. Once the U.S. Department of Justice released more than three million additional pages of records on January 30, the internet did what it does best: it went wild.

Many conspiracy theorists felt validated, and a slew of new and recycled outlandish theories in English and Spanish instantly went mainstream online (think “Avicii and Justin Bieber warned us about Epstein” or “Brittany Murphy was murdered to hide what she knew”).

The Epstein case has every ingredient that drives virality on social media: shock, outrage, and moral horror. Children, and some of our most powerful leaders, are central to the story, heightening emotional response and shrinking public tolerance for uncertainty or delay. Even without algorithms, that would guarantee attention.

But algorithms, combined with information overload and a lack of trust or clarity, are breeding misinformation, pushing our essentially borderless media ecosystems, and our patience for BS, to the breaking point.

Thousands of pages of documents are now circulating widely online, yet many of the most consequential details remain unclear or heavily redacted. Names appear without explanation. Timelines are fragmented. Key portions are withheld. For most people encountering this story through social media, the result is pure confusion, an overwhelming flood of information with little guidance on what is verified, unresolved, or simply false. In that environment, information, or a lack thereof, is quickly interpreted as proof that some “truth is being hidden.” 

Social media platforms then supercharge suspicion. Once a user engages with Epstein-related content, recommendation systems serve up increasingly speculative interpretations, rewarding emotional intensity rather than evidence. What begins as curiosity can quickly become a spiral down the rabbit hole. My Instagram algorithm alone showed me over 25 conspiracy videos in the 10 minutes after I clicked on a single related link.

This dynamic is further intensified because the case implicates elites. Research from the Digital Democracy Institute of the Americas (DDIA) shows just how primed the public already is for elite-focused conspiracy narratives. In DDIA’s 2024 polling of U.S. Latinos on disinformation and democracy, misleading claims about elites were among the most widely seen and widely believed. The single most believed claim tested that year was that “Trump was on ‘Epstein’s list.’” At the time, his name had not yet appeared in the files, yet more than half of respondents who encountered the claim believed his name was there.

This pattern doesn’t exist in a vacuum. Large portions of the public already believe a “Deep State” operates against ordinary people, that corporations control politics, and that elites collude with media and tech companies to suppress the truth. Those beliefs form the backdrop against which the Epstein Files are now being interpreted.

None of this means public concern is misguided. Transparency, accountability, and justice MUST be delivered. But it also means that, at this moment, the Epstein Files are a stress test for public trust, tech, and democracy. How information is released, and how it is amplified, will shape what people believe about this case, about our institutions, and about accountability more broadly.

That raises uncomfortable questions about responsibility, especially in the age of AI.

Social media platforms are not neutral bystanders. Design choices such as recommendation algorithms, ranking systems, and engagement metrics propel the most shocking or outlandish interpretations to the top. This has clear policy implications. Greater transparency around recommendation systems, clearer standards for handling high-risk information events, and enforceable obligations to assess and mitigate systemic risks, including risks from the use of AI, are long overdue. Without them, platforms will continue to act as accelerants during moments of uncertainty, spreading distrust at scale, including in whatever remains credible.

Supporting influencers in their stewardship of information also matters greatly. In cases already primed for suspicion, ensuring that influencers have the tools and know-how to quickly verify facts and separate them from open questions is essential, because speed and virality rarely take a backseat to transparency and sourcing, especially for creators who do not typically cover investigative or political news.

Finally, this moment underscores the need for stronger collaboration between content creators, reporters, researchers, and trusted community voices. Journalists can benefit from anticipating how stories may be misused or misread online. Influencers can help redirect attention toward verified reporting, survivor-centered perspectives, and explanations of systemic accountability.

In an information environment built for distrust, credibility isn’t earned by releasing more information. It’s earned through authenticity and explanation. What we learn from covering the Epstein Files will shape trust in media, in institutions, and in the very idea that truth still matters for years to come.