A recent Rolling Stone investigation, citing research from Alethea, explores how coordinated bot networks and AI-generated accounts amplified misleading narratives tied to the Epstein files on X. The case illustrates how quickly synthetic and automated activity can shape online discourse around politically sensitive topics.

The reporting draws on Alethea's analysis and data from our AI risk management platform, Artemis, which uncovered at least 400 inauthentic X profiles involved in the campaign and identified patterns of coordinated inauthentic behavior and network amplification.

The AI bot network began to fracture when conflicting narratives about the Epstein files circulated within the same pro-Trump information ecosystem. Because many of the bots were powered by automated content generation and scraping from different partisan sources, some accounts began promoting contradictory claims about the files’ contents and significance. As a result, the bots started arguing with and amplifying rebuttals to one another, exposing the network’s automated nature and revealing how loosely coordinated AI-driven propaganda systems can spiral into self-contradiction when the underlying narratives diverge.
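The self-contradiction described above is itself a detection signal. As a minimal illustrative sketch (not Alethea's or Artemis's actual method; the account names, narrative labels, and stance encoding are all invented), one could measure how evenly a single cluster of accounts splits for and against the same narrative: a genuine audience tends to lean one way, while a bot network scraping conflicting partisan sources can end up arguing with itself.

```python
# Hypothetical sketch: score how much a cluster of accounts contradicts
# itself on each narrative it pushes. All data below is invented.

from collections import defaultdict

def contradiction_score(posts):
    """posts: list of (account, narrative, stance), stance = +1 or -1.

    Returns, per narrative, the fraction of posts taking the minority
    stance. Values near 0.5 mean the cluster is split down the middle,
    i.e. arguing with itself; values near 0.0 mean a unified message.
    """
    tallies = defaultdict(lambda: [0, 0])  # narrative -> [pro, anti]
    for _account, narrative, stance in posts:
        tallies[narrative][0 if stance > 0 else 1] += 1
    scores = {}
    for narrative, (pro, anti) in tallies.items():
        total = pro + anti
        scores[narrative] = min(pro, anti) / total if total else 0.0
    return scores

# Two accounts promote the claim, two rebut it -- a maximally split cluster.
posts = [
    ("bot_a", "files_buried", +1),
    ("bot_b", "files_buried", +1),
    ("bot_c", "files_buried", -1),  # fed by a conflicting partisan source
    ("bot_d", "files_buried", -1),
]
print(contradiction_score(posts))  # {'files_buried': 0.5}
```

A real pipeline would combine a signal like this with timing, content, and network features, but even this toy version captures why diverging narratives made the network visible.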

In short: the same automation that allowed the network to scale also caused it to collapse into infighting once the messaging ecosystem split, making the bot activity easier for researchers to identify and map.

The findings highlight a broader challenge in today’s information environment: automated and AI-enabled networks can accelerate the visibility and perceived legitimacy of misleading claims before their origins or intent are fully understood. By combining advanced analytics with expert investigation, Alethea helps organizations detect these coordinated influence operations early, understand the actors and narratives behind them, and strengthen their AI risk management strategies in an increasingly automated digital landscape.
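One widely used family of signals for coordinated inauthentic behavior is co-sharing: pairs of accounts that amplify a near-identical set of items. The sketch below is a generic illustration of that idea under invented data and an arbitrary threshold; it is not a description of how Artemis works.

```python
# Hypothetical sketch of a co-sharing coordination signal: flag account
# pairs whose sets of shared items overlap heavily (Jaccard similarity).
# Account names, item ids, and the 0.8 threshold are all illustrative.

from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two collections, treated as sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def coordinated_pairs(shares, threshold=0.8):
    """shares: dict mapping account -> iterable of shared item ids.

    Returns account pairs whose shared-item overlap meets the threshold.
    """
    return [
        (x, y)
        for x, y in combinations(sorted(shares), 2)
        if jaccard(shares[x], shares[y]) >= threshold
    ]

shares = {
    "acct1": ["u1", "u2", "u3", "u4"],
    "acct2": ["u1", "u2", "u3", "u5"],  # 3/5 overlap with acct1 -> 0.6
    "acct3": ["u1", "u2", "u3", "u4"],  # identical to acct1 -> 1.0
}
print(coordinated_pairs(shares))  # [('acct1', 'acct3')]
```

In practice such similarity scores are one input among many; attributing intent still requires the expert investigation the paragraph above describes.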

Read the full Rolling Stone story here.

