Unheard Voice
In our latest report, the Stanford Internet Observatory collaborated with Graphika to analyze a large network of accounts removed from Facebook, Instagram, and Twitter. This information operation likely originated in the United States and targeted a range of countries in the Middle East and Central Asia.
In July and August 2022, Twitter and Meta removed two overlapping sets of accounts for violating their platforms’ terms of service. Twitter said the accounts fell foul of its policies on “platform manipulation and spam,” while Meta said the assets on its platforms engaged in “coordinated inauthentic behavior.” After taking down the assets, both platforms provided portions of the activity to Graphika and the Stanford Internet Observatory for further analysis.
Our joint investigation found an interconnected web of accounts on Twitter, Facebook, Instagram, and five other social media platforms that used deceptive tactics to promote pro-Western narratives in the Middle East and Central Asia. The platforms’ datasets appear to cover a series of covert campaigns over a period of almost five years rather than one homogeneous operation.
These campaigns consistently advanced narratives promoting the interests of the United States and its allies while opposing countries including Russia, China, and Iran. The accounts heavily criticized Russia in particular for the deaths of innocent civilians and other atrocities its soldiers committed in pursuit of the Kremlin’s “imperial ambitions” following its invasion of Ukraine in February 2022. A portion of the activity also promoted anti-extremism messaging.
We believe this activity represents the most extensive case of covert pro-Western influence operations on social media to be reviewed and analyzed by open-source researchers to date. With few exceptions, the study of modern influence operations has overwhelmingly focused on activity linked to authoritarian regimes in countries such as Russia, China, and Iran, with recent growth in research on the integral role played by private entities. This report illustrates the much wider range of actors engaged in active operations to influence online audiences.
At the same time, the Twitter and Meta datasets reveal the limited range of tactics that influence operation actors employ; the covert campaigns detailed in this report are notable for how similar they are to previous operations we have studied. The assets identified by Twitter and Meta created fake personas with GAN-generated faces, posed as independent media outlets, leveraged memes and short-form videos, attempted to start hashtag campaigns, and launched online petitions: all tactics observed in past operations by other actors.
Importantly, the data also shows the limitations of using inauthentic tactics to generate engagement and build influence online. The vast majority of posts and tweets we reviewed received no more than a handful of likes or retweets, and only 19% of the covert assets we identified had more than 1,000 followers.