On February 12, 2021 Twitter shared with the Stanford Internet Observatory accounts and tweets associated with four distinct takedowns. These datasets were made public today. They include:
An anti-Azerbaijan/pro-Armenia network that had ties to the government of Armenia: 35 accounts and 72,960 tweets.
Iran: 238 accounts and 560,571 tweets that originated in Iran. Twitter announced the removal of 130 of these accounts on September 30, 2020, and has removed 238 in total as part of a larger investigation.
Russia Network 1: 69 accounts and 26,762 tweets that Twitter says are “reliably tied to Russian state actors”.
Russia Network 2: 31 accounts and 68,914 tweets that Twitter links to a mix of Russian state actors, and actors that may be affiliated with individuals linked to the Internet Research Agency.
In this post we summarize our analysis of these operations. We have also written three in-depth whitepapers, one on the anti-Azerbaijan operation, one on the Iran operation, and one on both of the Russia operations, linked at the top of the page.
This network, which advanced narratives critical of Azerbaijan and favorable toward Armenia, included 72,960 tweets dating back to 2014. Accounts quoted Azerbaijani state messaging, interspersed with intermittent pro-Armenian content, in an attempt to masquerade as Azerbaijani accounts.
The most noteworthy tactic in this network was the creation of accounts pretending to be Azerbaijani government officials. One of these accounts was created in 2014, and changed its handle in 2020 to impersonate the Minister of Foreign Affairs. This is not the first time this tactic has been used. In October 2020 Twitter announced the suspension of a network of accounts linked to the government of Saudi Arabia that created accounts pretending to be an interim Qatari government in exile. One of these accounts, @QtrGov, was created in 2016 and had over 90,000 followers, and there is strong evidence that it did not use this handle prior to 2020. By combining an old creation date and handle switching, information operations can create the impression of account legitimacy. It is possible that @QtrGov used spammy follow-back behavior to grow its following, then wiped its tweets and changed its handle.
On the one hand, there are reasons not to be too worried about this tactic. In both operations, Twitter users called out the accounts as fake. And for the operation described in this report, the fake government accounts gained at most a few hundred followers. At the same time, this tactic has the potential to mislead people. Several of the government officials impersonated in this network lacked their own official Twitter accounts, creating a search vacuum. In an example described in the report, a Google Knowledge Panel for one of these government officials linked to the fake Twitter account.
This network created several sockpuppet accounts impersonating Azerbaijani government officials to push contentious messaging sympathetic to the Armenian perspective in the Nagorno-Karabakh conflict. In one case, the impersonation dated back to 2016.
Accounts in this network increased activity around several flash points in the ongoing conflict between Armenia and Azerbaijan over the Nagorno-Karabakh region. Posting under Azerbaijani account names, the accounts mixed Azerbaijani and Armenian propaganda to mock Azerbaijani figures or highlight negative actions by the Azerbaijani government.
Accounts used common Azerbaijani hashtags such as #JusticeforKhojaly and #StopArmenianAggression, but coupled them with Armenian hashtags like #Artzakh (the Armenian name for the disputed republic).
Accounts posing as news sites primarily shared Azerbaijani news articles through RSS feeds, but periodically tweeted original content using different Twitter clients. These original tweets were largely pro-Armenian.
We also observed a specific, low-engagement astroturfing campaign in 2019 that targeted the anniversary of the Khojaly Tragedy.
On September 30, 2020, based on a tip from the FBI, Twitter announced the removal of 130 accounts originating in Iran that were “attempting to disrupt the public conversation during the first 2020 US Presidential Debate.” On February 23, 2021, Twitter announced that “after the final investigation was complete” it had suspended a total of 238 accounts operating from Iran for “violations of various platform manipulation policies.” Twitter shared the full set of 238 accounts with the Stanford Internet Observatory on February 12, 2021. The now-suspended accounts produced 560,571 tweets, with the earliest activity dating back to 2009. The majority of accounts, however, were created between May and October 2020 and were shortly thereafter removed by Twitter.
Tweets in the takedown were primarily in English, Spanish, Indonesian and Farsi. A smaller amount of content was in Arabic and French. The bulk of the accounts fell into an English-language cluster; they claimed to be Americans and shared divisive content related to the 2020 election and American politics. Most of the purported Americans claimed to be members of the #Resistance against Donald Trump. A few claimed to be Trump supporters. These accounts did not have a significant impact on broader discourse during the September 29, 2020 presidential debate, but a few of the #Resistance accounts managed to get traction on earlier tweets.
The majority of accounts in the takedown purported to be Americans and assumed clearly partisan identities. Most of these accounts claimed to be members of the anti-Trump #Resistance and tweeted divisive content about the president. A few of these accounts had tweets cited in various online outlets, primarily when linking to content from other sources. A smaller number of accounts, with less overall engagement and lower levels of sophistication, claimed to be Trump supporters. The median engagement with tweets from both #Resist and pro-Trump groups was 0 engagements, even during the first 2020 US Presidential debate.
Many of the purported American accounts commented on mainstream media posts; accounts often tweeted content related to the Middle East or US foreign policy on unrelated CNN posts, likely as a way to gain visibility for their causes.
Accounts that tweeted in both Indonesian and English plagiarized from various news articles and other Twitter accounts, and used mass hashtag amplification to publicize hashtags related to Palestinian advocacy. This activity clustered around key events such as Nakba and Quds Day (May 2020), and the UAE-Israel normalization (August 2020). Some of these accounts also expressed support for the Islamic Republic of Iran explicitly.
Spanish-language content was driven by a single account: @Hispantv. This was the official Twitter handle for HispanTV, a Spanish-language arm of Iranian state media that was launched in 2012. HispanTV tweeted over 300,000 times. We encourage media scholars to further analyze its content and removal from Twitter.
Twitter suspended two distinct Russia networks. The first network, which we will call Network 1, "can be reliably tied to Russian state actors," according to Twitter. It consisted of two types of accounts: accounts that claimed to be located in Syria and accounts that spread anti-NATO messaging. Many of these accounts were sockpuppets, claiming to be individuals that did not exist, or fake media fronts. Accounts tweeted in English, Russian, and Arabic; many were nominally multilingual. A handful of the fake accounts in this set were already noted in a September 2020 Graphika report on a Facebook takedown of accounts attributed to the Russian military.
The second network, which we will call Network 2, consisted of 31 accounts created between July 2009 and October 2020. While we use the term "Network 2" to differentiate the data set, it appears to consist of scattered collections of accounts tied to previously-unearthed operations on other social media platforms, including Facebook. Twitter says the accounts in this network "show signs of being affiliated with the Internet Research Agency (IRA) and Russian government-linked actors." Subjects included international affairs of interest to the Russian government, with one cluster focused specifically on Turkey and on disputing the Armenian genocide. The accounts appear to have been linked to the operations primarily via technical indicators rather than through amplification of, or conversation with, one another. Two profiles, belonging to an American activist and a Russian academic, were definitively real people; we do not have sufficient visibility into the technical indicators that led to their inclusion in the network and thus do not include them in our discussion. Several of the accounts in Network 2 were tied to quasi-think tank media properties, a legitimization tactic favored by multiple information operations actors within the Russian intelligence services and by mercenary social media operators linked to Yevgeny Prigozhin (such as the IRA). We discuss those accounts, and the repeated use of this tactic by Russian intelligence services and adjacent actors, in the report.
The two Russian networks reinforce earlier findings about the centrality of fake media outlets and quasi-think tank properties to Russian disinformation campaigns. This is a favored legitimization tactic leveraged by multiple information operations actors within the Russian intelligence services.
Network 1: As Twitter, Facebook, and Medium continue to chip away at the presence of Russian government-linked fake accounts pushing Russia-aligned narratives about Syria and NATO, we find that the activity persists on Telegram and LiveJournal.