Takedown

On August 4, 2022, Meta announced the removal of a network of Facebook and Instagram accounts that posted about Palestinian, Angolan, and Nigerian politics. This report groups findings by these three topical clusters; however, Meta linked the entire network to a single entity, Mind Force, an Israeli public relations firm, and there is no evidence that the clusters were operationally distinct. The network included 42 Pages, nine groups, 259 profiles, and 107 Instagram accounts. Meta suspended the network not for the content of its posts but for coordinated inauthentic behavior. Meta shared a portion of this network’s activity with the Stanford Internet Observatory on June 13, 2022.

Publication type: Case study
Authors: Renee DiResta, Shelby Grossman

This report explores the narratives and tactics of two distinct China-linked datasets from Twitter’s December 2021 inauthentic network takedowns. The first network, which we call CNHU, consisted of fake and coordinated accounts that boosted CCP narratives about Xinjiang and the minority Uyghur population, shared positive information about the local government’s efforts to fight COVID-19, and amplified Chinese state media. Twitter attributed this activity to the Chinese government generally. The second network, which we call CNCC, also focused on communicating official CCP narratives about Xinjiang, with a particular emphasis on reaching international audiences through purported first-person video testimonials from Uyghur individuals describing their lives. Twitter attributed this takedown specifically to Changyu Culture, a private company acting on behalf of the government, making it the first China-linked takedown that Twitter has attributed to a specific private company within the country.

Publication type: Case study
Publisher: Stanford Internet Observatory
Authors: Renee DiResta, Josh A. Goldstein, Carly Miller, Harvey Wang, SY, MD

On December 2, 2021, Twitter announced that it had suspended a network of 268 accounts that were supportive of the Tanzanian government and had used copyright reporting adversarially to target accounts belonging to Tanzanian activists. According to Twitter, many of the African personas used in this campaign were previously Russian personas, suggesting the operation may have been partially outsourced to a Russian-speaking country. The adversarial reporting in Tanzania was observed by Access Now in October 2020 and reported by the BBC in December 2020. Coordinated adversarial account reporting is not unique to Twitter: in August 2020, Facebook suspended a Pakistani network of accounts, Pages, and private Groups that coordinated the reporting of Facebook accounts critical of Islam and the Pakistani government, and that leveraged a Chrome extension to report accounts in bulk. The Tanzanian operation worked by taking text or images tweeted by accounts critical of the ruling Chama Cha Mapinduzi (CCM) party, republishing the same content on WordPress sites, and backdating the posts to make it appear as if they preceded the tweets. Fake accounts posing as Tanzanians or South Africans then reported the original tweets to Twitter as violations of the US Digital Millennium Copyright Act (DMCA), and Twitter notified the targeted accounts of the accusations. To contest such an accusation, however, the accused must share personal information; at least one of the targeted accounts relied on anonymity for safety, making it difficult to formally counter the attacks. The operation succeeded in getting Twitter to suspend at least two of the targeted accounts, though both have since been reinstated. Parts of the network served more simply to send harassing tweets to Tanzanian activists, the political opposition, and foreign media. These tweets often read as if they were written by a child, saying, for example, “you have an empty head,” or were simply a nonsensical series of letters and numbers.
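The backdating trick is checkable in principle: a tweet’s creation time is fixed by Twitter, while a WordPress post’s claimed publication date can be compared against independent evidence such as the post’s earliest Wayback Machine capture. The sketch below illustrates one such consistency check. It is not the verification workflow used by Twitter or by the researchers who documented these cases, and the input fields (tweet_created_at, claimed_post_date, post_url) are hypothetical.

```python
"""Flag WordPress posts whose claimed publication date may be backdated.

Illustrative sketch only: a post that claims to predate a tweet but has no
Wayback Machine capture from before the tweet existed is a weak signal
consistent with backdating, not proof of it.
"""
from datetime import datetime, timezone
import requests

CDX_API = "http://web.archive.org/cdx/search/cdx"

def earliest_capture(url: str) -> datetime | None:
    """Return the earliest Wayback Machine capture time for a URL, if any."""
    params = {"url": url, "output": "json", "fl": "timestamp", "limit": 1}
    resp = requests.get(CDX_API, params=params, timeout=30)
    rows = resp.json() if resp.text.strip() else []
    if len(rows) < 2:  # first row is the CDX header; anything less means no captures
        return None
    return datetime.strptime(rows[1][0], "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)

def looks_backdated(tweet_created_at: datetime,
                    claimed_post_date: datetime,
                    post_url: str) -> bool:
    """True if the post claims priority over the tweet but was never archived
    before the tweet's creation time (hypothetical inputs)."""
    if claimed_post_date >= tweet_created_at:
        return False  # the post does not claim to predate the tweet
    first_seen = earliest_capture(post_url)
    return first_seen is None or first_seen > tweet_created_at
```

An absent or late first capture is only a weak signal, since many pages are never archived; in practice it would need to be combined with other evidence, such as sitemap entries or CMS metadata.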

Publication type: Case study
Publisher: Stanford Internet Observatory
Authors: Shelby Grossman, Christopher Giles, Cynthia N. M., R. Miles McCain, Blair Read

On December 2, 2021, Twitter announced that it had suspended a network of accounts that engaged in a political spam operation in support of the Venezuelan government. According to Twitter’s attribution language, real people were encouraged to engage in spammy behaviors to show their support for Nicolás Maduro and his political party, and financial compensation may have been offered to accounts that sufficiently bolstered Maduro’s messaging. Our assessment of the shared accounts suggests that the set is more accurately characterized as four or five distinct groups, linked to one another only by mentions of common public figures or popular hashtags and by behavior that violates similar parts of Twitter’s policies. The network included accounts that reported locations in Venezuela, Colombia, and Mexico and that engaged in automated tweeting through the use of bots and feeds. The three regional groups were distinct and tweeted about different topics. We could not verify that these accounts were directly linked to tweet-for-hire schemes, although the Venezuelan accounts exhibited behaviors described in prior reporting on this tactic. Other accounts in the network behaved more like commercial tweet-for-hire accounts, promoting a mix of commercial brands and political hashtags. The accounts that reported their location as Mexico amplified support for regional Mexican politicians. Shortly before the network was suspended, a small cluster of new accounts tweeted furiously for the release of Alex Saab, a close ally of the Venezuelan president who was recently extradited from Cape Verde and is currently awaiting trial in Miami on charges of money laundering.
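The “four or five distinct groups” characterization reflects the observation that the accounts were held together mainly by a handful of very popular hashtags and public figures. A rough, generic way to surface that kind of structure is to connect accounts only through hashtags that are not near-universal in the dataset and then inspect the connected components. The sketch below illustrates the idea; it is not the method behind this assessment, and the account_hashtags input and the popularity cutoff are hypothetical.

```python
"""Group accounts into components that share uncommon hashtags.

Generic illustration, not the methodology behind this report: hashtags used
by more than `max_share` of all accounts are treated as too popular to count
as a link, so groups tied together only by ubiquitous tags fall apart into
separate components.
"""
from collections import defaultdict

def cluster_accounts(account_hashtags: dict[str, set[str]],
                     max_share: float = 0.5) -> list[set[str]]:
    """account_hashtags is a hypothetical mapping: account id -> hashtags used."""
    accounts = list(account_hashtags)
    parent = {a: a for a in accounts}

    def find(a: str) -> str:
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a: str, b: str) -> None:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Index accounts by hashtag, case-insensitively.
    by_tag: dict[str, list[str]] = defaultdict(list)
    for account, tags in account_hashtags.items():
        for tag in tags:
            by_tag[tag.lower()].append(account)

    # Link accounts only through hashtags that are not near-universal.
    cutoff = max_share * len(accounts)
    for users in by_tag.values():
        if len(users) <= cutoff:
            for other in users[1:]:
                union(users[0], other)

    components: dict[str, set[str]] = defaultdict(set)
    for account in accounts:
        components[find(account)].add(account)
    return list(components.values())
```

Raising or lowering max_share changes how readily clusters merge; the point is only that the apparent cohesion of the set weakens once the most popular hashtags are discounted.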

Publication type: Case study
Publisher: Stanford Internet Observatory
Authors: Ronald E. Robertson, Noah Schechter

On December 2, 2021, Twitter announced that it had suspended a network of 276 accounts with ties to Mexico. Twitter stated that the suspended network contained inauthentic accounts that primarily shared civic content in support of political parties and government initiatives related to public health. Twitter shared this network with the Stanford Internet Observatory on September 12, 2021. SIO’s analysis found that the accounts engaged in some level of coordinated posting, handle switching, and cheerleading for the Mexican president. Many of the accounts showed support for brands and entities under the umbrella of the Mexican conglomerate Grupo Salinas, which is owned by Ricardo Salinas Pliego, an ally of President Andrés Manuel López Obrador. Those accounts trolled some of Salinas Pliego’s and López Obrador’s opponents and defended Grupo Salinas’s justifications for keeping stores open during lockdown. In our analysis, the network’s activity was concentrated in 2019 and 2020 and did not show clear ties to political candidates or races in Mexico’s 2021 midterm elections, although we encourage further exploration.
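Coordinated posting of this kind typically shows up as identical or near-identical text appearing across many accounts within a short window. The sketch below is a generic illustration of that signal rather than the analysis used for this case; the tweets input and both thresholds are hypothetical.

```python
"""Flag texts posted by many distinct accounts within a short time window.

Generic illustration of a co-posting signal; thresholds are arbitrary and the
`tweets` input (account id, creation time, text) is hypothetical.
"""
from datetime import datetime, timedelta
import re

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivially edited copies still match."""
    return re.sub(r"\s+", " ", text).strip().lower()

def coordinated_texts(tweets: list[tuple[str, datetime, str]],
                      min_accounts: int = 5,
                      window: timedelta = timedelta(minutes=10)) -> list[str]:
    # Group (timestamp, account) pairs by normalized tweet text.
    by_text: dict[str, list[tuple[datetime, str]]] = {}
    for account, created_at, text in tweets:
        by_text.setdefault(normalize(text), []).append((created_at, account))

    flagged = []
    for text, posts in by_text.items():
        posts.sort()
        # Slide over the sorted timestamps: if enough distinct accounts posted
        # this exact text within `window` of each other, flag it.
        for i, (start, _) in enumerate(posts):
            accounts = {a for t, a in posts[i:] if t - start <= window}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

Copy-and-paste text is only one coordination signal; handle switching, for example, would instead be inferred from snapshots of account metadata over time.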


Publication type: Case study
Publisher: Stanford Internet Observatory
Authors: Sean Gallagher

On December 2, 2021, Twitter announced that it had suspended a network of 50 accounts linked to previously removed Internet Research Agency activity. The network focused on the Central African Republic, the Democratic Republic of the Congo, Libya, Syria, Sudan, Mozambique, and Zimbabwe, and included a mix of accounts representing real people and fake accounts (at least one with an AI-generated profile photo). Twitter assessed that the operation originated in North Africa. The network was most notable for the high proportion of accounts whose tweets were embedded in news articles from the Yevgeny Prigozhin-linked publication RIA FAN (“Federal News Agency”), in some cases the Russian state media outlet Sputnik, and a wider ecosystem of websites around the world. Social media embedding, the practice of incorporating public commentary into news articles, is widely used by credible publications worldwide to provide on-the-ground or “man-on-the-street” perspectives on pivotal issues. In the case of RIA FAN, however, the embedded commentary consisted of tweets from inauthentic accounts tied to influence networks.
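Because embedded tweets leave their status URLs in an article’s markup, it is straightforward to enumerate which articles quote which accounts. The sketch below illustrates that kind of check in a generic way; article_urls and suspect_handles are hypothetical inputs, and the regular expression targets the standard twitter.com status-link format rather than any publisher-specific markup.

```python
"""Find which articles embed tweets from a given set of accounts.

Generic illustration; `article_urls` and `suspect_handles` are hypothetical
inputs, not data from this takedown.
"""
import re
import requests

# Matches twitter.com status links as they appear in standard embed markup.
EMBED_RE = re.compile(
    r"https?://(?:www\.)?twitter\.com/([A-Za-z0-9_]{1,15})/status/(\d+)")

def embedded_tweets(article_url: str) -> set[tuple[str, str]]:
    """Return (handle, tweet_id) pairs referenced in an article's HTML."""
    html = requests.get(article_url, timeout=30).text
    return {(handle.lower(), tweet_id) for handle, tweet_id in EMBED_RE.findall(html)}

def articles_embedding_suspects(article_urls: list[str],
                                suspect_handles: set[str]) -> dict[str, set[tuple[str, str]]]:
    """Map article URL -> embedded (handle, tweet_id) pairs from suspect accounts."""
    suspects = {h.lower() for h in suspect_handles}
    hits: dict[str, set[tuple[str, str]]] = {}
    for url in article_urls:
        matched = {pair for pair in embedded_tweets(url) if pair[0] in suspects}
        if matched:
            hits[url] = matched
    return hits
```

Counting how often suspended handles appear in a publisher’s archive, relative to a baseline of ordinary embedded tweets, is one way to quantify the pattern described above.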


Publication type: Case study
Publisher: Stanford Internet Observatory
Authors: Renee DiResta, Shelby Grossman, Karen Nershi, Khadeja Ramali, Rajeev Sharma