The social network Parler was founded in 2018, at a time when large social media companies such as Facebook and Twitter had begun to moderate more aggressively accounts whose content and behavior violated their terms of service. Parler marketed itself as an alternative, framing its offering as a free-speech-focused network that would not moderate, fact-check, or take down posts. It quickly came to be known as an “alt-platform” (a small social platform) catering to the American right, a label it embraced.
Parler began to receive extensive news coverage during the 2020 election, described as a place where Donald Trump supporters had gathered in response to perceived anti-conservative bias on mainstream social media platforms. It attracted further attention after the election as it appeared to become something of an echo chamber in which claims that the election had been “stolen” from incumbent candidate Donald Trump gained a significant following; these claims included numerous unfounded conspiratorial allegations of deliberate theft and plotting that were fact-checked or otherwise moderated elsewhere. The platform also served as a central coordination point for individuals who stormed the U.S. Capitol on January 6, 2021; numerous FBI investigations have cited examples of planning and incitement that took place on Parler in the days leading up to the riot. Following public outcry and internal investigations, various service providers took action against Parler, which had the effect of taking it offline, at least for the time being.
The Stanford Internet Observatory team began to study dynamics on Parler during our work as part of the Election Integrity Partnership, which focused on assessing voting-related misinformation. While much of our research examined the cross-platform spread of particular hashtags, we also undertook a study of account creation and growth dynamics on Parler, and we discuss some preliminary findings in this report. These include novel findings related to moderation (of particular interest given that a lack of moderation was central to Amazon’s decision to cease hosting the app), as well as engagement and posting statistics and account creation dynamics.
- Parler’s moderation policies indicate that the platform moderated primarily in response to user reports rather than through proactive mechanisms. Based on user profile data, it appears that as of Jan. 9 there were 802 moderators for an estimated 13 million users.
- Many of the most active Parler accounts used integrations, such as RSS feeds, to automate content posting to the platform.
- Networks of fake accounts on Parler were designed to promote commercial off-site content, such as Trump coin scams and OnlyFans profiles.
- Parler’s user growth shows jumps in response to political events in the U.S. and to decisions by other platforms to label or remove content from prominent individuals, including President Trump.
- Several distinct account creation peaks on Parler drew users from Brazil and Saudi Arabia, as well as Japanese-language QAnon accounts, largely in response to increased content labeling and removal on Twitter.