On February 1–2, 2019, Stanford’s Global Digital Policy Incubator (GDPi), ARTICLE 19, and David Kaye, UN Special Rapporteur on the Right to Freedom of Opinion and Expression, convened an international working meeting to discuss a solution to the challenges posed by online content moderation: the creation of multistakeholder social media councils (SMCs). Our report, “Social Media Councils: From Concept to Reality,” explores the outcomes of this meeting and discusses the next steps for social media councils.
In the past decade, social media and other online platforms have rapidly become some of the most important spaces for people to express themselves and share information. Billions of people around the world use these platforms for all forms of expression, from sharing baby pictures with family and friends to organizing anti-government protests. These same spaces, however, are also used to incite violence and racial hatred, to recruit people to terrorist organizations, and to intentionally spread disinformation about important issues like elections or medicine. For this reason, moderating content online has come to the fore as a key challenge for increasingly digitized societies.
This is a particularly challenging problem because the spaces where content is created and posted are owned by private companies, which have to this point made decisions about content based on their community guidelines (CGs) or terms of service (TOS). Yet, as use of these platforms has skyrocketed, they have increasingly become a key component of the public square – the place where people share opinions, ideas, goods, services, and so much more. This development has spurred the need for methods to moderate content that comply with the standards applied to public speech.
Government regulation of platforms is one solution, but as with any situation in which governments regulate speech, it raises serious free speech concerns, and early attempts at regulation have often put platforms in the position of enforcing criminal laws that restrict content. How can we best balance the responsibility to protect free speech online with the need to prevent harmful effects, while factoring in the challenges posed by the private ownership of the digital spaces where these forms of speech appear?
Our working meeting at Stanford in February aimed to tackle this set of challenges. ARTICLE 19’s original proposal recommended the creation of councils at the national level that would serve as appeals bodies for content moderation decisions made by platforms. These national councils would all be governed by a global code of principles grounded in international human rights standards, but those principles would be applied within a local context. Moreover, the national councils would be linked through a global association of councils that would set best practices for the principles and the councils’ work.
The conversations that took place over two days brought up a number of challenges to this model, but also many converging ideas, particularly on the validity and potential benefits of the model itself. This report is an effort to synthesize what was learned during the meeting about the viability of the SMC model, and to provide insight to help move the concept forward. We will begin by discussing the value of a multistakeholder approach to content moderation online, and why this type of model is an important step towards addressing the challenges posed by online content. We will then highlight the areas of convergence from our discussions and explore some of the most critical outstanding questions in depth, before suggesting opportunities for next steps in the development of SMCs.