Stanford Experts Publish Recommendations to Safeguard Democratic Elections Around the World
A Q&A with Professor Stephen Stedman, who serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age.
At the World Economic Forum in Davos, Switzerland, the Commission, which includes FSI’s Nathaniel Persily, Alex Stamos, and Toomas Ilves, launched a new report, Protecting Electoral Integrity in the Digital Age. The report takes an in-depth look at the challenges democracy faces today and makes a number of recommendations on how best to tackle the threats that social media poses to free and fair elections. On Tuesday, February 25, professors Stedman and Persily will discuss the report’s findings and recommendations during a lunch seminar from 12 to 1:15 PM. To learn more and to RSVP, visit the event page.
Q: What are some of the major findings of the report? Are digital technologies a threat to democracy?
Steve Stedman: Our report suggests that social media and the Internet pose an acute threat to democracy, but probably not in the way that most people assume. Many people believe the problem is a diffuse one: an excess of disinformation and a declining ability of citizens to agree on facts. We too would like the quality of deliberation in our democracy to improve, and we worry about how social media might degrade democratic debate. But if we are talking about existential threats to democracy, the problem is that digital technologies can be weaponized to undermine the integrity of elections.
When we started our work, we were struck by how many pathologies of democracy are said to be caused by social media: political polarization; distrust in fellow citizens, government institutions, and traditional media; the decline of political parties; the degradation of democratic deliberation; and on and on. Social media is said to lessen the quality of democracy because it encourages echo chambers and filter bubbles in which we interact only with those who share our political beliefs. Some platforms are said to encourage extremism through their algorithms.
What we found, instead, is a much more complex problem. Many of the pathologies that social media is said to create – for instance, polarization, distrust, and political sorting – begin their trendlines before the invention of the Internet, let alone the smartphone. Some of the most prominent claims are unsupported by evidence, or are confounded by conflicting evidence. In fact, we say that some assertions simply cannot be judged without access to data held by the tech platforms.
Instead, we rely on the work of scholars like Yochai Benkler and Edda Humphries to argue that not all democracies are equally vulnerable to network propaganda and disinformation. It is precisely where you have high pre-existing affective polarization, low trust, and hyperpartisan media that digital technologies can intensify and amplify polarization.
Elections and toxic polarization are a volatile mix. Weaponized disinformation and hate speech can wreak havoc on elections, even if they don’t alter the vote tallies. This is because democracies require a system of mutual security. In established democracies, political candidates and followers take it for granted that if they lose an election, they will be free to organize and contest future elections. They are confident that the winners will not use their power to eliminate or disenfranchise them. Winners expect to hold power temporarily and accept that they cannot change the rules of competition to stay in power forever. In short, mutual security is a set of beliefs and norms that turns elections from a one-shot game into a repeated game with a long shadow of the future.
In a situation already marred by toxic polarization, we fear that weaponized disinformation and hate speech can convince parties and their followers that the other side no longer accepts the rules of mutual security. The stakes become higher. Followers begin to believe that losing an election means losing forever. The temptation to cheat and to use violence increases dramatically.
Q: Regarding political advertising, the report encourages platforms to provide more transparency about who is funding it. It also asks that platforms require candidates to pledge to avoid deceptive campaign practices when purchasing ads, and it goes as far as to recommend financial penalties for a platform if, for example, a bot spreading information is not labeled as such. Some platforms might argue that this puts an unfair onus on them. How might platforms be encouraged to participate in this effort?
SS: The platforms have a choice: they can contribute to toxic levels of political polarization and the degradation of democratic deliberation, or they can protect electoral integrity and democracy. Many platform employees are alarmed at the state of polarization in this country and don’t want their products to be conduits of weaponized disinformation and hate speech. You saw this in the letter signed by Facebook employees objecting to Mark Zuckerberg’s decision that Facebook would treat political advertising as largely exempt from its community standards. If ever there were a moment in this country when we should demand that our political parties and candidates live up to a higher ethical standard, it is now. Instead, Facebook decided to allow political candidates to pay to run ads even if the ads use disinformation, tell bald-faced lies, engage in hate speech, and use doctored video and audio. Its rationale is that this is all part of “the rough and tumble of politics.” In doing so, Facebook puts itself in a contradictory position: it has hundreds of employees working to stop disinformation and hate speech in elections in Brazil and India, yet it will allow politicians and parties in the United States to buy ads that use disinformation and hate speech.
Our recommendation gives Facebook an option that allows political advertising in a way that need not inflame polarization and destroy mutual security among candidates and followers: 1) require that candidates, groups, or parties who want to pay for political advertising on Facebook sign a pledge of ethical digital practices; 2) then use the pledge’s standards to determine whether an ad qualifies. If an ad uses deep fakes, grotesquely distorts the facts, or out-and-out lies about what an opponent said or did, then Facebook would not accept it. Facebook can either help us raise our electoral politics out of the sewer or ensure that our politics drowns in it.
It’s worth pointing out that the platforms are only one actor in a many-sided problem. Weaponized disinformation is actively spread by unscrupulous politicians and parties; it is used by foreign countries to undermine electoral integrity; and it is often spread and amplified by irresponsible partisan traditional media. Fox News, for example, ran the crazy conspiracy story about Hillary Clinton running a pedophile ring out of a pizza parlor in DC. Individuals around the president, including the son of the first National Security Adviser, tweeted the story.
Q: While many of the recommendations focus on the role of platforms and governments, the report also proposes that public authorities promote digital and media literacy in schools as well as public interest programming for the general population. What might that look like? And how would that type of literacy help protect democracy?
SS: Our report recommends digital literacy programs as a means of building democratic resilience against weaponized disinformation. Having said that, however, the details matter tremendously. Sam Wineburg at Stanford, whom we cite, has extremely insightful ideas for how to teach citizens to evaluate the information they see on the Internet, but even he offers warnings: if done poorly, digital literacy could simply increase citizen distrust of all media, good and bad; and digital literacy in a highly polarized context raises the question of who will decide what counts as good and bad media. We say in passing that in addition to digital literacy, we need to train citizens to understand biased assimilation of information. Digital literacy trains citizens to understand who is behind a piece of information and who benefits from it. But we also need to teach citizens to stand back and ask, “Why am I predisposed to want to believe this piece of information?”
Q: Obviously, access to data is critical for researchers and commissioners to do their work, analysis, and reporting. One of the recommendations asks that public authorities compel major internet platforms to share meaningful data with academic institutions. Why is it so important for platforms and academia to share information?
SS: Some of the most important claims about the effects of social media can’t be evaluated without access to the data. One example we cite in the report is the controversy over whether YouTube’s algorithms radicalize individuals and send them down a rabbit hole of racist, nationalist content. This is a common claim and has appeared on the front page of the New York Times. The research supporting the claim, however, is extremely thin, and other research disputes it. What we say is that we can’t adjudicate this argument unless YouTube shares its data so that researchers can see what the algorithm is doing. There are similar debates concerning the effects of Facebook. One of our commissioners, Nate Persily, has been at the forefront of working with Facebook to provide certified researchers with privacy-protected data through Social Science One. Progress has been so slow that the researchers have lost patience. We hope that governments can step in and compel the platforms to share the data.
Q: This is one of the first reports to look at this problem in the Global South. Is the problem more or less critical there?
SS: Kofi Annan was very concerned that the debate about digital technologies and democracy was far too focused on Europe and the United States. Before Cambridge Analytica’s involvement in the 2016 United States election and the Brexit referendum, its predecessor company had manipulated elections in Asia, Africa, and the Caribbean. There is now a transnational industry in election manipulation.
What we found does not bode well for democracies in the rest of the world. The factors that make democracies vulnerable to network propaganda and weaponized disinformation are often present in the Global South: pre-existing polarization, low trust, and hyperpartisan traditional media. Many of these democracies already have a repertoire of electoral violence.
On the other hand, we did find innovative partnerships in Indonesia and Mexico where Election Management Bodies, civil society organizations, and traditional media cooperated to fight disinformation during elections, often with success. An important recommendation of the report is that greater attention and resources are needed for such efforts to protect electoral integrity in the Global South.
About the Commission on Elections and Democracy in the Digital Age
As one of his last major initiatives, Kofi Annan convened the Commission on Elections and Democracy in the Digital Age in 2018. The Commission includes members from civil society and government, the technology sector, academia, and media; over the course of 2019, they examined and reviewed the opportunities and challenges for electoral integrity created by technological innovations. Assisted by a small secretariat at Stanford University and the Kofi Annan Foundation, the Commission has undertaken extensive consultations and issued recommendations on how new technologies, social media platforms, and communication tools can be harnessed to engage, empower, and educate voters, and to strengthen the integrity of elections. Visit the Kofi Annan Foundation and the Commission on Elections and Democracy in the Digital Age for more on their work.