POSTPONED: Democratic Discord: When Electoral Democracy Creates Social Conflict
Abstract:
Countries retreating into closed systems and deciding to protect only their own groups could prevent international cooperation on climate change issues which is the only way to avert climate catastrophe, says Francis Fukuyama in conversation with Ana Kasparian. Watch here.
"Democrats don’t need to peddle in falsehood or invective to find lively and creative ways to communicate their message of hope, inspiration, and concrete policy alternatives, and to do so with passion and conviction," writes Larry Diamond in The American Interest. Read here.
Abstract: The enormous financial success of online advertising platforms is in large part due to the advanced targeting features they offer. Aleksandra Korolova, WiSE Gabilan Assistant Professor of Computer Science at USC, will discuss recent findings showing how implementations of targeted advertising create new societal concerns related to privacy, manipulation of the vulnerable, and discrimination. Furthermore, Korolova will demonstrate that the ad delivery optimization algorithms run by the platforms can lead to skew in delivery along gender and racial lines, even when such skew was not intended by the advertiser. Korolova will conclude by introducing a new fairness notion, preference-informed fairness, that could serve as a novel step toward formally studying fairness in scenarios such as targeted advertising, where individuals have complex and diverse preferences over possible outcomes.
Based on joint work with I. Faizullabhoy (ConPro 2018), M. Ali, P. Sapiezynski, M. Bogen, A. Mislove, A. Rieke (CSCW 2019), and M. P. Kim, G. Rothblum, G. Yona (ITCS 2020, FAT* 2020).
Seminar Recording: https://youtu.be/Se8UcB6HFNo
About this Event: Based on his recent experience in Kyiv, Ambassador Taylor will evaluate current US policy toward Ukraine and make recommendations for future initiatives. He will argue that now is the time to re-engage with Ukraine to strengthen US-Ukrainian relations and boost US security. He will address the two main threats to the Zelenskyy administration — the Kremlin and corrupt oligarchs.
About the Speaker:
Ambassador William B. Taylor served as the Chargé d'Affaires at the US embassy in Kyiv from June 2019 to January 2020. Previously, he served as the executive vice president at the U.S. Institute of Peace and the special coordinator for Middle East Transitions in the U.S. State Department during the Arab Spring. He served as the U.S. ambassador to Ukraine from 2006 to 2009.
He also served as the U.S. government’s representative to the Mideast Quartet, which facilitated the Israeli disengagement from Gaza and parts of the West Bank, led by Special Envoy James Wolfensohn in Jerusalem. Prior to this assignment, he served in Baghdad as Director, Iraq Reconstruction Management Office (2004-2005), in Kabul as coordinator of USG and international assistance to Afghanistan (2002-2003) and in Washington with the rank of ambassador as coordinator of USG assistance to the former Soviet Union and Eastern Europe (1992-2002).
Ambassador Taylor spent five years in Brussels as the Special Deputy Defense Advisor to the U.S. Ambassador to NATO, William Taft, and earlier directed an in-house Defense Department think tank at Fort McNair in Washington, D.C. He served for five years on the staff of Senator Bill Bradley and earlier directed the Department of Energy’s Office of Emergency Preparedness.
In the Army, he fought in Vietnam as a rifle platoon leader and combat company commander in the 101st Airborne Division and flew reconnaissance missions along the West German border with Czechoslovakia in the 2nd Armored Cavalry Regiment.
The Shorenstein Asia-Pacific Research Center cordially invites its faculty, scholars, staff, affiliates, and their families to join APARC's first International Potluck Day! Join us to celebrate the diversity of APARC through a multicultural smorgasbord of food. Bring a dish from your home country or family heritage to share with the APARC community as we take the time to mix, mingle, and celebrate the diversity that makes APARC special.
"Freedom is inseparable from human dignity," says Larry Diamond for Bertelsmann Foundation talks on "How to Fix Democracy." The crisis is “bad, deepening, accelerating,” but he suggests several steps we can take to reverse the trend, such as ranked-choice voting to tackle the two-party system, and spreading “motor voter” laws to increase the number of registered voters. Watch the video here.
The research on misinformation generally, and fake news specifically, is vast, as is coverage in media outlets. Two questions run throughout both the academic and public discourse: what explains the spread of fake news online, and what can be done about it? While there is substantial literature on who is likely to be exposed to and share fake news, exposure and sharing do not necessarily signal belief or persuasive effect. Conversely, there is far less work on who is able to differentiate between true and false stories and, as a result, who is most likely to believe fake news (or, conversely, disbelieve true news), a question that speaks directly to Facebook’s recent “community review” approach to combating the spread of fake news on its platform.
In his talk, Professor Tucker will report on initial findings from a new collaborative project between NYU’s Center for Social Media and Politics and Stanford’s Program on Democracy and the Internet designed to fill these gaps in the scholarly literature and inform the types of policy decisions being made by Facebook. The project has enlisted both professional fact checkers and random “crowds” of close to 100 people to fact check five “fresh” articles (that have appeared in the past 24 hours) per day, four days a week, for eight weeks, using an innovative, transparent, and replicable algorithm for selecting the articles for fact checking. He will report on initial observations regarding (a) individual determinants of fact-checking proficiency; (b) the viability of using the “wisdom of the crowds” for fact checking, including examining the tradeoffs between crafting a more accurate crowd vs. a more representative crowd; and (c) results from experiments designed to assess potential policy interventions to improve crowdsourcing accuracy.
A Q&A with Professor Stephen Stedman, who serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age.
At the World Economic Forum in Davos, Switzerland, the Commission, which includes FSI’s Nathaniel Persily, Alex Stamos, and Toomas Ilves, launched a new report, Protecting Electoral Integrity in the Digital Age. The report takes an in-depth look at the challenges faced by democracy today and makes a number of recommendations as to how best to tackle the threats posed by social media to free and fair elections. On Tuesday, February 25, professors Stedman and Persily will discuss the report’s findings and recommendations during a lunch seminar from 12 to 1:15 PM. To learn more and to RSVP, visit the event page.
Q: What are some of the major findings of the report? Are digital technologies a threat to democracy?
Steve Stedman: Our report suggests that social media and the Internet pose an acute threat to democracy, but probably not in the way that most people assume. Many people believe that the problem is a diffuse one based on excess disinformation and a decline in the ability of citizens to agree on facts. We too would like the quality of deliberation in our democracy to improve and we worry about how social media might degrade democratic debate, but if we are talking about existential threats to democracy the problem is that digital technologies can be weaponized to undermine the integrity of elections.
When we started our work, we were struck by how many pathologies of democracy are said to be caused by social media: political polarization; distrust in fellow citizens, government institutions, and traditional media; the decline of political parties; the degradation of democratic deliberation; and on and on. Social media is said to lessen the quality of democracy because it encourages echo chambers and filter bubbles where we only interact with those who share our political beliefs. Some platforms are said to encourage extremism through their algorithms.
What we found, instead, is a much more complex problem. Many of the pathologies that social media are said to create – for instance, polarization, distrust, and political sorting – begin their trendlines before the invention of the Internet, let alone the smartphone. Some of the most prominent claims are unsupported by evidence, or are confounded by conflicting evidence. In fact, we say that some assertions simply cannot be judged without access to data held by the tech platforms.
Instead, we rely on the work of scholars like Yochai Benkler and Edda Humphries to argue that not all democracies are equally vulnerable to network propaganda and disinformation. It is precisely where you have high pre-existing affective polarization, low trust, and hyperpartisan media, that digital technologies can intensify and amplify polarization.
Elections and toxic polarization are a volatile mix. Weaponized disinformation and hate speech can wreak havoc on elections, even if they don’t alter the vote tallies. This is because democracies require a system of mutual security. In established democracies political candidates and followers take it for granted that if they lose an election, they will be free to organize and contest future elections. They are confident that the winners will not use their power to eliminate them or disenfranchise them. Winners have the expectation that they hold power temporarily, and accept that they cannot change the rules of competition to stay in power forever. In short, mutual security is a set of beliefs and norms that turn elections from being a one-shot game into a repeated game with a long shadow of the future.
In a situation already marred by toxic polarization, we fear that weaponized disinformation and hate speech can cause parties and followers to believe that the other side doesn’t believe in the rules of mutual security. The stakes become higher. Followers begin to believe that losing an election means losing forever. The temptation to cheat and use violence increases dramatically.
Q: As far as political advertising, the report encourages platforms to provide more transparency about who is funding that advertising. But it also asks that platforms require candidates to make a pledge that they will avoid deceptive campaign practices when purchasing ads. It also goes as far as to recommend financial penalties for a platform if, for example, a bot spreading information is not labelled as such. Some platforms might argue that this puts an unfair onus on them. How might platforms be encouraged to participate in this effort?
SS: The platforms have a choice: they can contribute to toxic levels of political polarization and the degradation of democratic deliberation, or they can protect electoral integrity and democracy. There are a lot of employees of the platforms who are alarmed at the state of polarization in this country and don’t want their products to be conduits of weaponized disinformation and hate speech. You saw this in the letter signed by Facebook employees objecting to the decision by Mark Zuckerberg that Facebook would treat political advertising as largely exempt from its community standards. If ever there were a moment in this country when we should demand that our political parties and candidates live up to a higher ethical standard, it is now. Instead, Facebook decided to allow political candidates to pay to run ads even if the ads use disinformation, tell bald-faced lies, engage in hate speech, and use doctored video and audio. Its rationale is that this is all part of “the rough and tumble of politics.” In doing so, Facebook is in the contradictory position of having hundreds of employees working to stop disinformation and hate speech in elections in Brazil and India, while allowing politicians and parties in the United States to buy ads that can use disinformation and hate speech.
Our recommendation gives Facebook an option that allows political advertisement in a way that need not inflame polarization and destroy mutual security among candidates and followers: 1.) Require that candidates, groups, or parties who want to pay for political advertising on Facebook sign a pledge of ethical digital practices; 2.) Then use those standards to determine whether an ad meets the pledge. If an ad uses deep fakes, if an ad grotesquely distorts the facts, if an ad out-and-out lies about what an opponent said or did, then Facebook would not accept the ad. Facebook can either help us raise our electoral politics out of the sewer or it can ensure that our politics drowns in it.
It’s worth pointing out that the platforms are only one actor in a many-sided problem. Weaponized disinformation is actively spread by unscrupulous politicians and parties; it is used by foreign countries to undermine electoral integrity; and it is often spread and amplified by irresponsible partisan traditional media. Fox News, for example, ran the crazy conspiracy story about Hillary Clinton running a pedophile ring out of a pizza parlor in DC. Individuals around the president, including the son of the first National Security Adviser, tweeted the story.
Q: While many of the recommendations focus on the role of platforms and governments, the report also proposes that public authorities promote digital and media literacy in schools as well as public interest programming for the general population. What might that look like? And how would that type of literacy help protect democracy?
SS: Our report recommends digital literacy programs as a means to help build democratic resilience against weaponized disinformation. Having said that, however, the details matter tremendously. Sam Wineburg at Stanford, whom we cite, has extremely insightful ideas for how to teach citizens to evaluate the information they see on the Internet, but even he puts forward warnings: if done poorly, digital literacy could simply increase citizen distrust of all media, good and bad; and digital literacy in a highly polarized context raises the question of who will decide what is good and bad media. We say in passing that in addition to digital literacy we need to train citizens to understand biased assimilation of information. Digital literacy trains citizens to understand who is behind a piece of information and who benefits from it. But we also need to teach citizens to stand back and ask, “why am I predisposed to want to believe this piece of information?”
Q: Obviously access to data is critical for researchers and commissioners to do their work, analysis and reporting. One of the recommendations asks that public authorities compel major internet platforms to share meaningful data with academic institutions. Why is it so important for platforms and academia to share information?
SS: Some of the most important claims about the effects of social media can’t be evaluated without access to the data. One example we cite in the report is the controversy about whether YouTube’s algorithms radicalize individuals and send them down a rabbit hole of racist, nationalist content. This is a common claim and has appeared on the front pages of the New York Times. The research supporting the claim, however, is extremely thin, and other research disputes it. What we say is that we can’t adjudicate this argument unless YouTube were to share its data, so that researchers can see what the algorithm is doing. There are similar debates concerning the effects of Facebook. One of our commissioners, Nate Persily, has been at the forefront of working with Facebook to provide certified researchers with privacy protected data – Social Science One. Progress has been so slow that the researchers have lost patience. We hope that governments can step in and compel the platforms to share the data.
Q: This is one of the first reports to look at this problem in the Global South. Is the problem more or less critical there?
SS: Kofi Annan was very concerned that the debate about digital technologies and democracy was far too focused on Europe and the United States. Before Cambridge Analytica’s involvement in the United States and Brexit elections of 2016, its predecessor company had manipulated elections in Asia, Africa and the Caribbean. There is now a transnational industry in election manipulation.
What we found does not bode well for democracies in the rest of the world. The factors that make democracies vulnerable to network propaganda and weaponized disinformation are often present in the Global South: pre-existing polarization, low trust, and hyperpartisan traditional media. Many of these democracies already have a repertoire of electoral violence.
On the other hand, we did find innovative partnerships in Indonesia and Mexico where Election Management Bodies, civil society organizations, and traditional media cooperated to fight disinformation during elections, often with success. An important recommendation of the report is that greater attention and resources are needed for such efforts to protect electoral integrity in the Global South.
About the Commission on Elections and Democracy in the Digital Age
As one of his last major initiatives, in 2018 Kofi Annan convened the Commission on Elections and Democracy in the Digital Age. The Commission includes members from civil society and government, the technology sector, academia, and media; over the course of 2019, they examined and reviewed the opportunities and challenges for electoral integrity created by technological innovations. Assisted by a small secretariat at Stanford University and the Kofi Annan Foundation, the Commission has undertaken extensive consultations and issued recommendations as to how new technologies, social media platforms, and communication tools can be harnessed to engage, empower, and educate voters, and to strengthen the integrity of elections. Visit the Kofi Annan Foundation and the Commission on Elections and Democracy in the Digital Age for more on their work.
Seminar Recording: https://youtu.be/yIthWPC99bI
About this Event: Since the United States left the Iran nuclear deal in May 2018, the Trump administration has pursued a maximum economic pressure campaign toward Iran. The U.S. use of sanctions has gone far beyond what previous administrations have done to try to change Iran's policies, targeting large swathes of the Iranian economy, high-ranking Iranian government officials, and threatening other countries if they do not curtail their own private sector's activities with Iran. The economic consequences of these measures, particularly for Iran's domestic economy, Iran's ability to procure food and medicine from abroad, and for Iran's flagship energy industry, have been profoundly disruptive. The U.S. economic pressure strategy has also had direct impacts on the global shipping and energy industries. To better understand the impacts of the current U.S. strategy toward Iran, Elizabeth Rosenberg will discuss how the Trump administration has used unprecedented economic coercion, and how U.S. partners and adversaries have responded. She will focus on what role sanctions are likely to play going forward and whether they will be used now as a form of deescalation or escalation in U.S.-Iran tensions, which are particularly heightened following the U.S. killing of Qods Force commander Qasem Soleimani.
About the Speaker: Elizabeth Rosenberg is a Senior Fellow and Director of the Energy, Economics, and Security Program at the Center for a New American Security. In this capacity, she publishes and speaks on the national security and foreign policy implications of the use of sanctions and economic statecraft as well as energy market shifts. Current geographic areas of focus include Iran, Russia, China, North Korea, and Venezuela. She has testified before Congress on an array of banking and trade issues, and on energy geopolitics and markets topics. She is widely quoted by leading media outlets in the United States and abroad.
From May 2009 through September 2013, Ms. Rosenberg served as a Senior Advisor at the U.S. Department of the Treasury, to the Assistant Secretary for Terrorist Financing and Financial Crimes, and then to the Under Secretary for Terrorism and Financial Intelligence. In these senior roles, she helped to develop and implement financial and energy sanctions. Key initiatives she helped to oversee include the tightening of global sanctions on Iran, the launching of new, comprehensive sanctions against Libya and Syria and modification of Burma sanctions in step with normalization of diplomatic relations. She also helped to formulate anti-money laundering and counter-terrorist and counter-proliferation financing policy and oversee financial regulatory enforcement activities.
Prior to her service in the U.S. government, Ms. Rosenberg was an energy policy correspondent at Argus Media in Washington, D.C., analyzing U.S. and Middle Eastern energy policy, regulation, and trading. She spoke and published extensively on OPEC, strategic reserves, energy sanctions and national security policy, oil and natural gas investment and production, and renewable fuels.
Ms. Rosenberg received an MA in Near Eastern Studies from New York University and a BA in Politics and Religion from Oberlin College.
Outside CNAS, Elizabeth Rosenberg is providing exclusive advice on foreign policy and national security as an informal advisor to the Elizabeth Warren campaign.