Social Media and Democracy book symposium

Please join the Cyber Policy Center for a discussion of Social Media and Democracy: The State of the Field and Prospects for Reform, a new book with chapters by scholars and faculty at the Cyber Policy Center. The book explores the emerging multi-disciplinary field of social media and democracy by synthesizing what we know, identifying what we do not know and the obstacles to future research, and charting a course for future inquiry. Chapters by leading scholars cover major topics – from disinformation to hate speech to political advertising – and situate recent developments in the context of key policy questions. In addition, the book canvasses existing reform proposals to address widely perceived threats that social media poses to democracy.

Please note that we will also have a YouTube livestream available for potential overflow or for anyone having issues connecting via Zoom: https://youtu.be/KXtMB-3DlHc


 

AGENDA (subject to change, with Q&A integrated throughout)

  • 9 a.m.: Introduction with Nathaniel Persily, James B. McClatchy Professor of Law at Stanford Law School and Faculty Co-Director of the Stanford Cyber Policy Center, and Joshua A. Tucker, Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University
  • 9:15 a.m.-10:30 a.m.
    • Misinformation, Disinformation, and Online Propaganda with Andrew M. Guess, Assistant Professor of Politics and Public Affairs at Princeton University.
    • Online Hate Speech with Alexandra A. Siegel, Assistant Professor of Political Science at the University of Colorado Boulder
    • Bots and Computational Propaganda: Automation for Communication and Control with Samuel C. Woolley, Assistant Professor at the School of Journalism at the University of Texas at Austin
    • Online Political Advertising in the United States with Travis N. Ridout, Thomas S. Foley Distinguished Professor of Government and Public Policy in the School of Politics, Philosophy and Public Affairs at Washington State University and Co-Director of the Wesleyan Media Project
  • 10:30 a.m.: 10-minute break
  • 10:40 a.m.-11:40 a.m.:
    • Democratic Creative Destruction? The Effect of a Changing Media Landscape on Democracy with Rasmus Kleis Nielsen, Director of the Reuters Institute for the Study of Journalism and Professor of Political Communication at the University of Oxford
    • Misinformation and Its Correction with Adam J. Berinsky, Mitsui Professor of Political Science at the Massachusetts Institute of Technology (MIT) and Director of the MIT Political Experiments Research Lab

    • Comparative Media Regulation in the United States and Europe with Francis Fukuyama, Olivier Nomellini Senior Fellow at the Freeman Spogli Institute for International Studies and the Mosbacher Director of the Center on Democracy, Development, and the Rule of Law at Stanford University and Andrew Grotto, William J. Perry International Security Fellow at the Center for International Security and Cooperation, Research Fellow at the Hoover Institution, and Director of the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center

  • 11:40 a.m.: 10-minute break
  • 11:50 a.m.-12:30 p.m.:
    • Facts and Where to Find Them: Empirical Research on Internet Platforms and Content Moderation with Daphne Keller, Director of the Program on Platform Regulation at the Stanford Cyber Policy Center
    • Democratic Transparency in the Platform Society, with Robert Gorwa, doctoral student in the Department of Politics and International Relations at the University of Oxford
  • 12:30 p.m.: Closing and final Q&A with Nathaniel Persily and Joshua A. Tucker

 


Encina Hall, C148
616 Jane Stanford Way
Stanford, CA 94305

Olivier Nomellini Senior Fellow at the Freeman Spogli Institute for International Studies
Director of the Ford Dorsey Master's in International Policy
Research Affiliate at The Europe Center
Professor by Courtesy, Department of Political Science

Francis Fukuyama is the Olivier Nomellini Senior Fellow at Stanford University's Freeman Spogli Institute for International Studies (FSI), and a faculty member of FSI's Center on Democracy, Development and the Rule of Law (CDDRL). He is also Director of Stanford's Ford Dorsey Master's in International Policy, and a professor (by courtesy) of Political Science.

Dr. Fukuyama has written widely on issues in development and international politics. His 1992 book, The End of History and the Last Man, has appeared in over twenty foreign editions. His book In the Realm of the Last Man: A Memoir will be published in fall 2026.

Francis Fukuyama received his B.A. from Cornell University in classics, and his Ph.D. from Harvard in Political Science. He was a member of the Political Science Department of the RAND Corporation, and of the Policy Planning Staff of the US Department of State. From 1996-2000 he was Omer L. and Nancy Hirst Professor of Public Policy at the School of Public Policy at George Mason University, and from 2001-2010 he was Bernard L. Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies, Johns Hopkins University. He served as a member of the President’s Council on Bioethics from 2001-2004. He is editor-in-chief of American Purpose, an online journal.

Dr. Fukuyama holds honorary doctorates from Connecticut College, Doane College, Doshisha University (Japan), Kansai University (Japan), Aarhus University (Denmark), the Pardee Rand Graduate School, and Adam Mickiewicz University (Poland). He is a non-resident fellow at the Carnegie Endowment for International Peace. He is a member of the Board of Trustees of the Rand Corporation, the Board of Trustees of Freedom House, and the Board of the Volcker Alliance. He is a fellow of the National Academy for Public Administration, a member of the American Political Science Association, and of the Council on Foreign Relations. He is married to Laura Holmgren and has three children.

(October 2025)


CISAC
Stanford University
Encina Hall, C428

Stanford, CA 94305-6165

(650) 723-9866
Andrew Grotto

Andrew J. Grotto is a research scholar at the Center for International Security and Cooperation at Stanford University.

Grotto’s research interests center on the national security and international economic dimensions of America’s global leadership in information technology innovation, and its growing reliance on this innovation for its economic and social life. He is particularly interested in the allocation of responsibility between the government and the private sector for defending against cyber threats, especially as it pertains to critical infrastructure; cyber-enabled information operations as both a threat to, and a tool of statecraft for, liberal democracies; opportunities and constraints facing offensive cyber operations as a tool of statecraft, especially those relating to norms of sovereignty in a digitally connected world; and governance of global trade in information technologies.

Before coming to Stanford, Grotto was the Senior Director for Cybersecurity Policy at the White House in both the Obama and Trump Administrations. His portfolio spanned a range of cyber policy issues, including defense of the financial services, energy, communications, transportation, health care, electoral infrastructure, and other vital critical infrastructure sectors; cybersecurity risk management policies for federal networks; consumer cybersecurity; and cyber incident response policy and incident management. He also coordinated development and execution of technology policy topics with a nexus to cyber policy, such as encryption, surveillance, privacy, and the national security dimensions of artificial intelligence and machine learning. 

At the White House, he played a key role in shaping President Obama’s Cybersecurity National Action Plan and driving its implementation. He was also the principal architect of President Trump’s cybersecurity executive order, “Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure.”

Grotto joined the White House after serving as Senior Advisor for Technology Policy to Commerce Secretary Penny Pritzker, advising Pritzker on all aspects of technology policy, including Internet of Things, net neutrality, privacy, national security reviews of foreign investment in the U.S. technology sector, and international developments affecting the competitiveness of the U.S. technology sector.

Before joining the Executive Branch, Grotto worked on Capitol Hill as a member of the professional staff of the Senate Select Committee on Intelligence. He served as then-Chairman Dianne Feinstein’s lead staffer overseeing cyber-related activities of the intelligence community and all aspects of NSA’s mission. He led the negotiation and drafting of the information sharing title of the Cybersecurity Act of 2012, which later served as the foundation for the Cybersecurity Information Sharing Act that President Obama signed in 2015. He also served as committee designee first for Senator Sheldon Whitehouse and later for Senator Kent Conrad, advising the senators on oversight of the intelligence community, including of covert action programs, and was a contributing author of the “Committee Study of the Central Intelligence Agency’s Detention and Interrogation Program.”

Before his time on Capitol Hill, Grotto was a Senior National Security Analyst at the Center for American Progress, where his research and writing focused on U.S. policy towards nuclear weapons - how to prevent their spread, and their role in U.S. national security strategy.

Grotto received his JD from the University of California at Berkeley, his MPA from Harvard University, and his BA from the University of Kentucky.

Research Scholar, Center for International Security and Cooperation
Director, Program on Geopolitics, Technology, and Governance

Daphne Keller is the Director of Platform Regulation at the Stanford Program in Law, Science, & Technology. Her academic, policy, and popular press writing focuses on platform regulation and Internet users’ rights in the U.S., the EU, and around the world. Her recent work has focused on platform transparency, data collection for artificial intelligence, interoperability models, and “must-carry” obligations. She has testified before legislatures, courts, and regulatory bodies around the world on topics ranging from the practical realities of content moderation to copyright and data protection. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.

FILINGS

  • U.S. Supreme Court amicus brief on behalf of Francis Fukuyama, NetChoice v. Moody (2024)
  • U.S. Supreme Court amicus brief with ACLU, Gonzalez v. Google (2023)
  • Comment to European Commission on data access under EU Digital Services Act
  • U.S. Senate testimony on platform transparency

 


Director of Platform Regulation, Stanford Program in Law, Science & Technology (LST)
Social Science Research Scholar
Stanford Law School, Neukom Building, Room N230, Stanford, CA 94305
650-725-9875
James B. McClatchy Professor of Law at Stanford Law School
Senior Fellow, Freeman Spogli Institute
Professor, by courtesy, Political Science
Professor, by courtesy, Communication

Nathaniel Persily is the James B. McClatchy Professor of Law at Stanford Law School, with appointments in the departments of Political Science and Communication and at the Freeman Spogli Institute for International Studies (FSI). Prior to joining Stanford, Professor Persily taught at Columbia and the University of Pennsylvania Law School, and as a visiting professor at Harvard, NYU, Princeton, the University of Amsterdam, and the University of Melbourne. Professor Persily’s scholarship and legal practice focus on American election law, or what is sometimes called the “law of democracy,” which addresses issues such as voting rights, political parties, campaign finance, redistricting, and election administration. He has served as a special master or court-appointed expert to craft congressional or legislative districting plans for Georgia, Maryland, Connecticut, New York, North Carolina, and Pennsylvania. He also served as Senior Research Director for the Presidential Commission on Election Administration. In addition to dozens of articles (many of which have been cited by the Supreme Court) on the legal regulation of political parties, issues surrounding the census and redistricting process, voting rights, and campaign finance reform, Professor Persily is coauthor of the leading election law casebook, The Law of Democracy (Foundation Press, 5th ed., 2016), with Samuel Issacharoff, Pamela Karlan, and Richard Pildes. His current work, for which he has been honored as a Guggenheim Fellow, an Andrew Carnegie Fellow, and a Fellow at the Center for Advanced Study in the Behavioral Sciences, examines the impact of changing technology on political communication, campaigns, and election administration. He is co-director of the Stanford Program on Democracy and the Internet and of Social Science One, a project to make privacy-protected Facebook data available to the world’s research community to study the impact of social media on democracy.
He is also a member of the American Academy of Arts and Sciences and a commissioner on the Kofi Annan Commission on Elections and Democracy in the Digital Age. Along with Professor Charles Stewart III, he recently founded HealthyElections.Org (the Stanford-MIT Healthy Elections Project), which aims to support local election officials in taking the necessary steps during the COVID-19 pandemic to provide safe voting options for the 2020 election. He received a B.A. and M.A. in political science from Yale (1992), a J.D. from Stanford (1998), where he was President of the Stanford Law Review, and a Ph.D. in political science from U.C. Berkeley (2002).

Avi Tuschman, Adam Berinsky, David Rand

Please join the Cyber Policy Center for Exploring Potential “Solutions” to Online Disinformation, hosted by the Cyber Policy Center’s Kelly Born, with guests Adam Berinsky, Mitsui Professor of Political Science at MIT and Director of the MIT Political Experiments Research Lab (PERL); David Rand, Erwin H. Schell Professor, Associate Professor of Management Science and Brain and Cognitive Sciences, and Director of the Human Cooperation Laboratory and the Applied Cooperation Team at MIT; and Avi Tuschman, Founder and Chief Innovation Officer of Pinpoint Predictive. The session is open, but registration is required.

Adam Berinsky is the Mitsui Professor of Political Science at MIT and serves as the director of the MIT Political Experiments Research Lab (PERL). He is also a Faculty Affiliate at the Institute for Data, Systems, and Society (IDSS). Berinsky received his PhD from the University of Michigan in 2000. He is the author of "In Time of War: Understanding American Public Opinion from World War II to Iraq" (University of Chicago Press, 2009). He is also the author of "Silent Voices: Public Opinion and Political Participation in America" (Princeton University Press, 2004) and has published articles in many journals. He is currently the co-editor of the Chicago Studies in American Politics book series at the University of Chicago Press. He is also the recipient of multiple grants from the National Science Foundation and was a fellow at the Center for Advanced Study in the Behavioral Sciences.

David Rand is the Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team. Bridging the fields of behavioral economics and psychology, David’s research combines mathematical/computational models with human behavioral experiments and online/field studies to understand human behavior. His work uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making, and explores topics such as cooperation/prosociality, punishment/condemnation, perceived accuracy of false or misleading news stories, political preferences, and the dynamics of social media platform behavior. 

Avi Tuschman is a Stanford StartX entrepreneur and the founder of Pinpoint Predictive, where he currently serves as Chief Innovation Officer and Board Director. He has spent the past five years developing the first Psychometric AI-powered data-enrichment platform, which ranks 260 million individuals for performance marketing and risk management applications. Tuschman is an expert on the science of heritable psychometric traits. His book and research on human political orientation have been covered in peer-reviewed and mainstream media in 25 countries. Prior to his career in tech, he advised current and former heads of state as well as multilateral development banks in the Western Hemisphere. Tuschman completed his undergraduate and doctoral degrees in evolutionary anthropology at Stanford.

George G.C. Parker Professor of Finance and Economics, Stanford Graduate School of Business
Director of the Corporations and Society Initiative, Stanford Graduate School of Business
Director of the Program on Capitalism and Democracy, Center on Democracy, Development and the Rule of Law
Senior Fellow, Stanford Institute for Economic Policy Research
Senior Fellow (by courtesy), Freeman Spogli Institute for International Studies

Anat R. Admati is the George G.C. Parker Professor of Finance and Economics at the Stanford University Graduate School of Business (GSB), a faculty director of the GSB Corporations and Society Initiative, and a senior fellow at the Stanford Institute for Economic Policy Research. She has written extensively on information dissemination in financial markets, portfolio management, financial contracting, corporate governance, and banking. Admati’s current research, teaching, and advocacy focus on the complex interactions between business, law, and policy, particularly on governance and accountability.

Since 2010, Admati has been active in the policy debate on financial regulations. She is the co-author, with Martin Hellwig, of the award-winning and highly acclaimed book The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It (Princeton University Press, 2013; bankersnewclothes.com). In 2014, she was named by Time Magazine as one of the 100 most influential people in the world and by Foreign Policy Magazine as among 100 global thinkers.

Admati holds a BSc from the Hebrew University, an MA, MPhil, and PhD from Yale University, and an honorary doctorate from the University of Zurich. She is a fellow of the Econometric Society, the recipient of multiple fellowships, research grants, and paper awards, and a past board member of the American Finance Association. She has served on a number of editorial boards and is a member of the FDIC’s Systemic Resolution Advisory Committee, a former member of the CFTC’s Market Risk Advisory Committee, and a former visiting scholar at the International Monetary Fund.


Marietje Schaake

 

  

DOWNLOAD THE PAPER 

 

The European Union is often called a ‘super-regulator’, especially when it comes to data-protection and privacy rules. Having seen European lawmaking up close, in all its complexity, I have often considered this label an exaggeration. Yes, the European Union frequently takes the first steps to ensure that principles remain protected even as digitization disrupts. However, the speed with which technology evolves, set against the pace of democratic lawmaking, leads to perpetual mismatches.

Even the famous, or infamous, General Data Protection Regulation does not meet many essential regulatory needs of the moment. The mainstreaming of artificial intelligence, in particular, poses new challenges to the protection of rights and the sustaining of the rule of law. In its White Paper on Artificial Intelligence, as well as in the Data Strategy, the European Commission references the common good, the public interest, and societal needs, as opposed to an emphasis on regulating the digital market. These are welcome steps toward acknowledging the depth and scope of technological impact and defining harms not just in economic terms. It remains to be seen how the visions articulated in the White Paper and the Strategy will translate into concrete legislation.

One proposal for concrete improvements to legal frameworks is outlined by Martin Tisné in The Data Delusion. He highlights the need to update legal privacy standards to better reflect the harms incurred through collective data analysis, as opposed to individual privacy violations. Martin makes a clear case for addressing the discrepancy between the profit models that benefit from grouped data and the ability of any individual to prove the harms caused to his or her rights.

The lack of transparency into the inner workings of algorithmic data processing further hinders the path to much-needed accountability for the powerful technology businesses that operate growing parts of our information architecture and the data flows they process.

While the EU takes the lead in setting values-based standards and rules for the digital layer of our societies and economies, much work remains to be done.

Marietje Schaake: Martin, in your paper you address the gap between the benefits for technology companies through collective data processing, and the harms for society. You point to historic reasons for individual privacy protections in European laws. Do you consider the European Union to be the best positioned to address the legal shortcomings, especially as you point out that some opportunities to do so were missed in the GDPR?

Martin Tisné: Europe is well positioned, but perhaps not for the reasons we traditionally think of (a strong privacy tradition, empowered regulators). Individual privacy alone is a necessary, but not sufficient, foundation stone on which to build the future of AI regulation. And whilst much is made of European regulators, the GDPR has been hobbled by the lack of funding and capacity of data protection commissioners across Europe. What Europe does have, though, is a legal, political, and societal tradition of thinking about the public interest and the common good, and how these are balanced against individual interests. This is where we should innovate, taking inspiration from environmental litigation such as the Urgenda climate case against the Dutch government, which established that the government had a legal duty, in the name of the public interest, to prevent dangerous climate change.

And Europe also has a lot to learn from other political and legal cultures. Part of the future of data regulation may come from the indigenous data rights movement, with its greater emphasis on the societal and group impacts of data, or from the concept of Ubuntu ethics, which assigns community and personhood to all people.

Schaake: What scenario do you foresee in 10 years if collective harms are not dealt with in updates of laws? 

Tisné: I worry we will see two impacts. The first is a continuation of what we are seeing now: negative impacts of digital technologies on discrimination, voting rights, privacy, and consumers. As people become increasingly aware of the problem, there will be a corresponding increase in legal challenges. We’re seeing this already, for example, with the Lloyd class action case against Google for collecting iPhone data. But I worry these will fail to stick and have lasting impact because of the obligation to have these cases turn on one person’s, or a class of people’s, individual experiences. It is very hard for individuals to seek remedy for collective harms, as opposed to personal privacy invasions. So unless we solve the issue I raise in the paper – the collective impact of AI and automation – these harms will continue to fuel polarization, discrimination on the basis of age, gender, and many other aspects of our lives, and the further strengthening of populist regimes.

I also worry about the ways in which algorithms will optimize on the basis of seemingly random classifications (e.g., “people who wear blue shirts, get up early on Saturday mornings, and were geo-located in a particular area of town at a particular time”). These may be proxies for protected characteristics (age, gender reassignment, disability, race, religion, sex, marriage, pregnancy/maternity, sexual orientation) and so provide grounds for redress. They may also not be, and may instead sow the seeds of future discrimination and harms. Authoritarian rulers are likely to take advantage of the seeming invisibility of these data-driven harms to further silence their opponents. How can I protect myself if I don’t know the basis on which I am being discriminated against or targeted?

Schaake: How do you reflect on the difference in speed between technological innovations and democratic lawmaking? Some people imply this will give authoritarian regimes an advantage in setting global standards and rules. What are your thoughts on ensuring democratic governments speed up? 

Tisné: Democracies cannot afford to be outpaced by technological innovation, constantly fighting yesterday’s wars. Our laws have not changed to reflect changes in technology, which now extracts value from collective data, and they need to catch up. A lot of the problems stem from the fact that in government (as in companies), the people responsible for enforcement are separated from those with the technical understanding. The solution lies in much better translation between technology, policy, and the needs of the public.

An innovation- and accountability-led government must involve and empower the public in co-creating policies, above and beyond the existing rules that engage individuals (consent forms, etc.). In the paper I propose a Public Interest Data Bill that addresses this need: the rules of the digital highway set through negotiation between the public and regulators, and between private data consumers and data generators. Specifically: clear transparency, public participation, and realistic sanctions when things go wrong.

This is where democracies should hone their advantage over authoritarian regimes – using such an approach as the basis for setting global standards and best practices (e.g. affected communities providing input into algorithmic impact assessments). 

Schaake: The protection of privacy is what sets democratic societies apart from authoritarian ones. How likely is it that we will see an effort between democracies to set legal standards across borders together? Can we overcome the political tensions across the Atlantic, and strengthen democratic alliances globally?

Tisné: I remain a big supporter of international cooperation. Ten years ago I helped found the Open Government Partnership, which remains the main forum for 79 countries to develop innovative open government reforms jointly with the public. Its basic principles hold true: involve global south and global north countries with equal representation, bring civil society in jointly with government from the outset, seek out and empower reformers within government (they exist, regardless of who is in power in a given year), and go local to identify exciting innovations.

If we heed those principles we can set legal standards by learning from open data and civic technology reforms in Taiwan, experiments with data trusts in India, legislation to hold algorithms accountable in France; and by identifying and working with the individuals driving those innovations, reformers such as Audrey Tang in Taiwan, Katarzyna Szymielewicz in Poland, and Henri Verdier in France. 

These reformers need a home, a base from which to influence policymakers and technologists, to get the people responsible for enforcement working with those with the technical understanding. The Global Partnership on Artificial Intelligence may be that home, but these are early days; it needs to be agile enough to work with the private sector and civil society as well as with governments and the international system. I remain hopeful.

 

 


Protecting Individual Data Isn’t Enough When the Harm Is Collective: A Q&A with Marietje Schaake and Martin Tisné on his new paper, The Data Delusion.


The Data Delusion: Protecting Individual Data Isn't Enough When The Harm is Collective

Author: Martin Tisné, Managing Director, Luminate

Editor: Marietje Schaake, International Policy Director, Cyber Policy Center

The threat of digital discrimination

On March 17, 2018, questions about data privacy exploded into public view with the scandal surrounding the previously unknown consulting company Cambridge Analytica. Lawmakers are still grappling with updating laws to counter the harms of big data and AI.

In the Spring of 2020, the Covid-19 pandemic brought questions about sufficient legal protections back to the public debate, with urgent warnings about the privacy implications of contact tracing apps. But the surveillance consequences of the pandemic’s aftermath are much bigger than any app: transport, education, health systems and offices are being turned into vast surveillance networks. If we only consider individual trade-offs between privacy sacrifices and alleged health benefits, we will miss the point. The collective nature of big data means people are more impacted by other people’s data than by data about them. Like climate change, the threat is societal and personal.

In the era of big data and AI, people can suffer because of how the sum of individual data is analysed and sorted into groups by algorithms. Novel forms of collective data-driven harms are appearing as a result: online housing, job, and credit ads discriminating on the basis of race and gender; women disqualified from jobs on the basis of gender; and foreign actors targeting alt-light groups, pulling them to the far right. Our public debate, governments, and laws are ill-equipped to deal with these collective, as opposed to individual, harms.

Read the full paper >

 
Marietje Schaake, Jessica Gonzalez, and David Sifry speaking on stopping hate for profit
Tech companies are not doing enough to fight hate on their social platforms. But what can be done to encourage these platforms to provide more support to people who are targets of racism and hate, and to increase the safety of private groups on their platforms?

Join host Marietje Schaake, International Policy Director at the Cyber Policy Center, as she brings together experts from the space to speak about what can be done to encourage platforms like Facebook to stop the spread of hate and disinformation.

The event is open to the public, but registration is required.

Marietje Schaake: Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and an international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She was named President of the Cyber Peace Institute. Between 2009 and 2019, Marietje served as a Member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs, and technology policy. Marietje is affiliated with a number of non-profits, including the European Council on Foreign Relations and the Observer Research Foundation in India, and writes a monthly column for the Financial Times and a bi-monthly column for the Dutch newspaper NRC.

Jessica Gonzalez: An accomplished attorney and racial-justice advocate, Jessica works closely with the executive team and key stakeholders to develop and execute strategies to advance Free Press’ mission. A former Lifeline recipient, Jessica has helped fend off grave Trump administration cuts to the program, which helps provide phone-and-internet access for low-income people. She was part of the legal team that overturned a Trump FCC decision blessing runaway media consolidation. She also co-founded Change the Terms, a coalition of more than 50 civil- and digital-rights groups that works to disrupt online hate. Previously, Jessica was the executive vice president and general counsel at the National Hispanic Media Coalition, where she led the policy shop and helped coordinate campaigns against racist and xenophobic media programming. Prior to that she was a staff attorney and teaching fellow at Georgetown Law’s Institute for Public Representation. Jessica has testified before Congress on multiple occasions, including during a Net Neutrality hearing in the House while suffering from acute morning sickness, and during a Senate hearing while eight months pregnant to advocate for affordable internet access.

David Sifry: As Vice President of the Center for Technology and Society (CTS), Dave Sifry leads a team of innovative technologists, researchers, and policy experts developing proactive solutions and producing cutting-edge research to protect vulnerable populations. In its efforts to advocate change at all levels of society, CTS serves as a vital resource to legislators, journalists, universities, community organizations, tech platforms and anyone who has been a target of online hate and harassment. Dave joined ADL in 2019 after a storied career as a technology entrepreneur and executive. He founded six companies including Linuxcare and Technorati, and served in executive roles at companies including Lyft and Reddit. In addition to his entrepreneurial work, Dave was selected as a Technology Pioneer at The World Economic Forum, and is an advisor and mentor for a select group of companies and startup founders. As the son of a hidden child of the Holocaust, the core values and mission exemplified by ADL were instilled in him at an early age.

Panel Discussions

Commentary

President Donald Trump’s chief arms control envoy last week acknowledged the possibility that the 2010 New Strategic Arms Reduction Treaty (New START) could be extended, but he added, “only under select circumstances.”  He then laid down conditions that, if adhered to, will ensure the Trump administration does not extend the treaty.

New START and Extension

New START limits the United States and Russia each to no more than 700 deployed strategic missiles and bombers and no more than 1,550 deployed strategic warheads.  It expires by its terms on February 5, 2021 but can be extended for up to five years.  The Trump administration has adamantly refused to do that.

From the perspective of U.S. national security interests, extending New START is a no-brainer.  As confirmed by the State Department’s annual report, Russia is complying with the treaty’s limits.  Extension would keep Russian strategic forces constrained until 2026.  It would also ensure the continued flow of information about those forces produced by the treaty’s data exchanges, notifications, on-site inspections and other verification measures.

And extension would not force a single change in U.S. plans to modernize its strategic forces, as those plans were designed to fit within New START’s limits.

Russian officials, including Vladimir Putin, have raised New START extension since the first days of the Trump administration.  In 2017, Trump administration officials deferred on the issue, saying they would consider extension after (1) completion of a nuclear posture review and (2) seeing whether Russia met the treaty’s limits, which took full effect in February 2018.

Russia fully met the limits in February 2018.  At about the same time, the administration issued its nuclear posture review.  Yet, more than two years later, New START extension remains an open question.

On June 24, Amb. Marshall Billingslea, the president’s arms control envoy, briefed the press on his meeting with his Russian counterpart two days before in Vienna.  Asked about extending New START, Amb. Billingslea—never a fan of the treaty or, it seems, any arms control treaty—left the option open.  However, he described three conditions that will block extension.

China

Amb. Billingslea’s first condition focused on China, which he claimed had “an obligation to negotiate with [the United States] and Russia.”  Beijing certainly does not see it that way—saying no, no and again no—citing the huge disparity between the size of the Chinese nuclear arsenal and those of the United States and Russia.  China has less than one-tenth the number of nuclear warheads of each of the two nuclear superpowers.

To be sure, including China in the nuclear arms control process is desirable.  But Beijing will not join a negotiation aimed at a trilateral agreement.  What would such an agreement look like?  Neither Washington nor Moscow would agree to reduce to China’s level (about 300 nuclear warheads).  Nothing suggests either would agree to legitimize a Chinese build-up to match their levels (about 4,000 each).  Beijing presumably would not be interested in unequal limits.

This perhaps explains why, well more than one year after it began calling for China’s inclusion, the Trump administration appears to have no proposal or outline or even principles for a trilateral agreement.

For its part, Moscow would welcome China limiting its nuclear arms.  The Russians, however, choose not to press the question, raising instead Britain and France.  Amb. Billingslea pooh-poohed the notion, but France has as many nuclear weapons as China, and Britain has two-thirds the Chinese number.  The logic for bringing in one but not the other two is unclear.  The question raises yet another hindrance to including China.

A more nuanced approach might prove more successful.  It would entail a new U.S.-Russian agreement providing for reductions beyond those mandated by New START.  Washington and Moscow could then ask the Chinese (and British and French) to provide transparency on their nuclear weapons numbers and agree not to increase their total weapons or exceed a specified number.  Much like his president, however, the arms control envoy does not appear to be into nuance.

Non-Strategic Nuclear Weapons

Amb. Billingslea’s second condition dealt with including in a new negotiation nuclear arms not constrained by New START, especially Russia’s large number of non-strategic nuclear weapons.  Again, this is a laudable goal, but getting there will require much time and unpalatable decisions that the Trump administration will not want to face.

Russian officials have regularly tied their readiness to discuss non-strategic nuclear arms to issues of concern to them, particularly missile defense.  The Trump administration, however, has made clear that it has zero interest in negotiating missile defense.

Even if Moscow severed that linkage, negotiating limits on non-strategic nuclear weapons would take time.  New START limits deployed strategic warheads by virtue of their association with deployed strategic missiles and bombers.  The only warheads directly counted are those on deployed intercontinental ballistic missiles and submarine-launched ballistic missiles.

By contrast, most if not all non-strategic warheads are not mounted on their delivery systems.  Monitoring any agreed limits would require new procedures, including for conducting on-site inspections within storage facilities.  This does not pose an insoluble challenge, but it represents new territory for both Washington and Moscow.  Working out limits, counting rules and verification measures will prove neither quick nor easy.

Verification

Amb. Billingslea earlier suggested some dissatisfaction with New START’s verification measures, though he did not articulate any particular flaw, and, as noted, the State Department’s annual compliance report says Russia is meeting the treaty’s terms.  Last week, he made verification measures for his desired U.S.-Russia-China agreement the third condition for New START extension. 

Verification measures are critical.  Treaty parties have to have confidence that all sides are observing the agreement’s limits or, at a minimum, that any militarily significant violation would be detected in time to take countervailing measures.  Working out agreement on those measures will prove a long process, even in just a bilateral negotiation, especially if it addresses issues such as stored nuclear weapons.  That is not just because of Russian reluctance to accept intrusive verification measures such as on-site inspection; the U.S. military also wants verification measures that do not greatly impact its normal operations.

Russian officials have reiterated their readiness to extend New START now.  Amb. Billingslea’s conditions will thwart extension for the foreseeable future.  That’s unfortunate.  By not extending New START, the Trump administration forgoes a simple action that would strengthen U.S. national security and make Americans safer.

Hero Image: A Minuteman III intercontinental ballistic missile is launched (The U.S. National Archives)

Image
Tech and Wellbeing in the Era of Covid-19
Please join the Cyber Policy Center for Tech & Wellbeing in the Era of Covid-19 with Jeff Hancock from Stanford University, Amy Orben from Emmanuel College, and Erica Pelavin, Co-Founder of My Digital TAT2, in conversation with Kelly Born, Executive Director of the Cyber Policy Center. The session will explore the risks and opportunities technologies pose to users’ wellbeing; what we know about the impact of technology on mental health, particularly for teens; how the current pandemic may change our perceptions of technology; and ways in which teens are using apps, influencers and platforms to stay connected under Covid-19.

 

Dr. Amy Orben is College Research Fellow at Emmanuel College and the MRC Cognition and Brain Sciences Unit. Her work using large-scale datasets to investigate social media use and teenage mental health has been published in a range of leading scientific journals. The results have put into question many long-held assumptions about the potential risks and benefits of ‘screen time’. Alongside her research, Amy campaigns for the use of improved statistical methodology in the behavioural sciences and the adoption of more transparent and open scientific practices, having co-founded the global ReproducibiliTea initiative. Amy also regularly contributes to both media and policy debate, having recently given evidence to the UK Commons Science and Technology Select Committee and various governmental investigations.

Jeff Hancock is founding director of the Stanford Social Media Lab and is a Professor in the Department of Communication at Stanford University. Professor Hancock and his group work on understanding psychological and interpersonal processes in social media. The team specializes in using computational linguistics and experiments to understand how the words we use can reveal psychological and social dynamics, such as deception and trust, emotional dynamics, intimacy and relationships, and social support. Recently Professor Hancock has begun work on understanding the mental models people have about algorithms in social media, as well as working on the ethical issues associated with computational social science.

Erica Pelavin is an educator, public speaker, and Co-Founder and Director of Teen Engagement at My Digital TAT2. Working from a strength-based perspective, Erica has expertise in bullying prevention, relational aggression, digital safety, social emotional learning, and conflict resolution. Dr. Pelavin has a passion for helping young people develop the skills to become their own advocates and cares deeply about helping school communities foster empathy and respect. In her role at My Digital TAT2, Erica leads all programming for high schoolers, including the youth-led podcast Media in the Middle, the teen advisory boards and an annual summer internship program. Her work with teens directly impacts and informs the developmental school-based curriculum. Erica is also a high school counselor at Eastside College Prep in East Palo Alto, CA.

Watch the recorded session
