News Type
Commentary

Jason Reinhardt was leading a multi-lab effort within Sandia National Laboratories to improve the United States’ ability to detect and prevent illegal nuclear material from entering the country.

A senior member of Sandia’s technical staff at that time, Reinhardt would explain the technical dimensions of his work to policy experts and inevitably hear the same questions: “How do I understand the risk?” or “How do I compare the different risks involved?”

Reinhardt wanted to know more about the discipline of risk analysis, so in 2011 he returned to Stanford (where he’d earned an M.S. in Electrical Engineering in 2005) to pursue a PhD in Management Science and Engineering, with Prof. M. E. Paté-Cornell, the department’s founding chair, as his advisor. He focused on developing a systematic, risk-analytic view of the technical and political components of nuclear deterrence.

Reinhardt also worked with Siegfried Hecker, a professor in Management Science and Engineering who was then the Science Co-Director of the Center for International Security and Cooperation (CISAC). Hecker introduced Reinhardt to his CISAC colleagues and encouraged him to attend the weekly seminars where political scientists, historians and diplomats presented their ideas on critical international issues.

“CISAC was one of the places where policy-inclined technical people could sort of bathe in policy discussions for a while,” Reinhardt said. “I use that terminology, really just soak in it.”

In the seminars, lectures and other events, Reinhardt studied how policy experts thought and talked. “I came over there and started listening,” he said. “Oh, that’s what the debate’s really about. I’m a lab geek, I thought it was a technical problem.”

If the problem were strictly technical, Reinhardt would have been able to speak with authority. He had no trouble discussing probability distributions, modeling approaches and complicated mathematical equations with other science-minded souls. But nuclear deterrence demands collaboration across academic disciplines—the hard sciences as well as political theory, international relations, and economics—and Reinhardt wanted policy makers to see the full picture and understand his ideas and their implications for policy.

“CISAC was a bootcamp of how to interact in the policy world, how to understand how that world thinks and acts,” Reinhardt said.

While pursuing his PhD, Reinhardt accepted a pre-doctoral fellowship at CISAC and enjoyed exploring this new world. But, an engineer by training, he also wanted to dig into a project where he could flex his technical skills while sitting elbow-to-elbow with political scientists, international relations experts and other policy wonks.

Hecker, an internationally recognized expert in nuclear security and a former director of Los Alamos National Laboratory, understood the desire and had the solution. Hecker had been working with the Russian government for years, beginning after the Soviet Union broke apart in 1991, to secure Russian nuclear assets. For nearly as long, he’d also been working with the Chinese government to make sure their nuclear assets did not fall into the wrong hands.

Siegfried Hecker presenting to American and Chinese national security scholars.

Hecker invited Reinhardt to join the project at the Stanford Center at Peking University, a mini-campus that serves as a bridge across the Pacific for faculty and students from Stanford’s seven schools.

“Jason was just superb,” Hecker said of Reinhardt. “When you combine his Sandia background with his work with Elisabeth Paté-Cornell at MS&E, you have some of the world’s leading expertise in systems analysis, which means a very methodical, engineering look at how you make decisions under complex environments.”

In China, Reinhardt teamed with Larry Brandt and Leonard Connell, who were both CISAC affiliates and risk analysts at Sandia, to create a course that applied a systems analysis approach to nuclear terrorism. They ran the exercise with Chinese professionals to explore the probability of terrorists obtaining and transporting nuclear materials.

“We had a proper seat to learn how Track II interactions between countries are done,” Reinhardt said. 

Jason Reinhardt giving a talk at SCPKU on a systems approach to verification of North Korea’s nuclear program, Beijing, Oct. 2019.

Hecker, Reinhardt and the others traveled back and forth to China a few times a year—until COVID stopped international trips—to share their knowledge and deepen the understanding of the risks. The experience energized Reinhardt.

“Where else are you going to get that?” he asked. “I was able to sit down and have a technical analytic discussion about a nuclear issue with Chinese researchers who are thinking about the same thing.”

On Stanford’s campus, Reinhardt often found himself in equally intense conversations with CISAC faculty and international security experts like former Secretary of Defense William Perry, Scott Sagan, a leading authority on the politics of nuclear risk, and Martha Crenshaw, who is among the world’s top experts in terrorism.

Reinhardt also observed courses like “International Security in a Changing World,” which Crenshaw co-taught with Amy Zegart, a political scientist who advised the Clinton and Bush Administrations on foreign policy, national security and intelligence. 

When he returned to Sandia with a wealth of international experience and a newly minted PhD, Reinhardt was quickly promoted to a role overseeing some 20 people who focus on risk analysis of cyber threats to critical infrastructure in the U.S.

“Essentially, I build methodology for people to think about really nasty problems from a risk perspective in a national security sphere,” he said. “I’ve worked on that for nuclear weapons, for deterrence, and now for cyber stuff.”

Reinhardt also spends time educating colleagues so that people on either side of the tech/policy divide can talk to one another. And through Sandia’s Academic Alliance, he’s engaging with Purdue University, where he earned his undergraduate degree in electrical engineering, to help propose a new course modeled on those he observed at Stanford, such as the one Crenshaw and Zegart taught.

When Reinhardt reflects on his time at CISAC, he says it didn’t convert him from a technical expert into a policy expert as much as it introduced him to their world and allowed him to be more effective working within it.

“Because of the fellowship, you’re going to understand how policy people think and you’re going to understand their world enough that you can actually talk to them,” Reinhardt said. “And hopefully, if you do your job right, they’ll start to understand the technical world so that they can talk to you.”

The Stanford Center at Peking University.

This is the first in an ongoing series of profiles of CISAC pre- and post-doctoral fellows.

Hero Image
Jason Reinhardt participated in the 16th PIIC Beijing Seminar on “Maintaining Global Strategic Stability and Promoting International Nuclear Cooperation,” October 16–17, 2019, in Shenzhen, China.
Siegfried Hecker
Subtitle

Profile of a CISAC Fellow: Jason Reinhardt, Distinguished Member of Technical Staff at Sandia National Laboratories

News Type
Blogs

Starting something new from scratch is always challenging. It requires huge amounts of effort and may not work out, but I believe a new challenge is absolutely worth exploring because it can create chances to make people happier. This is the most important thing I learned from the people who took the initiative to establish the wonderful program, Stanford e-Japan.

Though I joined in 2015, the program’s inaugural year, I was truly impressed not only with the high quality of the academic content, but also with the rich opportunities to communicate with prestigious leaders from various fields. Moreover, the program generously offered the top three students the chance to visit Stanford University for a ceremony.

It was exhilarating to be in the program due to the endless surprises and new learnings that I encountered throughout the course. 

When I reflect on the efforts made by the people who actively led the establishment and management of such an amazing program, I realize that I couldn’t appreciate them enough for what they have done for us.
Haruki Kitagawa

Since then, I have resolved to initiate new challenges myself in order to contribute to younger students just as Stanford e-Japan Instructor Waka Brown did for me. After I returned to Keio University from a one-year university exchange program at the University of California, San Diego, I established a student-led organization with several members at Keio from diverse backgrounds. Our student-led organization aims to cultivate young global citizens of Japan by allowing students attending Japanese high schools to have meaningful interactions with international students from Japanese universities like Keio.

In addition to encouraging the high school students to explore new challenges, I also wanted to share how interesting it is to learn about different cultures, including the histories of foreign countries, and the benefits of interacting with people from different backgrounds. We focus on designing an environment where high school students can actively discuss and exchange ideas with international students in person while also building their English presentation skills. Through our program, we believe every high school student has the opportunity to learn something new, such as how to communicate with individuals of different backgrounds, how to reach a mutual understanding with people of differing opinions, and how to lead discussions in a diverse community.

During our programs at several high schools, I have heard from many high school students, international students, and even high school teachers that they have had meaningful and fruitful experiences. Despite some initial struggles, I now strongly believe that even small programs like ours can make a difference in our society. I will never forget the precious lessons learned from Stanford e-Japan, and perhaps the most important one is to continue to explore new challenges and to encourage young students to do so as well.

Hero Image
2015 Stanford e-Japan Honorees: Seiji Wakabayashi, Hikaru Suzuki, and Haruki Kitagawa
Subtitle

The following reflection is a guest post written by Haruki Kitagawa, a 2015 alum and honoree of the Stanford e-Japan Program.

Authors
Siegfried S. Hecker
News Type
Commentary

Seventy-five years ago, before 5:30 a.m. on July 16, 1945, Los Alamos scientists successfully conducted the world’s first nuclear weapons test. The test, which physicist J. Robert Oppenheimer named "Trinity" after a line from a poem by John Donne, altered the course of World War II, changed the way scientific discoveries are pursued, and cemented the relationship between science and national security.

Siegfried Hecker, a senior fellow at Stanford’s Center for International Security and Cooperation, worked at Los Alamos National Laboratory for over three decades and served as director for nearly 12 of those years. He joined CISAC in 2005 and served as the Center’s Science Co-director from 2007 to 2012.

In a video produced by Los Alamos to commemorate the historic events of 1945, Hecker reflects on the meaning of that moment. Here, Hecker answers questions to place those events into the context of today’s national security landscape and his current work.

As you say in the video, the Trinity project brought scientists from all over the world to Los Alamos and asked them to collaborate on the most sensitive project for the American government. At the time, that must have seemed radical, but did multidisciplinary, international collaboration become the norm?

 It may seem odd, but it would be more difficult today than it was then. The U.S. was at war and concerned about Hitler’s Germany winning the race to the atomic bomb. It was actually the Brits that tried to convince President Roosevelt to mount a major effort to build the bomb. It was Europe that was the center of great physics at the time and it was Hitler who caused many of the best scientists to flee Europe and come to the United States – we welcomed them with open arms. We had the industrial capacity to mount such an enormous enterprise and did not have the enemy at our doorstep. But we needed their scientific skills and could not have developed the bomb in 27 months without them. Unfortunately, today we have retreated to more of a bunker mentality and are not as welcoming as we were then. For that matter, we’re not as welcoming as we were in 1956, when America allowed me to immigrate from Austria.  

 

The scientists involved in this project had the agility to switch designs as they made new discoveries. Could you describe the type of talent and skills that allowed them to pursue new ideas so quickly?

Success of the Manhattan Project is typically viewed as the work of physicists. But it was really an incredible array of talent – spanning physics, chemistry, mathematics, computing, engineering, materials science and more – that allowed the project to deal with surprises like the gun-assembly design not working with plutonium. That collaboration also allowed the team to redirect its energy when they found out that although plutonium may have been the physicist’s dream, it was an engineering nightmare. The metallurgists found a fix by adding a bit of gallium, as I explain in the video. Understanding why that’s so occupied a good part of my scientific life at Los Alamos.

 

How does your time at Los Alamos National Laboratory relate to the work you do now with students and pre- and post-doctoral fellows at CISAC?

Once the Soviet Union dissolved at the end of 1991, I turned much of my attention to working with the Russian nuclear establishment to mitigate the new nuclear dangers resulting from the political chaos. My Russian nuclear colleagues and I captured twenty-plus years of collaboration in our book Doomed to Cooperate. It turned out that CISAC became a great place for me to continue this work in 2005 and to expand it to the other nuclear countries around the world. Once at Stanford, I found that one of the most rewarding things I could do was to teach and work with students and post-docs. That’s what I continue to do today in what we call Young Professionals Nuclear Forums. We bring together around a dozen young Americans to work with their counterparts in Russia on nuclear challenges. We do the same with Chinese and American young professionals.

 

Since its founding, CISAC has always had two directors—one with a science background and the other from the social sciences. As both a former director of CISAC and Los Alamos, can you explain how an academic center like CISAC, with that kind of combined leadership, can help to prepare the next generation of thinkers in international security?

That’s one of the things that attracted me to CISAC. From CISAC’s founding days of John Lewis (political science) and Sid Drell (physics), the Center has tackled problems at the intersection of the natural and social sciences. And, that’s where the hard problems lie. By focusing on the challenges that arise at this intersection, CISAC can help to educate the next generation of national security specialists to tackle the world’s difficult problems. It’s a great place to be if you are interested in international security.

 

Watch The Science of Trinity 

Hero Image
Images of the Manhattan Project scientists
Subtitle

Siegfried Hecker, former director of both Los Alamos National Laboratory and the Center for International Security and Cooperation, reflects on the meaning of the Trinity nuclear weapons test and its implications for national security today.


Security studies scholarship on nuclear weapons is particularly prone to self-censorship. In this essay, I argue that this self-censorship is problematic.

Read Nuclear Weapons Scholarship as a Case of Self-Censorship in Security Studies

Publication Type
Journal Articles
Publication Date
Journal Publisher
Journal of Global Security Studies

Polls in the United States and nine allied countries in Europe and Asia show that public support for a nuclear test is very low. If the Trump administration conducts a test, it shouldn’t expect backing from the American public or from the publics of its closest allies.

Read more at The National Interest

Publication Type
Commentary
Publication Type
Commentary
George G.C. Parker Professor of Finance and Economics, Stanford Graduate School of Business
Director of the Corporations and Society Initiative, Stanford Graduate School of Business
Director of the Program on Capitalism and Democracy, Center on Democracy, Development and the Rule of Law
Senior Fellow, Stanford Institute for Economic Policy Research
Senior Fellow (by courtesy), Freeman Spogli Institute for International Studies

Anat R. Admati is the George G.C. Parker Professor of Finance and Economics at Stanford University Graduate School of Business (GSB), a Faculty Director of the GSB Corporations and Society Initiative, and a senior fellow at the Stanford Institute for Economic Policy Research. She has written extensively on information dissemination in financial markets, portfolio management, financial contracting, corporate governance and banking. Admati’s current research, teaching and advocacy focus on the complex interactions between business, law, and policy, with an emphasis on governance and accountability.

Since 2010, Admati has been active in the policy debate on financial regulations. She is the co-author, with Martin Hellwig, of the award-winning and highly acclaimed book The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It (Princeton University Press, 2013; bankersnewclothes.com). In 2014, she was named by Time Magazine as one of the 100 most influential people in the world and by Foreign Policy Magazine as among 100 global thinkers.

Admati holds a BSc from the Hebrew University; an MA, MPhil and PhD from Yale University; and an honorary doctorate from the University of Zurich. She is a fellow of the Econometric Society, the recipient of multiple fellowships, research grants, and paper awards, and a past board member of the American Finance Association. She has served on a number of editorial boards and is a member of the FDIC’s Systemic Resolution Advisory Committee, a former member of the CFTC’s Market Risk Advisory Committee, and a former visiting scholar at the International Monetary Fund.

News Type
Q&As

Marietje Schaake

 

  

DOWNLOAD THE PAPER 

 

The European Union is often called a ‘super-regulator’, especially when it comes to data protection and privacy rules. Having seen European lawmaking up close, in all its complexity, I have often considered that label an exaggeration. Yes, the European Union frequently takes the first steps to ensure that principles continue to be protected, even as digitization disrupts. However, the speed at which technology evolves, compared with the pace of democratic lawmaking, leads to perpetual mismatches.

Even the famous, or infamous, General Data Protection Regulation does not meet many essential regulatory needs of the moment. The mainstreaming of Artificial Intelligence in particular poses new challenges to the protection of rights and the sustaining of the rule of law. In its White Paper on Artificial Intelligence, as well as its Data Strategy, the European Commission refers to the common good, the public interest and societal needs, rather than emphasizing only the regulation of the digital market. These are welcome steps in acknowledging the depth and scope of technological impact and in defining harms not just in economic terms. It remains to be seen how the visions articulated in the White Paper and the Strategy will translate into concrete legislation.

One proposal for concrete improvements to legal frameworks is outlined by Martin Tisné in The Data Delusion. He highlights the need to update legal privacy standards to better reflect the harms incurred through collective data analysis, as opposed to individual privacy violations. Martin makes a clear case for addressing the discrepancy between the profit models that benefit from grouped data and the ability of any individual to prove the harms caused to his or her rights.

The lack of transparency into the inner workings of algorithmic data processing further hinders the path to much-needed accountability for the powerful technology businesses that operate growing parts of our information architecture and the data flows they process.

While the EU takes the lead in setting values-based standards and rules for the digital layer of our societies and economies, a lot of work remains to be done.

Marietje Schaake: Martin, in your paper you address the gap between the benefits for technology companies through collective data processing, and the harms for society. You point to historic reasons for individual privacy protections in European laws. Do you consider the European Union to be the best positioned to address the legal shortcomings, especially as you point out that some opportunities to do so were missed in the GDPR?

Martin Tisné: Europe is well positioned but perhaps not for the reasons we traditionally think of (strong privacy tradition, empowered regulators). Individual privacy alone is a necessary, but not sufficient foundation stone to build the future of AI regulation. And whilst much is made of European regulators, the GDPR has been hobbled by the lack of funding and capacity of data protection commissioners across Europe. What Europe does have though, is a legal, political and societal tradition of thinking about the public interest, the common good and how this is balanced against individual interests. This is where we should innovate, taking inspiration from environmental legislation such as the Urgenda Climate Case against the Dutch Government which established that the government had a legal duty to prevent dangerous climate change, in the name of the public interest. 

And Europe also has a lot to learn from other political and legal cultures. Part of the future of data regulation may come from the indigenous data rights movement, with its greater emphasis on the societal and group impacts of data, or from the concept of Ubuntu ethics, which assigns community and personhood to all people.

Schaake: What scenario do you foresee in 10 years if collective harms are not dealt with in updates of laws? 

Tisné: I worry we will see two impacts. The first is a continuation of what we are seeing now: negative impacts of digital technologies on discrimination, voting rights, privacy, consumers. As people become increasingly aware of the problem there will be a corresponding increase in legal challenges. We’re seeing this already for example with the Lloyd class action case against Google for collecting iPhone data. But I worry these will fail to stick and have lasting impact because of the obligation to have these cases turn on one person, or a class of people’s, individual experiences. It is very hard for individuals to seek remedy for collective harms, as opposed to personal privacy invasions. So unless we solve the issue I raise in the paper – the collective impact of AI and automation – these will continue to fuel polarization, discrimination on the basis of age, gender (and many other aspects of our lives) and the further strengthening of populist regimes. 

I also worry about the ways in which algorithms will optimize on the basis of seemingly random classifications (e.g. “people who wear blue shirts, get up early on Saturday mornings, and were geo-located in a particular area of town at a particular time”). These may be proxies for protected characteristics (age, gender reassignment, disability, race, religion, sex, marriage, pregnancy/maternity, sexual orientation) and provide grounds for redress. They may also not be and sow the seeds of future discrimination and harms. Authoritarian rulers are likely to take advantage of the seeming invisibility of those data-driven harms to further silence their opponents. How can I protect myself if I don’t know the basis on which I am being discriminated against or targeted? 

Schaake: How do you reflect on the difference in speed between technological innovations and democratic lawmaking? Some people imply this will give authoritarian regimes an advantage in setting global standards and rules. What are your thoughts on ensuring democratic governments speed up? 

Tisné: Democracies cannot afford to be outpaced by technological innovation and constantly be fighting yesterday’s wars. Our laws have not changed to reflect changes in technology, which now extracts value from collective data, and they need to catch up. A lot of the problems stem from the fact that in government (as in companies), the people responsible for enforcement are separated from those with the technical understanding. The solution lies in much better translation between technology, policy and the needs of the public.

An innovation- and accountability-led government must involve and empower the public in co-creating policies, above and beyond the existing rules that engage individuals (consent forms, etc.). In the paper I propose a Public Interest Data Bill that addresses this need: the rules of the digital highway negotiated between the public and regulators, and between private data consumers and data generators. Specifically: clear transparency, public participation and realistic sanctions when things go wrong.

This is where democracies should hone their advantage over authoritarian regimes – using such an approach as the basis for setting global standards and best practices (e.g. affected communities providing input into algorithmic impact assessments). 

Schaake: The protection of privacy is what sets democratic societies apart from authoritarian ones. How likely is it that we will see an effort between democracies to set legal standards across borders together? Can we overcome the political tensions across the Atlantic, and strengthen democratic alliances globally?

Tisné: I remain a big supporter of international cooperation. I helped found the Open Government Partnership ten years ago, which remains the main forum for 79 countries to develop innovative open government reforms jointly with the public. Its basic principles hold true: involve global south and global north countries with equal representation, bring civil society in jointly with government from the outset, seek out and empower reformers within government (they exist, regardless of who is in power in the given year), and go local to identify exciting innovations. 

If we heed those principles we can set legal standards by learning from open data and civic technology reforms in Taiwan, experiments with data trusts in India, legislation to hold algorithms accountable in France; and by identifying and working with the individuals driving those innovations, reformers such as Audrey Tang in Taiwan, Katarzyna Szymielewicz in Poland, and Henri Verdier in France. 

These reformers need a home, a base to influence policymakers and technologists, to get those people responsible for enforcement working with those with the technical understanding. The Global Partnership on Artificial Intelligence may be that home but these are early days, it needs to be agile enough to work with the private sector, civil society as well as governments and the international system. I remain hopeful. 

 

 

Subtitle

Protecting Individual Data Isn't Enough When the Harm Is Collective. A Q&A with Marietje Schaake and Martin Tisné on his new paper The Data Delusion.



The Data Delusion: Protecting Individual Data Isn't Enough When The Harm is Collective

Author: Martin Tisné, Managing Director, Luminate

Editor: Marietje Schaake, International Policy Director, Cyber Policy Center

The threat of digital discrimination

On March 17, 2018, questions about data privacy exploded with the scandal of the previously unknown consulting company Cambridge Analytica. Lawmakers are still grappling with updating laws to counter the harms of big data and AI.

In the Spring of 2020, the Covid-19 pandemic brought questions about sufficient legal protections back to the public debate, with urgent warnings about the privacy implications of contact tracing apps. But the surveillance consequences of the pandemic’s aftermath are much bigger than any app: transport, education, health systems and offices are being turned into vast surveillance networks. If we only consider individual trade-offs between privacy sacrifices and alleged health benefits, we will miss the point. The collective nature of big data means people are more impacted by other people’s data than by data about them. Like climate change, the threat is societal and personal.

In the era of big data and AI, people can suffer because of how the sum of individual data is analysed and sorted into groups by algorithms. Novel forms of collective data-driven harms are appearing as a result: online housing, job and credit ads discriminating on the basis of race and gender, women disqualified from jobs on the basis of gender and foreign actors targeting light-right groups, pulling them to the far-right. Our public debate, governments, and laws are ill-equipped to deal with these collective, as opposed to individual, harms.

Read the full paper >

 
Publication Type
White Papers
Publication Date
Authors
Marietje Schaake

This event will be live streamed on Zoom. RSVP required: https://stanford.zoom.us/webinar/register/WN_MKEESYy6SZWjqlQ5YuQC9w

Martin Luther King, Jr., is best known for his "I Have a Dream" speech, but if we look at his Nobel lecture and final works, it is clear that he is much more than a civil rights leader. In the lecture, he makes clear his global vision and addresses what he termed the "giant triplets of evil": racial injustice, poverty, and war. King perceives the ultimate challenge that we continue to face today: "We have inherited a large house, a great 'world house' in which we have to live together — black and white, Easterner and Westerner, Gentile and Jew, Catholic and Protestant, Muslim and Hindu — a family unduly separated in ideas, culture and interest, who, because we can never again live apart, must learn somehow to live with each other in peace." As I try to help build King's "World House," I find myself returning to his unanswered question: where do we go from here?

Clayborne Carson is the founder of Stanford’s Martin Luther King Jr. Research and Education Institute and the Martin Luther King Jr. Centennial Professor of History at Stanford University.

Michael McFaul is the Ken Olivier and Angela Nomellini Professor of International Studies in Political Science, Director and Senior Fellow at the Freeman Spogli Institute for International Studies, and the Peter and Helen Bing Senior Fellow at the Hoover Institution, all at Stanford University.

Lectures

Image of Marietje Schaake, Jessica Gonzalez and David Sifry, speaking on stopping hate for profit
Tech companies are not doing enough to fight hate on their digital social platforms. But what can be done to encourage social platforms to provide more support to people who are targets of racism and hate, and to increase safety for private groups on the platform?

Join host Marietje Schaake, International Policy Director at the Cyber Policy Center, as she brings together experts from the space, to speak about what can be done to encourage platforms like Facebook to stop the spread of hate and disinformation. 

The event is open to the public, but registration is required:

Marietje Schaake: Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She was named President of the Cyber Peace Institute. Between 2009 and 2019, Marietje served as a Member of European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs and technology policies. Marietje is affiliated with a number of non-profits, including the European Council on Foreign Relations and the Observer Research Foundation in India, and writes a monthly column for the Financial Times and a bi-monthly column for the Dutch NRC newspaper.

Jessica Gonzalez: An accomplished attorney and racial-justice advocate, Jessica works closely with the executive team and key stakeholders to develop and execute strategies to advance Free Press’ mission. A former Lifeline recipient, Jessica has helped fend off grave Trump administration cuts to the program, which helps provide phone-and-internet access for low-income people. She was part of the legal team that overturned a Trump FCC decision blessing runaway media consolidation. She also co-founded Change the Terms, a coalition of more than 50 civil- and digital-rights groups that works to disrupt online hate. Previously, Jessica was the executive vice president and general counsel at the National Hispanic Media Coalition, where she led the policy shop and helped coordinate campaigns against racist and xenophobic media programming. Prior to that she was a staff attorney and teaching fellow at Georgetown Law’s Institute for Public Representation. Jessica has testified before Congress on multiple occasions, including during a Net Neutrality hearing in the House while suffering from acute morning sickness, and during a Senate hearing while eight months pregnant to advocate for affordable internet access.

David Sifry: As Vice President of the Center for Technology and Society (CTS), Dave Sifry leads a team of innovative technologists, researchers, and policy experts developing proactive solutions and producing cutting-edge research to protect vulnerable populations. In its efforts to advocate change at all levels of society, CTS serves as a vital resource to legislators, journalists, universities, community organizations, tech platforms and anyone who has been a target of online hate and harassment. Dave joined ADL in 2019 after a storied career as a technology entrepreneur and executive. He founded six companies including Linuxcare and Technorati, and served in executive roles at companies including Lyft and Reddit. In addition to his entrepreneurial work, Dave was selected as a Technology Pioneer at The World Economic Forum, and is an advisor and mentor for a select group of companies and startup founders. As the son of a hidden child of the Holocaust, the core values and mission exemplified by ADL were instilled in him at an early age.

Panel Discussions