Q&A with Daphne Keller of the Program on Platform Regulation

Keller explains some of the issues currently surrounding platform regulation

Daphne Keller leads the newly launched Program on Platform Regulation, a program designed to offer lawmakers, academics, and civil society groups groundbreaking analysis and research to support wise governance of Internet platforms.

Q: Facebook, YouTube, and Twitter rely on algorithms and artificial intelligence to provide services for their users. Could AI also help protect free speech and police hate speech and disinformation?

DK: Platforms increasingly rely on artificial intelligence and other algorithmic means to automate the process of assessing – and sometimes deleting – online speech. But tools like AI can’t really “understand” what we are saying, and automated tools for content moderation make mistakes all the time. We should worry about platforms’ reliance on automation, and worry even more about legal proposals that would make such automated filters mandatory. Constitutional and human rights law give us a legal framework to push back on such proposals, and to craft smarter rules about the use of AI. I wrote about these issues in a New York Times op-ed and in some very wonky academic analysis in the Journal of European and International IP Law.

Q: Can you explain the potential impacts on citizens’ rights when the platforms have global reach but governments do not?

DK: On one hand, people worry about platforms displacing the legitimate power of democratic governments. On the other hand, platforms can actually expand state power in troubling ways. One way they do that is by enforcing a particular country’s speech rules everywhere else in the world. Historically that meant a net export of U.S. speech law and values, as American companies applied those rules to their global platforms. More recently, we’ve seen that trend reversed, with European and Indian courts requiring Facebook to take user posts down globally – even if the users’ speech would be legally protected in other countries. Governments can also use soft power, or economic leverage based on their control of access to lucrative markets, to convince platforms to “voluntarily” enforce that country’s preferred speech rules worldwide. That’s particularly troubling, since the state influence may be invisible to any given user whose rights are affected.

Q: Are there other ways that platforms can expand state power? 

DK: Yes, states can bypass democratic accountability and constitutional limits by using private platforms as proxies for their own agendas. States that want to engage in surveillance or censorship are constrained by the U.S. Constitution and by human rights laws around the world. But platforms aren’t. If you’re a state and you want to do something that would violate the law if you did it yourself, it’s awfully tempting to coerce or persuade a platform to do it for you. This issue of platforms serving as proxies isn’t limited to governments: anyone with leverage over a platform, including business partners, can potentially play a hidden role like this.

I wrote about this complicated nexus of state and private power in Who Do You Sue? for the Hoover Institution.    

Q: What inspired you to create the Program on Platform Regulation at the Cyber Policy Center right now?

DK: There is such a pressing need for thoughtful work on the laws that govern Internet platforms right now, and this is the place to do it. At the Cyber Policy Center, there’s an amazing group of experts, like Marietje Schaake, Eileen Donahoe, Alex Stamos, and Nate Persily, who are working on overlapping issues. We can address different aspects of the same issues and build on each other’s work to do much more together than we could individually.

The program really benefits from being at Stanford and in Silicon Valley because we have access to the people who are making these decisions and who have the greatest expertise in the operational realities of the tech platforms. 

The Cyber Policy Center is part of the Freeman Spogli Institute for International Studies at Stanford University.