
* Please note all CISAC events are scheduled using the Pacific Time Zone.

 

Register in advance for this webinar: https://stanford.zoom.us/webinar/register/8416226562432/WN_WLYcdRa6T5Cs1MMdmM0Mug

 

About the Event: Is there a place for illegal or nonconsensual evidence in security studies research, such as leaked classified documents? What is at stake, and who bears the responsibility, for determining source legitimacy? Although massive unauthorized disclosures by WikiLeaks and its kindred may excite qualitative scholars with policy revelations, and quantitative researchers with big-data suitability, they are fraught with methodological and ethical dilemmas that the discipline has yet to resolve. I argue that the hazards from this research—from national security harms, to eroding human-subjects protections, to scholarly complicity with rogue actors—generally outweigh the benefits, and that exceptions and justifications need to be articulated much more explicitly and forcefully than is customary in existing work. This paper demonstrates that the use of apparently leaked documents has proliferated over the past decade, and appeared in every leading journal, without being explicitly disclosed and defended in research design and citation practices. The paper critiques incomplete and inconsistent guidance from leading political science and international relations journals and associations; considers how other disciplines from journalism to statistics to paleontology address the origins of their sources; and elaborates a set of normative and evidentiary criteria for researchers and readers to assess documentary source legitimacy and utility. Fundamentally, it contends that the scholarly community (researchers, peer reviewers, editors, thesis advisors, professional associations, and institutions) needs to practice deeper reflection on sources’ provenance, greater humility about whether to access leaked materials and what inferences to draw from them, and more transparency in citation and research strategies.

View Written Draft Paper

 

About the Speaker: Christopher Darnton is a CISAC affiliate and an associate professor of national security affairs at the Naval Postgraduate School. He previously taught at Reed College and the Catholic University of America, and holds a Ph.D. in Politics from Princeton University. He is the author of Rivalry and Alliance Politics in Cold War Latin America (Johns Hopkins, 2014) and of journal articles on US foreign policy, Latin American security, and qualitative research methods. His International Security article, “Archives and Inference: Documentary Evidence in Case Study Research and the Debate over U.S. Entry into World War II,” won the 2019 APSA International History and Politics Section Outstanding Article Award. He is writing a book on the history of US security cooperation in Latin America, based on declassified military documents.

Virtual Seminar

Christopher Darnton, Associate Professor of National Security Affairs, Naval Postgraduate School
Riana Pfefferkorn

When we’re faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called “deepfake” videos that use machine learning technology to show a real person saying and doing things they haven’t.

This technology poses a particular threat to marginalized communities. If deepfakes cause society to move away from the current “seeing is believing” paradigm for video footage, that shift may negatively impact individuals whose stories society is already less likely to believe. The proliferation of video recording technology, in the hands of bystanders and on police body cameras, has fueled a reckoning with police violence in the United States. But in a world of pervasive, compelling deepfakes, the burden of proving a video’s authenticity may shift onto the videographer, a development that would further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools meant to increase trust in videos are in development, but these technologies, though well-intentioned, could end up being used to discredit already marginalized voices.

(Content Note: Some of the links in this piece lead to graphic videos of incidents of police violence. Those links are denoted in bold.)

Recent police killings of Black Americans caught on camera have inspired massive protests that have filled U.S. streets in the past year. Those protests endured for months in Minneapolis, where former police officer Derek Chauvin was convicted this week in the murder of George Floyd, a Black man. During Chauvin’s trial, another police officer killed Daunte Wright just outside Minneapolis, prompting additional protests as well as the officer’s resignation and arrest on second-degree manslaughter charges. She supposedly mistook her gun for her Taser—the same mistake alleged in the fatal shooting of Oscar Grant in 2009, by an officer whom a jury later found guilty of involuntary manslaughter (but not guilty of a more serious charge). All three of these tragic deaths—George Floyd, Daunte Wright, Oscar Grant—were documented in videos that were later used (or, in Wright’s case, seem likely to be used) as evidence at the trials of the police officers responsible. Both Floyd’s and Wright’s deaths were captured by the respective officers’ body-worn cameras, and multiple bystanders with cell phones recorded the Floyd and Grant incidents. Some commentators credit a 17-year-old Black girl’s video recording of Floyd’s death for making Chauvin’s trial happen at all.

The growth of the movement for Black lives in the years since Grant’s death in 2009 owes much to the rise in the availability, quality, and virality of bystander videos documenting police violence, but this video evidence hasn’t always been enough to secure convictions. From Rodney King’s assailants in 1992 to Philando Castile’s shooter 25 years later, juries have often declined to convict police officers even in cases where wanton police violence or killings are documented on video. Despite their growing prevalence, police bodycams have had mixed results in deterring excessive force or impelling accountability. That said, bodycam videos do sometimes make a difference, helping to convict officers in the killings of Jordan Edwards in Texas and Laquan McDonald in Chicago. Chauvin’s defense team pitted bodycam footage against the bystander videos employed by the prosecution, and lost.

What makes video so powerful? Why does it spur crowds to take to the streets and lawyers to showcase it in trials? It’s because seeing is believing. Shot from angles other than the officers’ point of view, bystander footage paints a fuller picture of what happened. Two people (on a jury, say, or watching a viral video online) might interpret a video two different ways. But they’ve generally been able to take for granted that the footage is a true, accurate record of something that really happened.

That might not be the case for much longer. It’s now possible to use artificial intelligence to generate highly realistic “deepfake” videos showing real people saying and doing things they never said or did, such as the recent viral TikTok videos depicting an ersatz Tom Cruise. You can also find realistic headshots of people who don’t exist at all on the creatively named website thispersondoesnotexist.com. (There’s even a cat version.)

While using deepfake technology to invent cats or impersonate movie stars might be cute, the technology has more sinister uses as well. In March, the Federal Bureau of Investigation issued a warning that malicious actors are “almost certain” to use “synthetic content” in disinformation campaigns against the American public and in criminal schemes to defraud U.S. businesses. The breakneck pace of deepfake technology’s development has prompted concerns that techniques for detecting such imagery will be unable to keep up. If so, the high-tech cat-and-mouse game between creators and debunkers might end in a stalemate at best. 

If it becomes impossible to reliably prove that a fake video isn’t real, a more feasible alternative might be to focus instead on proving that a real video isn’t fake. So-called “verified at capture” or “controlled-capture” technologies attach additional metadata to imagery at the moment it’s taken, to verify when and where the footage was recorded and reveal any attempt to tamper with the data. The goal of these technologies, which are still in their infancy, is to ensure that an image’s integrity will stand up to scrutiny. 
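
To make the mechanism concrete, here is a minimal sketch of the controlled-capture idea described above: hash the image bytes at the moment of capture, bind the hash to capture-time metadata, and sign the result so any later tampering is detectable. This is an illustration only; the function names, field names, and the shared-secret `DEVICE_KEY` are invented for the example (real systems would use an asymmetric key pair held in secure hardware, and a standard such as C2PA rather than ad hoc JSON).

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-device secret, for illustration. A production design would
# use an asymmetric signing key provisioned in a secure enclave, not a shared key.
DEVICE_KEY = b"example-device-secret"

def capture_record(image_bytes, lat, lon):
    """Bind an image to capture-time metadata and sign the combination."""
    metadata = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": time.time(),
        "gps": [lat, lon],
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    signature = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return metadata, signature

def verify(image_bytes, metadata, signature):
    """Reject the image if either the pixels or the metadata were altered."""
    if hashlib.sha256(image_bytes).hexdigest() != metadata["image_sha256"]:
        return False  # pixels changed after capture
    payload = json.dumps(metadata, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

Editing a single pixel, or rewriting the timestamp or GPS coordinates, invalidates the record, which is exactly the property that lets a verified video “stand up to scrutiny.”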

Photo and video verification technology holds promise for confirming what’s real in the age of “fake news.” But it’s also cause for concern. In a society where guilty verdicts for police officers remain elusive despite ample video evidence, is even more technology the answer? Or will it simply reinforce existing inequities? 

The “ambitious goal” of adding verification technology to smartphone chipsets necessarily entails increasing the cost of production. Once such phones start to come onto the market, they will be more expensive than lower-end devices that lack this functionality. And not everyone will be able to afford them. Black Americans and poor Americans have lower rates of smartphone ownership than whites and high earners, and are more likely to own a “dumb” cell phone. (The same pattern holds true with regard to educational attainment and urban versus rural residence.) Unless and until verification technology is baked into even the most affordable phones, it risks replicating existing disparities in digital access. 

That has implications for police accountability, and, by extension, for Black lives. Primed by societal concerns about deepfakes and “fake news,” juries may start expecting high-tech proof that a video is real. That might lead them to doubt the veracity of bystander videos of police brutality if they were captured on lower-end phones that lack verification technology. Extrapolating from current trends in phone ownership, such bystanders are more likely to be members of marginalized racial and socioeconomic groups. Those are the very people who, as witnesses in court, face an uphill battle in being afforded credibility by juries. That bias, which reared its ugly head again in the Chauvin trial, has long outlived the 19th-century rules that explicitly barred Black (and other non-white) people from testifying for or against white people on the grounds that their race rendered them inherently unreliable witnesses. 

In short, skepticism of “unverified” phone videos may compound existing prejudices against the owners of those phones. That may matter less in situations where a diverse group of numerous eyewitnesses record a police brutality incident on a range of devices. But if there is only a single bystander witness to the scene, the kind of phone they own could prove significant.

The advent of mobile devices empowered Black Americans to force a national reckoning with police brutality. Ubiquitous, pocket-sized video recorders allow average bystanders to document the pandemic of police violence. And because seeing is believing, those videos make it harder for others to continue denying the problem exists. Yet even with the evidence thrust under their noses, juries keep acquitting police officers who kill Black people. Chauvin’s conviction this week represents an exception to recent history: between 2005 and 2019, of the 104 law enforcement officers charged with murder or manslaughter in connection with an on-duty shooting, only 35 were convicted.

The fight against fake videos will complicate the fight for Black lives. Unless it is equally available to everyone, video verification technology may not help the movement for police accountability, and could even set it back. Technological guarantees of videos’ trustworthiness will make little difference if they are accessible only to the privileged, whose stories society already tends to believe. We might be able to tech our way out of the deepfakes threat, but we can’t tech our way out of America’s systemic racism. 

Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory

Daphne Keller

I am a huge fan of transparency about platform content moderation. I’ve considered it a top policy priority for years, and written about it in detail (with Paddy Leerssen, who also wrote this great piece about recommendation algorithms and transparency). I sincerely believe that without it, we are unlikely to correctly diagnose current problems or arrive at wise legal solutions.

So it pains me to admit that I don’t really know what “transparency” I’m asking for. I don’t think many other people do, either. Researchers and public interest advocates around the world can agree that more transparency is better. But, aside from people with very particular areas of interest (like political advertising), almost no one has a clear wish list. What information is really important? What information is merely nice to have? What are the trade-offs involved?

That imprecision is about to become a problem, though it’s a good kind of problem to have. A moment of real political opportunity is at hand. Lawmakers in the US, Europe, and elsewhere are ready to make some form of transparency mandatory. Whatever specific legal requirements they create will have huge consequences. The data, content, or explanations they require platforms to produce will shape our future understanding of platform operations, and our ability to respond — as consumers, as advocates, or as democracies. Whatever disclosures the laws don’t require may never happen.

It’s easy to respond to this by saying “platforms should track all the possible data, we’ll see what’s useful later!” Some version of this approach might be justified for the very biggest “gatekeeper” or “systemically important” platforms. Of course, making Facebook or Google save all that data would be somewhat ironic, given the trouble they’ve landed in by storing similar not-clearly-needed data about their users in the past. (And the more detailed data we store about particular takedowns, the likelier it is to be personally identifiable.)

For any platform, though, we should recognize that the new practices required for transparency reporting come at a cost. That cost might include driving platforms to adopt simpler, blunter content rules in their Terms of Service. That would reduce their expenses in classifying or explaining decisions, but presumably lead to overly broad or overly narrow content prohibitions. It might raise the cost of adding “social features” like user comments enough that some online businesses, like retailers or news sites, just give up on them. That would reduce some forms of innovation, and eliminate useful information for Internet users. For small and midsized platforms, transparency obligations (like other expenses related to content moderation) might add yet another reason to give up on competing with today’s giants, and accept an acquisition offer from an incumbent that already has moderation and transparency tools. Highly prescriptive transparency obligations might also drive de facto standardization and homogeneity in platform rules, moderation practices, and features.

None of these costs provides a reason to give up on transparency — or even to greatly reduce our expectations. But all of them are reasons to be thoughtful about what we ask for. It would be helpful if we could better quantify these costs, or get a handle on what transparency reporting is easier and harder to do in practice.

I’ve made a (very in the weeds) list of operational questions about transparency reporting, to illustrate some issues that are likely to arise in practice. I think detailed examples like these are helpful in thinking through both which kinds of data matter most, and how much precision we need within particular categories. For example, I personally want to know with great precision how many government orders a platform received, how it responded, and whether any orders led to later judicial review. But to me it seems OK to allow some margin of error for platforms that don’t have standardized tracking and queuing tools, and that as a result might modestly mis-count TOS takedowns (either by absolute numbers or percent).

I’ll list that and some other recommendations below. But these “recommendations” are very tentative. I don’t know enough to have a really clear set of preferences yet. There are things I wish I could learn from technologists, activists, and researchers first. The venues where those conversations would ordinarily happen — and, importantly, where observers from very different backgrounds and perspectives could have compared the issues they see, and the data they most want — have been sadly reduced for the past year.

So here is my very preliminary list:

  • Transparency mandates should be flexible enough to accommodate widely varying platform practices and policies. Any de facto push toward standardization should be limited to the very most essential data.
  • The most important categories of data are probably the main ones listed in the DSA: number of takedowns, number of appeals, number of successful appeals. But as my list demonstrates, those all can become complicated in practice.
  • It’s worth taking the time to get legal transparency mandates right. That may mean delegating exact transparency rules to regulatory agencies in some countries, or conducting studies prior to lawmaking in others.
  • Once rules are set, lawmakers should be very reluctant to move the goalposts. If a platform (especially a smaller one) invests in rebuilding its content moderation tools to track certain categories of data, it should not have to overhaul those tools soon because of changed legal requirements.
  • We should insist on precise data in some cases, and tolerate more imprecision in others (based on the importance of the issue, platform capacity, etc.). And we should take the time to figure out which is which.
  • Numbers aren’t everything. Aggregate data in transparency reports ultimately just tell us what platforms themselves think is going on. To understand what mistakes they make, or what biases they may exhibit, independent researchers need to see the actual content involved in takedown decisions. (This in turn raises a slew of issues about storing potentially unlawful content, user privacy and data protection, and more.)
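
The headline categories from the list above (takedowns, appeals, successful appeals) can be illustrated with a toy aggregation over a hypothetical moderation log. The record format and field names here are invented for illustration; real platforms track decisions in far messier and more varied ways, which is part of why standardizing these counts is hard.

```python
# Hypothetical moderation-log format: one record per content decision.
log = [
    {"action": "takedown", "appealed": True,  "appeal_upheld": False},
    {"action": "takedown", "appealed": True,  "appeal_upheld": True},
    {"action": "takedown", "appealed": False, "appeal_upheld": None},
]

def transparency_summary(records):
    """Aggregate the headline DSA-style counts from raw decision records."""
    takedowns = [r for r in records if r["action"] == "takedown"]
    appeals = [r for r in takedowns if r["appealed"]]
    reversals = [r for r in appeals if r["appeal_upheld"]]
    return {
        "takedowns": len(takedowns),
        "appeals": len(appeals),
        "successful_appeals": len(reversals),
    }
```

Even this trivial sketch surfaces the judgment calls the post describes: whether a takedown that was partially restored counts as a “successful appeal,” for instance, is a definitional choice the code has to make one way or the other.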

It’s time to prioritize. Researchers and civil society should assume we are operating with a limited transparency “budget,” which we must spend wisely — asking for the information we can best put to use, and factoring in the cost. We need better understanding of both research needs and platform capabilities to do this cost-benefit analysis well. I hope that the window of political opportunity does not close before we manage to do that.

Daphne Keller

Director of the Program on Platform Regulation

In a new blog post, Daphne Keller, Director of the Program on Platform Regulation at the Cyber Policy Center, looks at the need for transparency when it comes to content moderation and asks, what kind of transparency do we really want?

-

End-to-end encrypted (E2EE) communications have been around for decades, but the deployment of default E2EE on billion-user platforms has new impacts for user privacy and safety. The deployment benefits both individuals and society, but it also creates new risks, as long-existing patterns of messenger abuse can now flourish in an environment that automated or human review cannot reach. New E2EE products raise the prospect of less understood risks by adding discoverability to encrypted platforms, allowing contact from strangers and increasing the risk of certain types of abuse. This workshop will focus on platform benefits and risks that affect civil society organizations, particularly in the Global South. Through a series of workshops and policy papers, the Stanford Internet Observatory is facilitating open and productive dialogue on this contentious topic to find common ground.

An important defining principle behind this workshop series is the explicit assumption that E2EE is here to stay. To that end, our workshops have set aside any discussion of exceptional-access (a.k.a. “backdoor”) designs. That debate has raged among industry, academic cryptographers, and law enforcement for decades, and little progress has been made. We focus instead on interventions, less widely explored or implemented, that can reduce the harm of E2EE communication products.

Submissions for working papers and requests to attend will be accepted up to 10 days before the event. Accepted submitters will be invited to present or attend our upcoming workshops. 

SUBMIT HERE

Webinar

Workshops
-

Please note: the start time for this event has been moved from 3:00 to 3:15pm.

Join FSI Director Michael McFaul in conversation with Richard Stengel, Under Secretary of State for Public Diplomacy and Public Affairs. They will address the role of entrepreneurship in creating stable, prosperous societies around the world.

Richard Stengel, Under Secretary of State for Public Diplomacy and Public Affairs, Special Guest, United States Department of State

Encina Hall
616 Jane Stanford Way
Stanford, CA 94305-6055

Director, Freeman Spogli Institute for International Studies
Ken Olivier and Angela Nomellini Professor of International Studies, Department of Political Science
Peter and Helen Bing Senior Fellow, Hoover Institution

Michael McFaul is Director at the Freeman Spogli Institute for International Studies, the Ken Olivier and Angela Nomellini Professor of International Studies in the Department of Political Science, and the Peter and Helen Bing Senior Fellow at the Hoover Institution. He joined the Stanford faculty in 1995.

Dr. McFaul is also an International Affairs Analyst for NBC News and a columnist for The Washington Post. He served for five years in the Obama administration, first as Special Assistant to the President and Senior Director for Russian and Eurasian Affairs at the National Security Council at the White House (2009-2012), and then as U.S. Ambassador to the Russian Federation (2012-2014).

He has authored several books, most recently the New York Times bestseller From Cold War to Hot Peace: An American Ambassador in Putin’s Russia. Earlier books include Advancing Democracy Abroad: Why We Should, How We Can; Transitions To Democracy: A Comparative Perspective (eds. with Kathryn Stoner); Power and Purpose: American Policy toward Russia after the Cold War (with James Goldgeier); and Russia’s Unfinished Revolution: Political Change from Gorbachev to Putin.

His current research interests include American foreign policy, great power relations, and the relationship between democracy and development. Dr. McFaul was born and raised in Montana. He received his B.A. in International Relations and Slavic Languages and his M.A. in Soviet and East European Studies from Stanford University in 1986. As a Rhodes Scholar, he completed his D. Phil. in International Relations at Oxford University in 1991. He is currently writing a book on great power relations in the 21st century.

 

 

Steve Fyffe

The United States has a growing inventory of spent nuclear fuel from commercial power plants that continues to accumulate at reactor sites around the country.

In addition, the legacy waste from U.S. defense programs remains at Department of Energy sites around the country, mainly at Hanford, WA, Savannah River, SC, and at Idaho National Laboratory.

But now the U.S. nuclear waste storage program is “frozen in place,” according to Rod Ewing, the Frank Stanton Professor in Nuclear Security at Stanford’s Center for International Security and Cooperation.

“The processing and handling of waste is slow to stopped and in this environment the pressure has become very great to do something.”

Currently, more than seventy thousand metric tons of spent nuclear fuel from civilian reactors is sitting in temporary aboveground storage facilities spread across 35 states, with many of the reactors that produced it shut down.  And U.S. taxpayers are paying the utilities billions of dollars to keep it there.

Meanwhile, the deep geologic repository where all that waste was supposed to go, at Yucca Mountain, Nevada, is now permanently on hold, after strong resistance from Nevada residents and politicians led by U.S. Senator Harry Reid.

The Waste Isolation Pilot Plant in Carlsbad, New Mexico, the world’s first geologic repository for transuranic waste, has been closed for over a year due to a release of radioactivity.

And other parts of the system, such as the vitrification plant at Hanford and the mixed oxide fuel plant at Savannah River, SC, are far behind schedule and over budget.

It’s a growing problem that’s unlikely to change this political season.

“The chances of dealing with it in the current Congress are pretty much nil, in my view,” said former U.S. Senator Jeff Bingaman (D-NM).

“We’re not going to see a solution to this problem this year or next year.”

The issue in Congress is generally divided along party lines: Republicans want to move forward with the original plan to build a repository at Yucca Mountain, while Democrats support the recommendations of the Blue Ribbon Commission on America’s Nuclear Future to create a new organization to manage nuclear waste in the U.S. and to begin looking for a new repository location through an inclusive, consent-based process.

“One of the big worries that I have with momentum loss is loss of nuclear competency,” said David Clark, a Fellow at the Los Alamos National Laboratory.

“So we have a whole set of workers who have been trained, and have been working on these programs for a number of years. When you put a program on hold, people go find something else to do.”

Meanwhile, other countries are moving ahead with plans for their own repositories, with Finland and Sweden leading the pack, leaving the U.S. lagging behind.

So Ewing decided to convene a series of high-level conferences where leading academics and nuclear experts from around the world can discuss the issues in a respectful environment with a diverse range of stakeholders, including former politicians and policy makers, scientists, and representatives of Indian tribes and other affected communities.

“For many of these people and many of these constituencies, I’ve seen them argue at length, and it’s usually in a situation where a lot seems to be at stake and it’s very adversarial,” said Ewing.

“So by having the meeting at Stanford, we’ve all taken a deep breath, the program is frozen in place, nothing’s going to go anywhere tomorrow, we have the opportunity to sit and discuss things. And I think that may help.”

Former Senator Bingaman said he hoped the multidisciplinary meetings, known as the “Reset of Nuclear Waste Management Strategy and Policy” series, would help spur progress on this pressing problem.

“There is a high level of frustration by people who are trying to find a solution to this problem of nuclear waste, and there’s no question that the actions that we’ve taken thus far have not gotten us very far,” Bingaman said.

“I think that’s why this conference that is occurring is a good thing, trying to think through what are the problems that got us into the mess we’re in, and how do we avoid them in the future.”

The latest conference, held earlier this month, considered the question of how to structure a new nuclear waste management organization in the U.S.

Speakers from Sweden, Canada and France brought an international perspective and provided lessons learned from their countries’ nuclear waste storage programs.

“The other…major programs, France, Switzerland, United Kingdom, Canada, they all reached a crisis point, not too different from our own,” said Ewing.

“And at this crisis point they had to reevaluate how they would go forward. They each chose a slightly different path, but having thought about it, and having selected a new path, one can also observe that their programs are moving forward.”

France has chosen to adopt a closed nuclear cycle to recycle spent fuel and reuse it to generate more electricity.

“It means that the amount of waste that we have to dispose of is only four percent of the total volume of spent nuclear fuel which comes out of the reactor,” said Christophe Poinssot of the French Atomic and Alternative Energy Commission.

“We also reduce the toxicity because…we are removing the plutonium. And finally, we are conditioning the final waste under the form of nuclear glass, the lifetime of which is very long, in the range of a million years in repository conditions.”

Clark said that Stanford was the perfect place to convene a multidisciplinary group of thought leaders in the field who could have a real impact on the future of nuclear waste storage policy.

“The beauty of a conference like this, and holding it at a place like Stanford University and CISAC, is that all the right people are here,” he said.

“All the people who are here have the ability to influence, through some level of authority and scholarship, and they’ll be able to take the ideas that they’ve heard back to their different offices and different organizations.  I think it will make a difference, and I’m really happy to be part of it.”

Ewing said it was also important to include students in the conversation.

“There’s a next generation of researchers coming online, and I want to save them the time that it took me to realize what the problems are,” Ewing said.

“By mixing students into this meeting, letting them interact with all the parties, including the distinguished scientists and engineers, I’m hoping it speeds up the process.”

Ewing is already planning his next conference for next March, which will focus on the consent-based process that will be used to identify a new location within the U.S. for a repository.

Russ Feingold, the former U.S. senator perhaps best known for pushing campaign finance reform, will spend the spring quarter at Stanford lecturing and teaching.

Feingold will be the Payne Distinguished Lecturer and will be in residence at the Freeman Spogli Institute for International Studies while teaching and mentoring graduate students in the Ford Dorsey Program in International Policy Studies and the Stanford Law School.

Feingold recently served as the State Department’s special envoy to the Great Lakes Region of Africa and the Democratic Republic of Congo. He will bring his knowledge of, and longstanding interest in, one of the most challenging yet promising places in Africa to campus with the cross-listed IPS and Law School course, “The Great Lakes Region of Africa and American Foreign Relations: Policy and Legal Implications of the Post-1994 Era.”

Feingold, a Wisconsin Democrat who served three terms in the Senate between 1993 and 2011, co-sponsored the Bipartisan Campaign Reform Act of 2002. Better known as the McCain-Feingold Act, the legislation regulated the roles of soft money contributions and issue ads in national elections.

-
Atheendar Venkataramani is an Assistant Professor in the Department of Medical Ethics and Health Policy and a staff physician at the Penn Presbyterian Medical Center. Dr. Venkataramani is a health economist who studies the life-course origins of health and socioeconomic inequality. His research, which combines insights from economics, epidemiology, and clinical medicine, spans both domestic and international settings.

After registering, you will receive a confirmation email containing information about joining the meeting.

Registration

Hybrid Seminar: Lunch will be provided for on-campus participants.
Please register if you plan to attend, both for in-person and via Zoom.

Log in on your computer, or join us in person:
Encina Commons, Room 119
615 Crothers Way
Stanford, CA 94305

-
Natalia Serna is a Ph.D. candidate in Economics at the University of Wisconsin-Madison. Her main research interests lie at the intersection of industrial organization and health economics. In her latest paper, she studies the impact of risk selection on the breadth of hospital networks. Her future research agenda will further examine cost-sharing, consumer inertia, and market power in health insurance, as well as government regulation of health service and medication prices.

After registering, you will receive a confirmation email containing information about joining the meeting.

Registration

Hybrid Seminar: Lunch will be provided for on-campus participants.
Please register if you plan to attend, both for in-person and via Zoom.

Log in on your computer, or join us in person:
Encina Commons, Room 119
615 Crothers Way
Stanford, CA 94305

-
Dr. Natalia Kunst is a decision sciences and health economics researcher. She focuses on applying decision-analytic and statistical methods in cancer, genetics, and precision medicine to identify efficient strategies for improving patients’ health outcomes, and on designing and prioritizing clinical research in limited-resource settings, with particular attention to health disparities. Dr. Kunst is a Senior Advisor at the Norwegian Directorate of Health. She also teaches and conducts research as an Associate Professor in the Department of Health Management and Health Economics at the University of Oslo, Norway.


After registering, you will receive a confirmation email containing information about joining the meeting.

Registration

Hybrid Seminar: Lunch will be provided for on-campus participants.
Please register if you plan to attend, both for in-person and via Zoom.

Log in on your computer, or join us in person:
Encina Commons, Room 119
615 Crothers Way
Stanford, CA 94305
