Science and Technology
Paragraphs

Europe’s non-coercive form of global influence over technology governance faces new challenges and opportunities in the era of artificial intelligence regulation. As the United States and China pursue divergent models of competition and control, Europe must evolve from exporting regulation to exercising genuine governance. The challenge is to transform regulatory strength into strategic capability while balancing human rights, innovation, and digital sovereignty. By advancing a new Brussels Agenda grounded in values, institutional coherence, and multi-stakeholder collaboration, Europe can reaffirm its global role, demonstrating that ethical governance and technological ambition need not be opposing forces in the age of intelligent systems.

ABOUT THE VOLUME

Designing Europe’s Future: AI as a Force of Good

AI is not just a technological tool; it is a transformative force that can make our societies more prosperous, sustainable, and free – if we dare to embrace it.

Publication Type
Book Chapters
Publication Date
Subtitle

Essay within "Designing Europe’s Future: AI as a Force of Good," published by the European Liberal Forum (ELF), edited by Francesco Cappelletti, Maartje Schulz, and Eloi Borgne.

Journal Publisher
European Liberal Forum (ELF)
Authors
Charles Mok
Authors
Melissa Morgan
News Type
News
Date
Paragraphs

The story of Silicon Valley is one of perpetual reinvention and innovation. During the Cold War, farmland that had once grown produce was transformed into research facilities where major breakthroughs in aerospace, defense, and data processing were made. With support from the U.S. government, technologies like GPS, Google, and Siri would grow.

This ecosystem of innovation continues to evolve today. While public sector programs continue to lead in areas such as nuclear weapons research and classified defense technologies, private companies and startups are increasingly outpacing government labs in critical technology areas such as artificial intelligence, cloud computing, energy systems, and space launch. 

With so much economic, defense, and societal potential built into these technologies, creating effective partnerships between private companies and government is more important than ever.

In “Silicon Valley & The U.S. Government,” Stanford students, and now the public, have a front row seat to hear how these collaborations took root. First launched by Ernestine Fu Mak in 2016 as small, closed-door sessions, the series has expanded into a class where students and the public alike can hear directly from technology experts, business executives, and public service leaders about the past, present, and future of how their industries overlap.

“When national missions generated in Washington meet the ingenuity and drive resident in our nation’s premier hub of innovation, world changing technological breakthroughs follow,” says Joe Felter, a lecturer and director of the Gordian Knot Center for National Security Innovation, which is based at the Freeman Spogli Institute for International Studies. “The Silicon Valley & The U.S. Government series exposes students in real time to how this partnership and collaboration continues to help us meet national security and other critical emerging challenges.”

The course is offered through the Civil & Environmental Engineering Department and Ford Dorsey Master’s in International Policy program, and co-led by Mak, Steve Blank, Joe Felter, and Eric Volmar, with ongoing support from Steve Bowsher. All of the seminars are available via the playlist below, with more being released throughout fall quarter.

Mak, who is co-director of Stanford Frontier Technology Lab and an investor in national security startups at Brave Capital, explains the importance of fostering these kinds of connections and bringing students into the conversation.

“The future of national security depends on collaboration, and this seminar is our effort to help forge those connections,” she says. “It’s been exciting to watch it evolve—and continue to grow—into a platform that bridges communities that rarely share the same room: students, technologists, policymakers, investors, and public-sector innovators.”

In its early years, the series featured government leaders like former Secretary of Defense Bill Perry, founders of pioneering companies in satellite imagery and robotics, and leaders from organizations such as the Department of Energy’s ARPA-E. More recently, CEOs like Hidden Level's Jeff Cole, whose company develops stealth and radar technology, and Baiju Bhatt of Aetherflux, a space solar power venture, have joined the discussion series.

Strengthening this flow of expertise between government and innovation hubs like Silicon Valley is key to the success of both sectors, and today’s students will be tomorrow’s leaders and policymakers driving those ventures, observes Eric Volmar, the teaching lead at the Gordian Knot Center.

“In modern entrepreneurship, every founder needs to be thinking about the policy aspects of their technologies. In modern government, every leader needs to be thinking about how emerging technologies affect national priorities,” says Volmar. “Tech and policy are fusing together, and our whole purpose is to prepare students for this new era.”

By giving students the opportunity to hear the personal accounts of innovators who have paved the way in addressing national issues and societal challenges through entrepreneurship, the co-leaders of “Silicon Valley & The U.S. Government” hope to encourage students to do the same.

“Students are looking to be inspired—to be mission-driven. Service to the country is one of those missions. Hearing how others have answered the call is what these seminars are all about," says Steve Blank, a lecturer and founding member of the Gordian Knot Center.

“Silicon Valley & The U.S. Government” meets once per week each fall and spring quarter. It can be found in the Stanford Courses catalogue as CEE 252, and is cross-listed for students in the Ford Dorsey Master’s in International Policy program as INTLPOL 300V. Recent sessions of the course are posted online every two weeks.

Read More

Students from Gordian Knot Center classes at the White House with NSC Senior Director for Technology and National Security Tarun Chhabra in Washington D.C.
News

AI-augmented Class Tackles National Security Challenges of the Future

In classes taught through the Freeman Spogli Institute’s Gordian Knot Center, artificial intelligence is taking a front and center role in helping students find innovative solutions to global policy issues.
Amy Zegart
News

Studying the secret world of spycraft

Amy Zegart has devoted her career to understanding national security challenges and emerging threats in the digital age.
Hero Image
Session leaders Ernestine Fu Mak (far left) and Steve Bowsher (far right) speaking with panelists during the "Silicon Valley & The U.S. Government" speaker series.
Subtitle

Recordings of the course “Silicon Valley & The U.S. Government,” co-led by instructors from FSI’s Gordian Knot Center for National Security Innovation and the Civil & Environmental Engineering Department, are available online for free.

Date Label
News Type
News
Date
Paragraphs

In an exciting development, the Industry-Wide Deliberative Forum convened by Stanford University’s Deliberative Democracy Lab is announcing the addition of two new companies — DoorDash and Microsoft — to the group of technology companies Cohere, Meta, Oracle, and PayPal, advised by the Collective Intelligence Project, in a collaborative effort to engage the public in shaping the future of AI agents.

There is a gap between the development of technology, particularly AI, and the public's understanding of these advancements. This Forum answers the question: what if the public were not just passive users of these technologies, but active participants in shaping their progress? This growing group of technology companies is excited to engage in a collaborative approach to consulting the public on these complex issues.

The inclusion of DoorDash and Microsoft speaks to the importance of this Forum and of engaging the public on the future of AI agents. "We believe the future of AI agents must be shaped thoughtfully, with meaningful public input. This forum provides an important platform to elevate diverse voices and guide the responsible development of AI that all businesses can benefit from,” said Chris Roberts, Director of Community Policy and Safety at DoorDash.

“We’re proud to support and participate in this effort.”

The Industry-Wide Deliberative Forum is set to take place in Fall 2025 and will be conducted on the AI-assisted Stanford Online Deliberation Platform. This Forum is rooted in deliberation, which involves bringing together representative samples of the public, presenting them with options and their associated tradeoffs, and encouraging them to reflect on both this education and their personal experiences. Research has shown that deliberative methods yield more thoughtful feedback for decision-makers, as individuals must consider the complexities of the issues at hand rather than offering top-of-mind reactions.
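To make those mechanics concrete, the sketch below shows one way the briefing materials (options with their associated tradeoffs) and participants' pre- and post-deliberation ratings could be represented. The option text, field names, and rating scale are illustrative assumptions, not the Forum's actual materials.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyOption:
    """One option from the briefing materials, paired with its tradeoffs."""
    title: str
    arguments_for: List[str] = field(default_factory=list)
    arguments_against: List[str] = field(default_factory=list)

@dataclass
class ParticipantResponse:
    """A participant's support for an option on a 0-10 scale, before and after deliberating."""
    participant_id: str
    option_title: str
    pre_rating: int
    post_rating: int

    @property
    def change(self) -> int:
        return self.post_rating - self.pre_rating

# Illustrative example: one briefing option and one participant's ratings.
disclosure = PolicyOption(
    title="AI agents should always disclose that they are not human",
    arguments_for=["builds user trust", "reduces the risk of deception"],
    arguments_against=["may interrupt otherwise seamless experiences"],
)
response = ParticipantResponse("p-001", disclosure.title, pre_rating=4, post_rating=7)
print(response.change)  # 3 -- this participant's support rose after weighing the tradeoffs
```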

“Microsoft is excited to join this cross-industry collaborative effort to better understand public perspectives on how to build the next generation of trustworthy AI systems,” said Amanda Craig, Senior Director of Public Policy, Office of Responsible AI, Microsoft.

The collaboration encourages thoughtful feedback rather than reactive opinions, ensuring that the public’s perspective is both informed and actionable. “Welcoming DoorDash and Microsoft to our collaborative table is an excellent opportunity to broaden the impact of our work,” said James Fishkin, Director of Stanford’s Deliberative Democracy Lab. “This expansion embodies a shared commitment to collectively shaping our future with AI through public consultations that are both representative and thoughtful.”

Media Contact: Alice Siu, Stanford Deliberative Democracy Lab

Read More

Agentic AI Workflow Automation, Artificial intelligence AI driven decision-making concept illustration blue background
News

Deliberative Democracy and the Ethical Challenges of Generative AI

CDDRL Research-in-Brief [4-minute read]
America in One Room: Pennsylvania
News

Pennsylvania Voters Bridge Deep Political Divides, Reduce Polarization in Groundbreaking Deliberative Polling® Event

America in One Room: Pennsylvania brings together a representative sample of registered Pennsylvania voters for a statewide Deliberative Poll in this crucial swing state, revealing surprising common ground and public opinion shifts on issues from immigration to healthcare to democratic reform.
Futuristic 3D Render
News

Industry-Wide Deliberative Forum Invites Public to Weigh In on the Future of AI Agents

There is a significant gap between the technologies being developed — especially AI — and the public's understanding of them. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?
Hero Image
Close-up of a computer chip labeled ‘AI Artificial Intelligence,’ embedded in a circuit board with gold connectors and electronic components. BoliviaInteligente via Unsplash
Subtitle

The inclusion of these companies in the Industry-Wide Deliberative Forum, convened by Stanford University’s Deliberative Democracy Lab, speaks to its importance and the need to engage the public on the future of AI agents.

Date Label
CDDRL Honors Student, 2025-26
img_1259_3_-_emma_wang.jpg

Major: Political Science
Hometown: Naperville, Illinois
Thesis Advisor: Jonathan Rodden

Tentative Thesis Title: Broadband for All: Historical Lessons and International Models for U.S. Internet Policy

Future aspirations post-Stanford: After completing my master's in computer science, I hope to go to law school and work in technology law.

A fun fact about yourself: I started lion dancing when I came to college!

Date Label

Encina Hall, C151
616 Jane Stanford Way
Stanford, CA 94305-6055

Associate Professor, Josef Korbel School of International Studies at the University of Denver
CDDRL Visiting Scholar, 2025-26
20250506-kaplano-487_-_oliver_kaplan.jpg

Oliver Kaplan is an Associate Professor at the Josef Korbel School of International Studies at the University of Denver. He is the author of the book, Resisting War: How Communities Protect Themselves (Cambridge University Press, 2017), which examines how civilian communities organize to protect themselves from wartime violence. He is a co-editor and contributor to the book, Speaking Science to Power: Responsible Researchers and Policymaking (Oxford University Press, 2024). Kaplan has also published articles on the conflict-related effects of land reforms and ex-combatant reintegration and recidivism. As part of his research, Kaplan has conducted fieldwork in Colombia and the Philippines.

Kaplan was a Jennings Randolph Senior Fellow at the U.S. Institute of Peace and previously a postdoctoral Research Associate at Princeton University and at Stanford University. His research has been funded by the Carnegie Corporation of New York, the International Committee of the Red Cross, the Smith Richardson Foundation, and other grants. His work has been published in The Journal of Conflict Resolution, Journal of Peace Research, Conflict Management and Peace Science, Stability, The New York Times, Foreign Affairs, Foreign Policy, CNN, and National Interest.

At the University of Denver, Kaplan is Director of the Korbel Asylum Project (KAP). He has taught M.A.-level courses on Human Rights and Foreign Policy, Peacebuilding in Civil Wars, Civilian Protection, and Human Rights Research Methods, and PhD-level courses on Social Science Research Methods. Kaplan received his Ph.D. in political science from Stanford University and completed his B.A. at UC San Diego.

Date Label
News Type
News
Date
Paragraphs

There is a significant gap between the technologies being developed — especially AI — and the public's understanding of them. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?

A group of technology companies convened by Stanford University’s Deliberative Democracy Lab will gather public feedback about complex questions the AI industry is considering while developing AI agents. This convening includes Cohere, Meta, Oracle, and PayPal, advised by the Collective Intelligence Project.

This Industry-Wide Forum brings together everyday people to weigh in on tech policy and product development decisions where there are difficult tradeoffs with no simple answers. With technology developing so quickly, there is no better time than now to learn what an informed public would like AI technologies to do for them. This Forum is designed based on Stanford's method of Deliberative Polling, a governance innovation that gives the public’s voice a greater say in decision-making. This Forum will take place in Fall 2025. Findings from this Forum will be made public, and Stanford’s Deliberative Democracy Lab will hold webinars for the public to learn and inquire about the findings.

"We're proud to be a founding participant in this initiative alongside Stanford and other AI leaders," said Saurabh Baji, CTO of Cohere. "This collaborative approach is central to enhancing trust in agentic AI and paving the way for strengthened cross-industry standards for this technology. We're looking forward to working together to shape the future of how agents serve enterprises and people."

In the near term, AI agents will be expected to conduct a myriad of transactions on behalf of users, opening up considerable opportunities to deliver value as well as significant risks. This Forum will improve product-market fit by giving companies foresight into what users want from AI agents; it will help build trust and legitimacy with users; and it will strengthen cross-industry relations in support of industry standards development over time.

"We support The Forum for its deliberative and collaborative approach to shaping public discourse around AI agents," said Prakhar Mehrotra, SVP of AI at PayPal. "Responsibility and trust are core business principles for PayPal, and through collaborative efforts like these, we seek to encourage valuable perspectives that can help shape the future of agentic commerce."

The Forum will be conducted on the AI-assisted Stanford Online Deliberation Platform, a collaboration between Stanford’s Deliberative Democracy Lab and Crowdsourced Democracy Team, where a cross-section of the public will deliberate in small groups and share their perspectives, their lived experiences, and their expectations for AI products. This deliberation platform has hosted Meta’s Community Forums over the past few years. The Forum will also incorporate insights from CIP's Global Dialogues, conducted on the Remesh platform.
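As a rough illustration of one step in that process — organizing a cross-section of the public into small discussion groups — the sketch below randomly assigns a participant pool into groups of roughly ten. The group size, function name, and identifiers are assumptions for illustration, not details of the Stanford Online Deliberation Platform.

```python
import random
from typing import Dict, List

def assign_small_groups(participant_ids: List[str],
                        group_size: int = 10,
                        seed: int = 42) -> Dict[int, List[str]]:
    """Shuffle participants and split them into small deliberation groups."""
    rng = random.Random(seed)          # fixed seed so the assignment is reproducible
    shuffled = participant_ids[:]
    rng.shuffle(shuffled)
    return {i // group_size: shuffled[i:i + group_size]
            for i in range(0, len(shuffled), group_size)}

# Example: 95 recruited participants become ten groups (the last one smaller).
groups = assign_small_groups([f"participant-{n}" for n in range(95)])
print(len(groups), [len(g) for g in groups.values()])
```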

“Community Forums provide us with people’s considered feedback, which helps inform how we innovate,” said Rob Sherman, Meta’s Vice President, AI Policy & Deputy Chief Privacy Officer. “We look forward to the insights from this cross-industry partnership, which will provide a deeper understanding of people’s views on cutting-edge technology.”

This methodology is rooted in deliberation, which provides representative samples of the public with baseline education on a topic, including options with associated tradeoffs, and asks them to reflect on that education as well as their lived experience. Deliberative methods have been found to offer more considered feedback to decision-makers because people have to weigh the complexity of an issue rather than offering a knee-jerk reaction.

"This industry-wide deliberative forum represents a crucial step in democratizing the discourse around AI agents, ensuring that the public's voice is heard in a representative and thoughtful way as we collectively shape the future of this transformative technology," said James Fishkin, Director of Stanford's Deliberative Democracy Lab.

This Industry-Wide Forum represents a pivotal step in responsible AI development, bringing together technology companies and the public to address complex challenges in AI agent creation. By leveraging Stanford's Deliberative Polling methodology and making findings publicly available, the initiative promises to shape the future of AI with enhanced transparency, trust, and user-centric focus. Find out more about Stanford’s Deliberative Democracy Lab at deliberation.stanford.edu.

Media Contact: Alice Siu, Stanford Deliberative Democracy Lab

Read More

Back view of crop anonymous female talking to a chatbot of computer while sitting at home
News

Meta and Stanford’s Deliberative Democracy Lab Release Results from Second Community Forum on Generative AI

Participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’
Chatbot powered by AI. Transforming Industries and customer service. Yellow chatbot icon over smart phone in action. Modern 3D render
News

Navigating the Future of AI: Insights from the Second Meta Community Forum

A multinational Deliberative Poll unveils the global public's nuanced views on AI chatbots and their integration into society.
Collage of modern adults using smart phones in city with wifi signals
News

Results of First Global Deliberative Poll® Announced by Stanford’s Deliberative Democracy Lab

More than 6,300 deliberators from 32 countries and nine regions around the world participated in the Metaverse Community Forum on Bullying and Harassment.
Hero Image
Futuristic 3D Render Steve Johnson via Unsplash
Subtitle

There is a significant gap between the technologies being developed — especially AI — and the public's understanding of them. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?

Date Label
Authors
News Type
News
Date
Paragraphs

Ever since the public release of ChatGPT in the fall of 2022, classrooms everywhere from grade schools to universities have started to adapt to a new reality of AI-augmented education. 

As with any new technology, the integration of AI into teaching practices has come with plenty of questions: Will this help or hurt learning outcomes? Are we grading students or an algorithm? And, perhaps most fundamentally: To allow, or not to allow AI in the classroom? That is the question keeping many teachers up at night.

For the instructors of “Technology, Innovation, and Great Power Competition,” a class created and taught by Stanford faculty and staff at the Gordian Knot Center for National Security Innovation (GKC), the answer to that question was obvious. Not only did they allow students to use AI in their coursework, they required it.
 

Leveraging AI for Policy Analysis


Taught by Steve Blank, Joe Felter, and Eric Volmar of the Gordian Knot Center, the class was a natural forum to discuss how emerging technologies will affect relations between the world’s most powerful countries.

Volmar, who returned to Stanford after serving in the U.S. Department of Defense, explains the logic behind requiring the use of AI:

“As we were designing this curriculum, we started from an acknowledgement that the world has changed. The AI models we see now are the worst they’re ever going to be. Everything is going to get better and become more and more integrated into our lives. So why not use every tool at our disposal to prepare students for that?”

For students used to restrictions or outright bans on using AI to complete coursework, being graded on using AI took some getting used to.

“This was the first class that I’ve had where using AI was mandatory,” said Jackson Painter, an MA student in Management Science and Engineering. “I've had classes where AI was allowed, but you had to cite or explain exactly how you used it. But being expected to use AI every week as part of the assignments was something new and pretty surprising.” 

Dr. Eric Volmar teaching the new Stanford Gordian Knot Center course Entrepreneurship Inside Government.

Assigned to teams of three or four, students were given an area of strategic competition to focus on for the duration of the class, such as computing power, semiconductors, AI/machine learning, autonomy, space, and cybersecurity. In addition to readings, each group was required to conduct interviews with key stakeholders, with the end goal of producing a memo outlining specific policy-relevant insights about their area of focus.

But the final project was only part of the grade. The instructors also evaluated each group based on how they had used AI to form their analysis, organize information, and generate insights.

“This is not about replacing true expertise in policymaking, but it’s changing the nature of how you do it,” Volmar emphasized.
 

Expanding Students’ Capabilities


For the students, finding a balance between familiar habits and using a novel technology took some practice. 

“Before being in this class, I barely used ChatGPT. I was definitely someone who preferred writing in my own style,” said Helen Philips, an MA student in International Policy and course assistant for the class.

“This completely expanded my understanding of what is possible with AI,” Philips continued. “It really opened up my mind to how beneficial AI can be for a broad spectrum of work products.”

After some initial coaching on how to develop effective prompts for the AI tools, students started iterating on their own. Using the models to summarize and synthesize large volumes of content was a first step. Then groups started getting creative. Some used AI to create maps of the many stakeholders involved in their project, then identified areas of overlap and connection between key players. Others used the tools to create simulated interviews with experts, then used the results to better prepare for actual interviews.
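For instance, a stakeholder map of the kind the students describe can be prototyped as a small graph in which shared connections surface areas of overlap between key players. The sketch below uses the networkx library; the stakeholder names and relationships are purely hypothetical.

```python
import networkx as nx

# Hypothetical stakeholders and relationships for one area of strategic competition.
G = nx.Graph()
G.add_edges_from([
    ("DoD program office", "Prime contractor"),
    ("DoD program office", "University lab"),
    ("Venture-backed startup", "University lab"),
    ("Venture-backed startup", "Prime contractor"),
    ("Congressional committee", "DoD program office"),
])

# Stakeholders connected to both key players -- candidate points of overlap to explore in interviews.
overlap = list(nx.common_neighbors(G, "DoD program office", "Venture-backed startup"))
print(overlap)  # e.g. ['Prime contractor', 'University lab']
```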
 


This is a new type of policy work. It's not replacing expertise, but it's changing the nature of how you access it. These tools increase the depth and breadth students can take in. It's an extraordinary thing.
Eric Volmar
GKC Associate Director


For Jackson Painter, the class provided valuable practice combining more traditional techniques for developing policy with new technology.

“I really came to see how irreplaceable the interviewing process is and the value of talking to actual people,” said Jackson. “People know the little nuances that the AI misses. But then when you can combine those nuances with all the information the AI can synthesize, that’s where it has its greatest value. It’s about augmenting, not replacing, your work.”

That kind of synthesis is what the course instructors hope students take away from the class. The aim, explained Volmar, is that they will put it into practice as future leaders facing complex challenges that touch multiple sectors of government, security, and society.

“This is a new type of policy work,” he said. “It's accelerated, and it increases the depth and breadth students can take in. They can move across many different areas and combine technical research with Senate and House Floor hearings. They can take something from Silicon Valley and combine it with something from Washington. It's an extraordinary thing.”

Real-time Innovation


For instructors Blank, Felter, and Volmar, classes like “Technology, Innovation, and Great Power Competition” — or sister classes like the highly popular “Hacking for Defense,” and the recently launched “Entrepreneurship Inside Government” — are an integral part of preparing students to navigate ever more complex technological and policy landscapes.

“We want America to continue to be a force for good in the world. And we're going to need to be competitive across all these domains to do that. And to be competitive, we have to bring our A-game and empower creative thinking as much as possible. If we don't take advantage of these technologies, we’re going to lose that advantage,” Felter stressed.

Applying real-time innovation to the challenges of national security and defense is the driving force behind the Gordian Knot Center. Founded in fall of 2021 by Joe Felter and Steve Blank with support from principal investigators Michael McFaul and Riitta Katila, the center brings together Stanford's cutting-edge resources, Silicon Valley's dynamic innovation ecosystem, and a network of national security experts to prepare the next generation of leaders.

To achieve that, Blank leveraged his background as a successful entrepreneur and creator of the lean startup movement, a methodology for launching companies that emphasizes experimentation, customer feedback, and iterative design over more traditional methods based on complex planning, intuition, and “big design up front” development.

“When I first taught at Stanford in 2011, I observed that the teaching being done about how to write a business plan in capstone entrepreneurship classes didn’t match the hands-on chaos of an actual startup. There were no entrepreneurship classes that combined experiential learning with methodology. But the goal was to teach both theory and practice.”
 


What we’re seeing in these classes are students who may not have otherwise thought they have a place at the table of national security. That's what we want, because the best future policymakers will understand how to leverage diverse skills and tools to meet challenges.
Joe Felter
GKC Center Director


That goal of combining theory and practice is a throughline that continues in today’s Gordian Knot Center. After the success of Blank’s entrepreneurship classes, he — alongside Pete Newell of BMNT and Joe Felter, a veteran, former senior Department of Defense official, and the current center director of the GKC — turned the principles of entrepreneurship and iteration toward government.

“We realized that university students had little connection or exposure to the problems that government was trying to solve, or the larger issues civil society was grappling with,” says Blank. “But with the right framework, students could learn directly about the nation's threats and security challenges, while innovators inside the government could see how students can rapidly iterate and deliver timely solutions to defense challenges.”

That thought led directly to the development of the “Hacking for Defense” class, now in its tenth year, and eventually to the organization of the Gordian Knot Center and its affiliate programs like the Stanford DEFCON Student Network. Based at the Freeman Spogli Institute for International Studies, the center today is a growing hub of students, veterans, alumni, industry experts, and government officials from a multiplicity of backgrounds and areas of expertise working across campus and across government to solve real problems and enact change.

Condoleezza Rice, Director of the Hoover Institution, speaking in Hacking for Defense.

Prepared for Diverse Challenges


In the classroom, the feedback cycle between real policy issues and iterative entrepreneurship remains central to the student experience. And it’s an approach that resonates with students.  

“I love the fact that we’re addressing real issues in real time,” says Nuri Capanoglu, a master’s student in Management Science and Engineering who took “Technology, Innovation, and Great Power Competition” in fall 2024.

He continues, “Being able to use ChatGPT in a class like this was like having a fifth teammate we could bounce ideas off, double check things, and assign to do complex literature reviews that wouldn't have been possible on our own. It's like we went from being a team of four to a team of fifty.”

Other students agree. Feedback on the class has praised the “fusion of practical hands-on learning and AI-enabled research” and deemed it a “must-take for anyone, regardless of background.”

Like many of his peers, Capanoglu is eager for more. “As I’ve been planning my future schedule, I’ve tried to find more classes like this,” he says.

For instructors like Felter and Volmar, they are equally ready to welcome more students into their courses.

“Policy is so complex now, and the stakes are so high,” acknowledged Felter. “But what we’re seeing in these classes is a passion for addressing real challenges from students who may not have otherwise thought they have a place at the table of national security or policy. That’s what we want. The best and brightest future policymakers are going to have diverse skill sets and understand how to leverage every possible tool and capability available to meet those challenges. So if you want to get involved and make a difference, come take a policy class.”

Read More

A collage of group photo from the capstone internship projects from the Ford Dorsey Master's in International Policy Class of 2025.
Blogs

Globe Trotting MIP Students Aim for Policy Impact

Students from the Ford Dorsey Master's in International Policy Class of 2025 visited organizations around the world to tackle pressing policy challenges such as human trafficking, cyber threats, disinformation, and more.
Students on team one present their project to the class
News

Stanford Students Pitch Solutions to U.S. National Security Challenges to Government Officials and Technology Experts

In the class “Technology, Innovation, and Great Power Competition,” students across disciplines work in teams and propose their detailed solutions to active stakeholders in the technology and national security sectors.
Deputy Secretary of Defense Kathleen Hicks and her team meet at the Hoover Institution with students and faculty from the Gordian Knot Center.
News

Deputy Secretary of Defense Kathleen Hicks Discusses Importance of Strategic Partnerships with Stanford Faculty and Students

A visit from the Department of Defense’s deputy secretary gave the Gordian Knot Center a prime opportunity to showcase how its faculty and students are working to build an innovative workforce that can help solve the nation’s most pressing national security challenges.
Hero Image
Technology, Innovation and Great Power Competition course teammates Nuri Capanoglu, Elena Kopstein, Mandy Alevra, and Jackson Painter with National Security Council Senior Director for Technology and National Security Tarun Chhabra in Washington, DC.
Subtitle

In classes taught through the Freeman Spogli Institute’s Gordian Knot Center, artificial intelligence is taking a front and center role in helping students find innovative solutions to global policy issues.

Date Label
Paragraphs

In an era marked by rapid technological advancements, increasing political polarization, and democratic backsliding, reimagining democracy requires innovative approaches that foster meaningful public engagement. Over the last 30 years, Deliberative Polling has proven to be a successful method of public consultation to enhance civic participation and informed decision-making. In recent years, the implementation of online Deliberative Polling using the AI-assisted Stanford Online Deliberation Platform, a groundbreaking automated platform designed to scale simultaneous and synchronous deliberation efforts to millions, has put deliberative societies within reach. By examining two compelling case studies—Foreign Policy by Canadians and the Metaverse Community Forum—this paper highlights how technology can empower diverse voices, facilitate constructive dialogue, and cultivate a more vibrant democratic process. This paper demonstrates that leveraging technology in deliberation not only enhances public discourse but also paves the way for a more inclusive and participatory democracy.
 

About "Deliberative Approaches to Inclusive Governance: An Essay Series Part of the Democratic Legitimacy for AI Initiative"


Democracy has undergone profound changes over the past decade, shaped by rapid technological, social, and political transformations. Across the globe, citizens are demanding more meaningful and sustained engagement in governance—especially around emerging technologies like artificial intelligence (AI), which increasingly shape the contours of public life.

From world-leading experts in deliberative democracy, civic technology, and AI governance, we introduce a seven-part essay series exploring how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance. The essays follow from a workshop on “Democratic Legitimacy for AI: Deliberative Approaches to Inclusive Governance” held in Vancouver in March 2025, in partnership with Simon Fraser University’s Morris J. Wosk Centre for Dialogue. The series and workshop were generously supported by funding from the Canadian Institute for Advanced Research (CIFAR), Mila, and Simon Fraser University’s Morris J. Wosk Centre for Dialogue.

Publication Type
Book Chapters
Publication Date
Subtitle

Part of "Deliberative Approaches to Inclusive Governance: An Essay Series Part of the Democratic Legitimacy for AI Initiative," produced by the Centre for Media, Technology and Democracy.

Authors
Alice Siu
Book Publisher
Centre for Media, Technology and Democracy
Paragraphs

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, the participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’ Since the last Community Forum, the development of Generative AI has moved beyond AI chatbots, and users have begun to explore the use of AI agents — a type of AI that can respond to written or verbal prompts by performing actions on your behalf. And beyond text-generating AI, users have begun to explore multimodal AI, where tools are able to generate images, videos, and audio as well. The growing landscape of Generative AI raises more questions about users’ preferences when it comes to interacting with AI agents. This Community Forum focused deliberations on how interactive and proactive AI agents should be when engaging with users. Participants considered a variety of tradeoffs regarding consent, transparency, and human-like behaviors of AI agents. These deliberations shed light on what users are thinking now amidst the changing technology landscape in Generative AI.

Nearly 900 participants from five countries — India, Nigeria, Saudi Arabia, South Africa, and Turkey — took part in this deliberative event. The samples for each of these countries were recruited independently, so this Community Forum should be seen as five independent deliberations. In addition, 1,033 people participated in a control group that did not take part in any deliberative discussions; the control group completed only the pre- and post-surveys. The main purpose of the control group is to demonstrate that any changes that occur after deliberation are a result of the deliberative event.
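Because the control group exists to isolate the effect of deliberation itself, a minimal analysis sketch might compare mean pre-to-post opinion change in each country's deliberation sample against the change observed in the control group. All numbers, country entries, and function names below are made up for illustration and are not the Forum's data.

```python
from statistics import mean
from typing import Dict, List, Tuple

Survey = Tuple[List[float], List[float]]  # (pre-survey ratings, post-survey ratings)

def mean_change(pre: List[float], post: List[float]) -> float:
    """Average pre-to-post shift on one survey item (e.g. a 0-10 agreement scale)."""
    return mean(b - a for a, b in zip(pre, post))

def deliberation_effect(samples: Dict[str, Survey], control: Survey) -> Dict[str, float]:
    """Per-country opinion change, net of the shift seen in the non-deliberating control group."""
    control_shift = mean_change(*control)
    return {country: mean_change(pre, post) - control_shift
            for country, (pre, post) in samples.items()}

# Illustrative numbers only.
samples = {"India": ([5, 6, 4], [7, 8, 6]), "Nigeria": ([3, 5, 5], [4, 7, 6])}
control = ([5, 4, 6], [5, 5, 6])
print(deliberation_effect(samples, control))  # approximately {'India': 1.67, 'Nigeria': 1.0}
```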

Publication Type
Reports
Publication Date
Subtitle

April 2025

Authors
James S. Fishkin
Alice Siu
Authors
Thomas Fingar
News Type
Commentary
Date
Paragraphs

The following commentary appeared in the journal Issues in Science and Technology (Vol. XLI, No. 3, Spring 2025). It is part of a multi-author discussion of the article Reconsidering Research Security, by John C. Gannon, Richard A. Meserve, and Maria T. Zuber, published by the journal in its Winter 2025 issue.



Congress mandated the creation of the National Academies Roundtable on Science, Technology, and Security to address concern that untoward and illicit actions by China and other countries posed serious risks to American security and economic preeminence. John C. Gannon, Richard A. Meserve, and Maria T. Zuber, who cochaired the roundtable, correctly conclude that zealous measures to defend against foreign exploitation of university-based research would be inadequate to preserve US preeminence in science and technology (S&T) without much greater effort to strengthen US capabilities.

I was privileged to serve as a member of the roundtable and am both heartened and deeply disturbed by what we learned. As their article’s summary of key observations makes clear, the magnitude and efficacy of untoward foreign government actions to exploit American university-based research are less than feared, awareness and understanding of the problem in academic institutions and federal funding agencies have improved greatly, and steps underway to ameliorate the problem without seriously damaging the efficacy of open research appear promising. But as the authors also make clear, illicit foreign actions to exploit American S&T are neither the only nor most serious threats to sustained US preeminence and the security and competitive advantages it provides. We have a “Pogo problem.”

Some of the adopted and proposed measures to prevent foreign exploitation will make the internal weaknesses greater and accelerate the relative and absolute decline of US capacity.

The comic strip character Pogo once famously said, “We have met the enemy and he is us.” The roundtable pulled together findings from multiple studies that revealed serious and worsening internal threats to US S&T capacity and preeminence. They also demonstrated that some of the adopted and proposed measures to prevent foreign exploitation will make the internal weaknesses greater and accelerate the relative and absolute decline of US capacity.

For example, as the authors correctly emphasized, the nation needs to give immediate and serious attention to factors that limit the ability of secondary schools to interest and educate young people in science, technology, engineering, and mathematics. We do not now graduate enough seniors interested in STEM fields to fill university classes or existing corporate demand for scientists and technicians. Our broken immigration system compounds the problem in ways that reduce domestic capacity and shift commercial application of discoveries to other countries with better-prepared workforces. The already serious problems are further compounded by research security demands that effectively drive smaller research universities out of the game by making it too costly to utilize available talent or compete for federal grants.

As a member of the roundtable, I fully endorse the conclusions of our cochairs and their call for approaches that emphasize maintaining and improving US STEM capacity more than limited utility efforts to build perfect defenses against exaggerated foreign threats. We must revitalize and adapt the policies that made the United States preeminent if we are to regain and retain that status.

Read More

Oriana Skylar Mastro Testifying
News

Shared Threats: Indo-Pacific Alliances and Burden Sharing in Today's Geopolitical Environment

WATCH | Center Fellow Oriana Skylar Mastro testifies before the Senate Foreign Relations Committee
Oksenberg Symposium panelists (L to R) Jean C Oi, Alex Gabuev, Sumit Ganguly, Da Wei, Michael McFaul
News

Oksenberg Symposium Panelists Analyze Evolving Strategic Dynamics Between China, Russia, India, and the United States

APARC's 2025 Oksenberg Symposium explored how shifting political, economic, and social conditions in China, Russia, India, and the United States are reshaping their strategies and relationships. The discussion highlighted key issues such as military and economic disparities, the shifting balance of power, and the implications of these changes for global stability, especially in the Indo-Pacific region.
APARC Senior Fellow Michel Oksenberg meets with Chinese paramount leader Deng Xiaoping
Q&As

Honoring Jimmy Carter: When Chinese Students Arrived in the US After the Cultural Revolution — with Thomas Fingar

It became clear, certainly by 1978, that educational exchanges, access to training, and export controls — these were going to be litmus tests of U.S.-China relations.
Hero Image
American flag and network imagery
Subtitle

Zealous measures to defend against foreign exploitation of university-based research would be inadequate to preserve US preeminence in science and technology without much greater effort to strengthen US capabilities.

Date Label