Technology

Recent reporting on Meta’s internal AI guidelines serves as a stark reminder that the rules governing AI behavior are frequently decided by the same small group of people, behind closed doors. The work every AI company grapples with, from determining ethics and mapping acceptable behaviors to enforcing content policies, affects millions of people through processes the public has no visibility into.

The truth is that this kind of siloed decision-making happens constantly across the industry.

Tech policy, particularly AI policy, is often so complex and evolves so rapidly that everyday perspectives are not easily captured. As consumers, we’ve grown accustomed to a system where the most important decisions about technology governance happen in exclusive settings.

But what if we flipped the script? What if users helped create the rules?

Publication Type
Journal Articles
Journal Publisher
Tech Policy Press
Authors
Alice Siu
CDDRL Honors Student, 2025-26
Emma Wang

Major: Political Science
Hometown: Naperville, Illinois
Thesis Advisor: Jonathan Rodden

Tentative Thesis Title: Broadband for All: Historical Lessons and International Models for U.S. Internet Policy

Future aspirations post-Stanford: After completing my master's in computer science, I hope to go to law school and work in technology law.

A fun fact about yourself: I started lion dancing when I came to college!

News Type
News

Introduction


Generative AI has become an incredibly attractive and widespread tool for people across the world. Alongside its rapid growth, AI tools present a host of ethical challenges relating to consent, security, and privacy, among others. As Generative AI has been spearheaded primarily by large technology companies, these ethical challenges — especially as viewed from the vantage point of ordinary people — risk being overlooked for the sake of market competition and profit. What is needed, therefore, is a deeper understanding of and attention to how ordinary people perceive AI, including its costs and benefits.

The Meta Community Forum Results Analysis, authored by Samuel Chang, James S. Fishkin, Ricky Hernandez Marquez, Ayushi Kadakia, Alice Siu, and Robert Taylor, aims to address some of these challenges. A partnership between CDDRL’s Deliberative Democracy Lab and Meta, the forum enables participants to learn about and collectively reflect on AI. The impulse behind deliberative democracy is straightforward: people affected by some policy or program should have the right to communicate about its contents and to understand the reasons for its adoption. As Generative AI and the companies that produce it become increasingly powerful, democratic input becomes even more essential to ensure their accountability. 

Motivation & Takeaways


In October 2024, the third Meta Community Forum took place. Its importance derives from the advancements in Generative AI since October 2023, when the last round of deliberations was held. One such advancement is the move beyond AI chatbots to AI agents, which can solve more complex tasks and adapt in real-time to improve responses. A second advancement is that AI has become multimodal, moving beyond the generation of text and into images, video, and audio. These advancements raise new questions and challenges. As such, the third forum provided participants with the opportunity to deliberate on a range of policy proposals, organized around two key themes: how AI agents should interact with users and how they should provide proactive and personalized experiences for them.

To summarize some of the forum’s core findings: the majority of participants value transparency and consent in their interactions with AI agents, as well as the security and privacy of their data. In turn, they are less comfortable with agents autonomously completing tasks when this is not transparent to them. Participants have a positive outlook on AI agents but want control over their interactions. Regarding the deliberations themselves, participants rated the forum highly and felt that it exposed them to alternative perspectives. The deliberators also wanted to learn more about AI for themselves, as evidenced by their increased use of these tools after the deliberations. Future reports will explore the reasoning and arguments they used while deliberating.
 


 

Image
Map of where participants hailed from.


The participants of this Community Forum were representative samples of the general population from five countries: Turkey, Saudi Arabia, India, Nigeria, and South Africa. Participants from each country deliberated separately in English, Hindi, Turkish, or Arabic.



Methodology & Data


The deliberations involved around 900 participants from five countries: India, Nigeria, Saudi Arabia, South Africa, and Turkey. Participants varied in age, gender, education, and urbanicity. Because the deliberative groups were recruited independently, the forum can be seen as five independent deliberations. Deliberations alternated between small group discussions and ‘plenary sessions,’ where experts answered questions drawn from the small groups. A control group of around 1,000 participants completed the same pre- and post-surveys but did not deliberate. The participant sample was representative with respect to gender, and the treatment and control groups were balanced on demographics as well as on their attitudes toward AI. Before deliberating on the proposals, participants were presented with background materials as well as a list of costs and benefits to consider.

In terms of the survey data, large majorities of participants had previously used AI, and these proportions increased by statistically significant margins after the forum. In Turkey, for example, usage rates rose from nearly 70% to 84%. In several countries, there were large increases in participants’ sense of AI’s positive benefits after deliberating, as well as a statistically significant increase in their interest in AI. The deliberations also changed participants’ opinions about a host of claims: for example, “people will feel less lonely with AI” and “more proactive [agents] are intrusive” lost approval, whereas “AI agents’ capability to increase efficiency…is saving many companies a lot of time and resources” and “AI agents are helping people become more creative” gained approval. After deliberating, participants demonstrated an improved understanding of some factual aspects of AI, although the more technical aspects remained challenging. One example is AI hallucinations, that is, the generation of false or nonsensical outputs, often traceable to flawed training data.
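For readers curious how a pre/post difference in proportions like the Turkish usage figures can be tested for significance, here is a minimal sketch of a two-proportion z-test in Python. The counts below are illustrative placeholders, not figures from the report, and the report’s own analysis may use a different (for example, paired) test, since the same participants answered the pre- and post-surveys.

# Minimal sketch: two-proportion z-test for a pre/post change in usage rates.
# All numbers below are illustrative placeholders, not the report's data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical sample: 126 of 180 users pre-forum (70%) vs. 151 of 180 post-forum (~84%).
z, p = two_proportion_z_test(126, 180, 151, 180)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 would indicate a significant increase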
 


 

Image
Chart: How should AI agents remember users' past behaviors or preferences? Percentage in favor


Proposals


Participants deliberated on nineteen policy proposals. To summarize these briefly: in terms of whether and how AI remembers users’ past behaviors and preferences, participants preferred proposals that let users make active choices, rather than memory being a default setting or something users are asked about only once. They also preferred being reminded of AI agents’ ability to personalize their experience, and preferred that agents be transparent with users about the tasks they complete. Participants favored educating users about AI before they use it, and informing users when AI is picking up on certain emotional cues and responding in “human-like” ways. They also preferred proposals whereby AI would ask clarifying questions before generating output. Finally, agents helping users with real-life relationships was seen as more permissible when the other person was informed. Across the proposals, gender was neither a significant nor a consistent determinant of how they were rated. Ultimately, the Meta Community Forum offers a model for how informed public deliberation can shape AI and the ethical challenges it raises.

*Research-in-Brief prepared by Adam Fefer.

 
Hero Image
Agentic AI Workflow Automation, Artificial intelligence AI driven decision-making concept illustration blue background iStock / Getty Images
Subtitle

CDDRL Research-in-Brief [4-minute read]

In recent years, the previous bipolar nuclear order led by the United States and Russia has given way to a more volatile tripolar one, as China has quantitatively and qualitatively built up its nuclear arsenal. At the same time, there have been significant breakthroughs in the field of artificial intelligence (AI) technologies, including for military applications. As a result of these two trends, understanding the AI-nuclear nexus in the context of U.S.-China-Russia geopolitical competition is increasingly urgent.

There are various military use cases for AI, including classification models, analytic and predictive models, generative AI, and autonomy. Given that variety, it is necessary to examine the AI-nuclear nexus across three broad categories: nuclear command, control, and communications; structural elements of the nuclear balance; and entanglement of AI-enabled conventional systems with nuclear risks. While each of these categories has the potential to generate risk, this report argues that the degree of risk posed by a particular case depends on three major factors: the role of humans, the degree to which AI systems become a single point of failure, and the AI offense-defense balance.

Continue reading at cnas.org 

Publication Type
Reports
Subtitle

U.S.-China-Russia Rivalry at the Nexus of Nuclear Weapons and Artificial Intelligence


Since the release of ChatGPT in November 2022, the breakneck pace of progress in artificial intelligence has made it nearly impossible for policymakers to keep up. But the AI revolution has only just begun. Today’s most powerful AI models, often referred to as “frontier AI,” can handle and generate images, audio, video, and computer code, in addition to natural language. Their remarkable performance has prompted ambitions among leading AI labs to achieve what is called “artificial general intelligence.” According to a growing number of experts, AGI systems equaling or surpassing humans across a wide range of cognitive tasks—the equivalent of millions of brilliant minds working tirelessly at the top of their fields at machine speed—may soon be capable of unlocking scientific discoveries, enhancing economic productivity, and tackling tough national security challenges. With advances once in the realm of science fiction now in the realm of possibility, the United States has no time to spare in crafting a coherent and truly global strategy.

Continue reading at foreignaffairs.com

Publication Type
Commentary
Subtitle

To Stay Ahead of China, Trump Must Build on Biden’s Work


Discussions in Washington about artificial intelligence increasingly turn to how the United States can win the AI race with China. One of President Donald Trump’s first acts on returning to office was to sign an executive order declaring the need to “sustain and enhance America’s global AI dominance.” At the Paris AI Action Summit in February, Vice President JD Vance emphasized the administration’s commitment to ensuring that “American AI technology continues to be the gold standard worldwide.” And in May, David Sacks, Trump’s AI and crypto czar, cited the need “to win the AI race” to justify exporting advanced AI chips to the United Arab Emirates and Saudi Arabia.

Continue reading at foreignaffairs.com

Publication Type
Commentary
Subtitle

America Needs More Than Innovation to Compete With China

News Type
News

There is a significant gap between the technologies being developed, especially AI technologies, and the public’s understanding of them. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?

A group of technology companies convened by Stanford University’s Deliberative Democracy Lab will gather public feedback about complex questions the AI industry is considering while developing AI agents. This convening includes Cohere, Meta, Oracle, and PayPal, advised by the Collective Intelligence Project.

This Industry-Wide Forum brings together everyday people to weigh in on tech policy and product development decisions that involve difficult tradeoffs with no simple answers. With technology development moving so quickly, there is no better time than now to engage the public and understand what an informed public would like AI technologies to do for them. The Forum is designed around Stanford’s method of Deliberative Polling, a governance innovation that gives the public a greater say in decision-making. The Forum will take place in Fall 2025. Findings will be made public, and Stanford’s Deliberative Democracy Lab will hold webinars for the public to learn and inquire about them.

"We're proud to be a founding participant in this initiative alongside Stanford and other AI leaders," said Saurabh Baji, CTO of Cohere. "This collaborative approach is central to enhancing trust in agentic AI and paving the way for strengthened cross-industry standards for this technology. We're looking forward to working together to shape the future of how agents serve enterprises and people."

In the near term, AI agents will be expected to conduct a myriad of transactions on behalf of users, opening up considerable opportunities to deliver value alongside significant risks. This Forum will improve product-market fit by giving companies foresight into what users want from AI agents; it will help build trust and legitimacy with users; and it will strengthen cross-industry relations in support of industry standards development over time.

"We support The Forum for its deliberative and collaborative approach to shaping public discourse around AI agents," said Prakhar Mehrotra, SVP of AI at PayPal. "Responsibility and trust are core business principles for PayPal, and through collaborative efforts like these, we seek to encourage valuable perspectives that can help shape the future of agentic commerce."

The Forum will be conducted on the AI-assisted Stanford Online Deliberation Platform, a collaboration between Stanford’s Deliberative Democracy Lab and Crowdsourced Democracy Team, where a cross-section of the public will deliberate in small groups and share their perspectives, their lived experiences, and their expectations for AI products. This deliberation platform has hosted Meta’s Community Forums over the past few years. The Forum will also incorporate insights from CIP's Global Dialogues, conducted on the Remesh platform.

“Community Forums provide us with people’s considered feedback, which helps inform how we innovate,” said Rob Sherman, Meta’s Vice President, AI Policy & Deputy Chief Privacy Officer. “We look forward to the insights from this cross-industry partnership, which will provide a deeper understanding of people’s views on cutting-edge technology.”

This methodology is rooted in deliberation, which provides representative samples of the public with baseline education on a topic, including options with associated tradeoffs, and asks them to reflect on that education as well as their lived experience. Deliberative methods have been found to offer more considered feedback to decision-makers because people have to weigh the complexity of an issue rather than offering a knee-jerk reaction.

"This industry-wide deliberative forum represents a crucial step in democratizing the discourse around AI agents, ensuring that the public's voice is heard in a representative and thoughtful way as we collectively shape the future of this transformative technology," said James Fishkin, Director of Stanford's Deliberative Democracy Lab.

This Industry-Wide Forum represents a pivotal step in responsible AI development, bringing together technology companies and the public to address complex challenges in AI agent creation. By leveraging Stanford's Deliberative Polling methodology and making findings publicly available, the initiative promises to shape the future of AI with enhanced transparency, trust, and user-centric focus. Find out more about Stanford’s Deliberative Democracy Lab at deliberation.stanford.edu.

Media Contact: Alice Siu, Stanford Deliberative Democracy Lab

Read More

Back view of an anonymous woman talking to a chatbot on a computer while sitting at home
News

Meta and Stanford’s Deliberative Democracy Lab Release Results from Second Community Forum on Generative AI

Participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’
Chatbot powered by AI. Transforming Industries and customer service. Yellow chatbot icon over smart phone in action. Modern 3D render
News

Navigating the Future of AI: Insights from the Second Meta Community Forum

A multinational Deliberative Poll unveils the global public's nuanced views on AI chatbots and their integration into society.
Collage of modern adults using smart phones in city with wifi signals
News

Results of First Global Deliberative Poll® Announced by Stanford’s Deliberative Democracy Lab

More than 6,300 deliberators from 32 countries and nine regions around the world participated in the Metaverse Community Forum on Bullying and Harassment.
Hero Image
Futuristic 3D Render Steve Johnson via Unsplash

In September 2022, National Security Adviser Jake Sullivan identified next-generation computing (including quantum and semiconductors) as one of three technology families, alongside biotech and clean energy (including batteries), that are critical to the economic and national security of the United States. By allowing for new methods of computation, sensing, and communications, quantum technologies have the potential to revolutionize not only commercial industries, such as financial services, chemical engineering, and energy, but also national security capabilities, such as code breaking and remote sensing.

Publication Type
White Papers

News Type
News

Ever since the public release of ChatGPT in the fall of 2022, classrooms everywhere from grade schools to universities have started to adapt to a new reality of AI-augmented education. 

As with any new technology, the integration of AI into teaching practices has come with plenty of questions: Will this help or hurt learning outcomes? Are we grading students or an algorithm? And, perhaps most fundamentally: To allow, or not to allow, AI in the classroom? That is the question keeping many teachers up at night.

For the instructors of “Technology, Innovation, and Great Power Competition,” a class created and taught by Stanford faculty and staff at the Gordian Knot Center for National Security Innovation (GKC), the answer to that question was obvious. Not only did they allow students to use AI in their coursework, they required it.
 

Leveraging AI for Policy Analysis


Taught by Steve Blank, Joe Felter, and Eric Volmar of the Gordian Knot Center, the class was a natural forum to discuss how emerging technologies will affect relations between the world’s most powerful countries.

Volmar, who returned to Stanford after serving in the U.S. Department of Defense, explains the logic behind requiring the use of AI:

“As we were designing this curriculum, we started from an acknowledgement that the world has changed. The AI models we see now are the worst they’re ever going to be. Everything is going to get better and become more and more integrated into our lives. So why not use every tool at our disposal to prepare students for that?”

For students used to restrictions or outright bans on using AI to complete coursework, being graded on using AI took some getting used to.

“This was the first class that I’ve had where using AI was mandatory,” said Jackson Painter, an MA student in Management Science and Engineering. “I've had classes where AI was allowed, but you had to cite or explain exactly how you used it. But being expected to use AI every week as part of the assignments was something new and pretty surprising.” 

Dr. Eric Volmar teaching the new Stanford Gordian Knot Center course Entrepreneurship Inside Government.

Assigned to teams of three or four, students were given an area of strategic competition to focus on for the duration of the class, such as computing power, semiconductors, AI/machine learning, autonomy, space, and cybersecurity. In addition to readings, each group was required to conduct interviews with key stakeholders, with the end goal of producing a memo outlining specific policy-relevant insights about their area of focus.

But the final project was only part of the grade. The instructors also evaluated each group based on how they had used AI to form their analysis, organize information, and generate insights.

“This is not about replacing true expertise in policymaking, but it’s changing the nature of how you do it,” Volmar emphasized.
 

Expanding Students’ Capabilities


For the students, finding a balance between familiar habits and using a novel technology took some practice. 

“Before being in this class, I barely used ChatGPT. I was definitely someone who preferred writing in my own style,” said Helen Philips, an MA student in International Policy and course assistant for the class.

“This completely expanded my understanding of what is possible with AI,” Philips continued. “It really opened up my mind to how beneficial AI can be for a broad spectrum of work products.”

After some initial coaching on how to develop effective prompts for the AI tools, students started iterating on their own. Using the models to summarize and synthesize large volumes of content was a first step. Then groups started getting creative. Some used AI to create maps of the many stakeholders involved in their project, then identified areas of overlap and connection between key players. Others used the tools to create simulated interviews with experts, then used the results to better prepare for actual interviews.
 


This is a new type of policy work. It's not replacing expertise, but it's changing the nature of how you access it. These tools increase the depth and breadth students can take in. It's an extraordinary thing.
Eric Volmar
GKC Associate Director


For Jackson Painter, the class provided valuable practice combining more traditional techniques for developing policy with new technology.

“I really came to see how irreplaceable the interviewing process is and the value of talking to actual people,” said Painter. “People know the little nuances that the AI misses. But when you can combine those nuances with all the information the AI can synthesize, that’s where it has its greatest value. It’s about augmenting, not replacing, your work.”

That kind of synthesis is what the course instructors hope students take away from the class. The aim, explained Volmar, is that they will put it into practice as future leaders facing complex challenges that touch multiple sectors of government, security, and society.

“This is a new type of policy work,” he said. “It's accelerated, and it increases the depth and breadth students can take in. They can move across many different areas and combine technical research with Senate and House Floor hearings. They can take something from Silicon Valley and combine it with something from Washington. It's an extraordinary thing.”

Real-time Innovation


For instructors Blank, Felter, and Volmar, classes like “Technology, Innovation, and Great Power Competition” — or sister classes like the highly popular “Hacking for Defense,” and the recently launched “Entrepreneurship Inside Government” — are an integral part of preparing students to navigate ever more complex technological and policy landscapes.

“We want America to continue to be a force for good in the world. And we're going to need to be competitive across all these domains to do that. And to be competitive, we have to bring our A-game and empower creative thinking as much as possible. If we don't take advantage of these technologies, we’re going to lose that advantage,” Felter stressed.

Applying real-time innovation to the challenges of national security and defense is the driving force behind the Gordian Knot Center. Founded in fall of 2021 by Joe Felter and Steve Blank with support from principal investigators Michael McFaul and Riitta Katila, the center brings together Stanford's cutting-edge resources, Silicon Valley's dynamic innovation ecosystem, and a network of national security experts to prepare the next generation of leaders.

To achieve that, Blank leveraged his background as a successful entrepreneur and creator of the lean startup movement, a methodology for launching companies that emphasizes experimentation, customer feedback, and iterative design over more traditional methods based on complex planning, intuition, and “big design up front” development.

“When I first taught at Stanford in 2011, I observed that the teaching being done about how to write a business plan in capstone entrepreneurship classes didn’t match the hands-on chaos of an actual startup. There were no entrepreneurship classes that combined experiential learning with methodology. But the goal was to teach both theory and practice.”
 


What we’re seeing in these classes are students who may not have otherwise thought they have a place at the table of national security. That's what we want, because the best future policymakers will understand how to leverage diverse skills and tools to meet challenges.
Joe Felter
GKC Center Director


That goal of combining theory and practice is a throughline that continues in today’s Gordian Knot Center. After the success of Blank’s entrepreneurship classes, he — alongside Pete Newell of BMNT and Joe Felter, a veteran, former senior Department of Defense official, and the current center director of the GKC — turned the principles of entrepreneurship and iteration toward government.

“We realized that university students had little connection or exposure to the problems that government was trying to solve, or the larger issues civil society was grappling with,” says Blank. “But with the right framework, students could learn directly about the nation's threats and security challenges, while innovators inside the government could see how students can rapidly iterate and deliver timely solutions to defense challenges.”

That thought led directly to the development of the “Hacking for Defense” class, now in its tenth year, and eventually to the organization of the Gordian Knot Center and its affiliate programs like the Stanford DEFCON Student Network. Based at the Freeman Spogli Institute for International Studies, the center today is a growing hub of students, veterans, alumni, industry experts, and government officials from a multiplicity of backgrounds and areas of expertise working across campus and across government to solve real problems and enact change.

Condoleezza Rice, Director of the Hoover Institution, speaking in Hacking for Defense.

Prepared for Diverse Challenges


In the classroom, the feedback cycle between real policy issues and iterative entrepreneurship remains central to the student experience. And it’s an approach that resonates with students.  

“I love the fact that we’re addressing real issues in real time,” says Nuri Capanoglu, a master’s student in Management Science and Engineering who took “Technology, Innovation, and Great Power Competition” in fall 2024.

He continues, “Being able to use ChatGPT in a class like this was like having a fifth teammate we could bounce ideas off, double-check things with, and assign complex literature reviews that wouldn't have been possible on our own. It's like we went from being a team of four to a team of fifty.”

Other students agree. Feedback on the class has praised the “fusion of practical hands-on learning and AI-enabled research” and deemed it a “must-take for anyone, regardless of background.”

Like many of his peers, Capanoglu is eager for more. “As I’ve been planning my future schedule, I’ve tried to find more classes like this,” he says.

Instructors like Felter and Volmar are equally ready to welcome more students into their courses.

“Policy is so complex now, and the stakes are so high,” acknowledged Felter. “But what we’re seeing in these classes is a passion for addressing real challenges from students who may not have otherwise thought they have a place at the table of national security or policy. That’s what we want. The best and brightest future policymakers are going to have diverse skill sets and understand how to leverage every possible tool and capability available to meet those challenges. So if you want to get involved and make a difference, come take a policy class.”

Read More

A collage of group photo from the capstone internship projects from the Ford Dorsey Master's in International Policy Class of 2025.
Blogs

Globe Trotting MIP Students Aim for Policy Impact

Students from the Ford Dorsey Master's in International Policy Class of 2025 visited organizations around the world to tackle pressing policy challenges such as human trafficking, cyber threats, disinformation, and more.
Students on team one present their project to the class
News

Stanford Students Pitch Solutions to U.S. National Security Challenges to Government Officials and Technology Experts

In the class “Technology, Innovation, and Great Power Competition,” students across disciplines work in teams and propose their detailed solutions to active stakeholders in the technology and national security sectors.
Deputy Secretary of Defense Kathleen Hicks and her team meet at the Hoover Institution with students and faculty from the Gordian Knot Center.
News

Deputy Secretary of Defense Kathleen Hicks Discusses Importance of Strategic Partnerships with Stanford Faculty and Students

A visit from the Department of Defense’s deputy secretary gave the Gordian Knot Center a prime opportunity to showcase how its faculty and students are working to build an innovative workforce that can help solve the nation’s most pressing national security challenges.
Hero Image
Technology, Innovation and Great Power Competition course teammates Nuri Capanoglu, Elena Kopstein, Mandy Alevra, and Jackson Painter with National Security Council Senior Director for Technology and National Security Tarun Chhabra in Washington, DC.
Subtitle

In classes taught through the Freeman Spogli Institute’s Gordian Knot Center, artificial intelligence is taking a front and center role in helping students find innovative solutions to global policy issues.
