New Report Unpacks Governance Strategies and Risk Analysis for Generative AI

A recent Stanford Cyber Policy Center report analyzes the labyrinthine state of global AI regulation to draw out common themes and lessons for policymakers and industry experts on what can be done, and is being done, to manage AI’s potential and actual risks.

By examining policy options ‘on the table’ from across the world, the report is among the few studies to distill the worldwide AI policy conversation into a single comprehensive document.

Generative AI refers to algorithms that create realistic content such as images, text, music, and videos by learning from data patterns. The technology could lead to exponential increases in human productivity, discovery, and creativity.

The 460-page report, “Regulating Under Uncertainty: Governance Options for Generative AI,” underscores several critical challenges that policymakers face, including whether to wait for potential risks to manifest before regulating and how best to treat open-source models. Surveying the global regulatory landscape, the report emphasizes three major policy approaches – the U.S. model of “encouraged self-regulation,” China’s command-and-control model, and the European Union’s AI Act, which exemplifies an approach of government regulation that includes elements of “co-regulation.”

The report’s author, Florence G’sell, Director of the Stanford Cyber Policy Center’s Program on Governance of Emerging Technologies, published the year-long analysis to serve as a go-to resource for policymakers, industry experts, and students alike.

A central theme of the report is “uncertainty” and how challenging the task of regulating AI is when so much remains unknown. Although hypothetical or difficult to assess, AI’s potential risks are concerning. They include inaccurate content, cybercrime, cyberattacks, biosecurity threats, bias and discrimination, privacy concerns, job displacement, and environmental costs, among many others.

“The most alarming use cases include military applications and the potential for generative AI tools to be used in the creation of bioweapons. Finally, even in the absence of misuse, generative AI tools may exert excessive influence on the humans who interact with them and may lead to overreliance on these systems,” the report noted.

In this context, “Regulation is both urgently needed and unpredictable,” the report states. “It also may be counterproductive, if not done well … The risks and benefits of AI will be felt across the entire world.”

Governance challenges, perspectives

Regulating AI effectively is necessary, but it is difficult for many reasons, as the report describes. While OpenAI’s release of ChatGPT in 2022 sparked policy conversations globally about how to find a balanced AI governance approach that avoids the extremes of pure self-regulation on one side and command-and-control on the other, the concern is that the technology is evolving rapidly and outpacing policy responses.

“Governmental policy and regulation have lagged behind the fast pace of technological development,” the study stated. “Nevertheless, a wealth of laws, both proposed and enacted, have emerged around the world. The purpose of this report is to canvass and analyze the existing array of proposals for governance of generative AI.”

Toward this end, the report identifies several high-level principles and observations for policymakers to consider on AI governance:

  • Regulating technology versus applications: Sector-specific laws enable incremental AI regulation, but the rise of general-purpose AI models makes future applications hard to predict, suggesting a need to regulate the technology itself.
  • Transparency and auditing: Because the impacts of generative AI are still largely unknown, transparency in AI development is essential. More openness about how AI models are built and what data they use can help ensure accountability and facilitate effective regulation.
  • The significance of enforcement: Given the rapid evolution and complexity of AI technology, enforcement may be as crucial as legislation, if not more so. However, this requires governments to hire AI experts, who are in short supply and can be costly.
  • Balancing public and private sectors: Private companies are developing most generative AI models. Yet to protect the public interest, significant public investment may be required to support AI models beyond the influence of profit-making firms.
  • Benefits and risks of open models: Open-source models promise to make AI’s benefits available to a wider audience. However, once these models are released, they can be used and modified by bad actors for malicious purposes. Governments need to address both the risks and the benefits.

While the EU has decided to pass a comprehensive regulation without delay, the U.S. has generally favored a self-regulation approach to AI. However, after the release of ChatGPT in 2022, the Biden administration secured voluntary commitments from AI companies – an “encouraged self-regulation” approach – and issued Executive Order 14110 in 2023 to establish policy priorities for federal agencies on AI governance.

‘Proactive guardrails’

Stanford and industry experts discussed the report and AI governance in general at the Oct. 28 event, “A Conversation: Governance Options for Generative AI.” Panelists included G’sell; California Senator Scott Wiener; Gerard de Graaf, senior digital envoy at the European Union Office in San Francisco; Nathaniel Persily, Stanford law professor and co-director of the Stanford Cyber Policy Center; and Janel Thamkul, deputy general counsel at Anthropic. Technology reporter and former Stanford d.school lecturer Jacob Ward moderated the discussion.

G’sell, who delivered opening remarks, highlighted that policymakers must consider whether to await a scientific consensus on potential risks or a sufficiently detailed risk assessment before taking action.

“The question is what to do when there is some uncertainty and is there something different with AI than with previous technological evolution ... It’s a very difficult question, and so for regulators and policymakers, it’s a complicated dilemma where one should have various options,” she said.

While open-source AI models offer transparency and may democratize AI access globally, they also come with risks, as malicious actors can readily exploit and modify them for harmful purposes.

G’sell recently published a blog post examining California’s attempt earlier this year to establish AI governance through Senate Bill 1047 (SB 1047). While Gov. Gavin Newsom vetoed the legislation, he pointed to the need to find a balanced solution.

“The governor committed to working with legislators, academics, and other partners to ‘find the appropriate path forward’ … proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable,” G’sell wrote.

Florence G'sell speaks on a panel with Janel Thamkul, Gerard de Graaf, Nate Persily and Scott Wiener.

At the panel discussion, de Graaf noted that the U.S. faces a different type of challenge due to the inherent nature of U.S. federalism and state lawmaking. The EU, on the other hand, follows a more centralized approach that flows through its 27 member states.

Wiener, who introduced the California legislation, said it was designed to “reduce risks without hindering innovation.” He vowed to continue efforts on AI governance policy.

The Stanford Cyber Policy Center explores the interdisciplinary study of issues at the nexus of technology, governance and public policy.
