Dawn Song: A Sociotechnical Approach to a Safe, Responsible AI Future
Transcript
I'm Dawn Song, professor at UC Berkeley. Today I'll talk about “A Sociotechnical Approach to a Safe and Responsible AI Future,” a path for science- and evidence-based AI policy.
As we all know, there is a broad spectrum of AI risks, and when we deploy AI it is especially important to consider the presence of attackers, for many reasons that I won't go through here. Many of us have also signed the statement on the importance of mitigating the risk of extinction from AI.
All of this is to say that, yes, we need to mitigate risks. However, it's important to mitigate risks while fostering innovation. Because of the importance of mitigating risks, governments have all been making efforts on AI policy, and one thing I want to highlight is that this year we are seeing a sudden proliferation of AI bills.
At the federal level, there are currently about 120 AI bills; two years ago, there were almost none. At the state level, there are now about 600. Unfortunately, there is huge fragmentation in the AI community in our approach to AI policy. The community lacks consensus on what evidence is relevant for effective policy-making, including which risks should be prioritized, if or when they will materialize, and who should be responsible for addressing them.
Unfortunately, we have seen this fragmentation play out, for example in the heated debates over SB 1047. Building a safe AI future needs a sustained sociotechnical approach. Technical solutions alone are great, but they are insufficient, and ad-hoc regulation leads to many problems: suboptimal solutions, potential negative consequences, and even worse, lost opportunities to avert disastrous outcomes and a fragmented community.
So the question is: what is a better path toward a safer AI future? That is the proposal we made recently with a number of other leading AI researchers, including Yejin and Rishi, who are both here. It is a path for science- and evidence-based AI policy. If you want to see the details, you can go to understanding-ai-safety.org.
The key point is that we believe AI policy should be informed by scientific understanding of AI risks and of how to successfully mitigate them. As we all agree, the current scientific understanding is very limited. Hence, in this proposal, we lay out five priorities to advance the scientific understanding of AI risks and their mitigation, and to advance science- and evidence-based AI policy.
I have very little time left, so I won't be able to go through all of these priorities in detail. The first is to better understand AI risks. This has two dimensions: first, we want a comprehensive understanding of AI risks; second, we recommend the marginal risk framework for understanding them. I won't go through these in depth.
We have some work applying this, trying to better understand the impact of frontier AI on the cybersecurity landscape. These marginal risk analysis results change depending on many factors, such as model capabilities, so the analysis needs to be repeated continuously.
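To make the marginal risk framework concrete, here is a minimal illustrative sketch in Python. It treats marginal risk as the difference in expected harm between a world where a frontier model is available and a baseline world with only pre-existing tools. The names, fields, and numbers are hypothetical assumptions for illustration, not part of the proposal.

```python
# Illustrative sketch of a marginal risk comparison (hypothetical names and numbers).
# Marginal risk here = expected harm with the new model available
# minus the baseline expected harm with only pre-existing tools.

from dataclasses import dataclass


@dataclass
class RiskEstimate:
    probability: float  # estimated probability the harmful outcome occurs
    severity: float     # estimated severity if it does occur (arbitrary units)

    @property
    def expected_harm(self) -> float:
        return self.probability * self.severity


def marginal_risk(with_model: RiskEstimate, baseline: RiskEstimate) -> float:
    """Additional expected harm attributable to the new model."""
    return with_model.expected_harm - baseline.expected_harm


# Toy numbers: real estimates would come from evaluations and red-teaming,
# and must be re-run as model capabilities and defenses change.
baseline = RiskEstimate(probability=0.10, severity=100.0)
with_model = RiskEstimate(probability=0.15, severity=100.0)
print(marginal_risk(with_model, baseline))  # 5.0 units of additional expected harm
```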
The second priority is transparency. We need to increase transparency in AI design and development. Currently, many developers do this voluntarily, but that is not sufficient; we want regulation that standardizes transparency reports and related disclosures. There are many open questions about how best to do this.
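As one hypothetical illustration of what a standardized, machine-readable transparency report might capture, here is a small sketch; the fields are my own assumptions, not a standard from the talk.

```python
# Hypothetical sketch of a machine-readable transparency report for a model release.
# The fields below are illustrative assumptions, not an agreed standard.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TransparencyReport:
    model_name: str
    developer: str
    release_date: str                    # ISO 8601 date
    training_compute_flops: float        # approximate training compute
    data_sources: List[str] = field(default_factory=list)
    safety_evaluations: List[str] = field(default_factory=list)  # evals run pre-release
    known_limitations: List[str] = field(default_factory=list)


report = TransparencyReport(
    model_name="example-model-v1",
    developer="Example Lab",
    release_date="2025-01-01",
    training_compute_flops=1e25,
    data_sources=["public web crawl", "licensed corpora"],
    safety_evaluations=["cyber-capability eval", "bio-risk eval"],
    known_limitations=["hallucinations", "jailbreak susceptibility"],
)
```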
The third priority is to develop early-warning detection mechanisms. This has two parts. The first is in-lab testing: how we can build better evaluations for dangerous capabilities and related properties. The second is post-deployment monitoring: how we can develop adverse event reporting systems. There are many open questions there as well.
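To illustrate the post-deployment monitoring part, here is a minimal sketch of what a single adverse event report record might look like. The field names and categories are hypothetical; a real reporting system would need to standardize these across developers and deployers.

```python
# Minimal sketch of an adverse event report for post-deployment monitoring.
# Field names and categories are hypothetical placeholders.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AdverseEventReport:
    model_id: str
    reported_at: str      # ISO 8601 timestamp
    category: str         # e.g. "cyber misuse", "bio misuse", "critical failure"
    description: str
    severity: int         # 1 (minor) to 5 (critical)
    reporter: str         # deployer, user, auditor, ...


def new_report(model_id: str, category: str, description: str,
               severity: int, reporter: str) -> AdverseEventReport:
    """Create a timestamped adverse event report."""
    return AdverseEventReport(
        model_id=model_id,
        reported_at=datetime.now(timezone.utc).isoformat(),
        category=category,
        description=description,
        severity=severity,
        reporter=reporter,
    )


incident = new_report("example-model-v1", "cyber misuse",
                      "Model generated a working exploit for a known CVE",
                      severity=4, reporter="deployer")
```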
The fourth priority is to develop mitigation and defense mechanisms. This also has two parts. The first is new and better approaches for building safe AI, for example with stronger guarantees. The second is that we as a society need to increase our security capabilities and our overall resilience against attacks.
The fifth priority is to build trust and reduce fragmentation.
Then we have a call to action: building a blueprint for future AI policy around what we call conditional response, where we bring the community together to discuss, under what conditions, what responses should be taken.
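One way to picture conditional response is as a pre-agreed mapping from trigger conditions, such as evaluation results crossing thresholds, to policy responses. The sketch below is purely illustrative; the conditions, thresholds, and responses are hypothetical assumptions, not the content of the blueprint itself.

```python
# Illustrative sketch of "conditional response": pre-agreed trigger conditions
# mapped to pre-agreed policy responses. Thresholds and responses are
# hypothetical placeholders.

CONDITIONAL_RESPONSES = [
    # (condition description, trigger check, agreed response)
    ("cyber eval score exceeds threshold",
     lambda results: results.get("cyber_eval", 0.0) > 0.8,
     "require third-party audit before wider deployment"),
    ("confirmed severe adverse events",
     lambda results: results.get("severe_incidents", 0) >= 1,
     "mandatory incident disclosure and mitigation plan"),
]


def triggered_responses(eval_results: dict) -> list:
    """Return the pre-agreed responses whose trigger conditions are met."""
    return [response for _, check, response in CONDITIONAL_RESPONSES
            if check(eval_results)]


print(triggered_responses({"cyber_eval": 0.9, "severe_incidents": 0}))
# ['require third-party audit before wider deployment']
```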
With that, please visit the website, and we would appreciate your help in spreading the word; you can also do so via my Twitter. This afternoon we'll have a discussion group from 3:45 to 4:30.