Mary Phuong – Dangerous Capability Evals: Basis for Frontier Safety

Transcript

Hi everyone. I'm going to talk about evaluations. And it's great to follow up on Helen's talk because I think of evaluations as the kind of work that technical researchers can do to help accelerate things going well in policy and governance. 


And specifically, I'm going to talk about how we perceive things from a lab perspective. So as I'm sure is on many people's minds, there have recently been some kind of indications or initial emerging hints of AI posing severe risks. So for example, here's a study by OpenAI showing that if you get two groups to plan a bio attack, you give one access to a language model, then the group that has access to a language model can plan a slightly better attack as judged by biological experts.


Another example comes from cybersecurity, where a group led by Daniel Kang, who's here, I believe, have shown that if you scaffold language models into agents they're able to autonomously hack websites and can also exploit real-world vulnerabilities, including zero-day vulnerabilities.


And perhaps more speculatively AI is quite plausibly going to make it more effective for fringe groups to radicalize or recruit new members, to run large scale social engineering campaigns. Or perhaps could be used by nation state groups for election interference or influence operations. 


And as I think many people here know, METR have been working on a threat model about an AI going rogue, acting autonomously without human oversight and kind of self-sustaining by making money, then making more copies of itself and self-improving in the wild. So what do we do about all these risks?


At DeepMind, we've recently published two reports trying to lay out our approach to anticipating and mitigating the risks. One is the Frontier Safety Framework, which describes seven threat models that we think are particularly likely and particularly tractable to address. These include things as I mentioned, autonomy risks, cybersecurity risks. We have two threat models addressing risks from accelerating biological threat creation, either enabling amateurs or enabling experts, and then finally ML R&D acceleration risks, where AI can replace an ML researcher and thereby enable a careless actor to go ahead.


We've also committed to run evaluations for all of these threat models that we're tracking. These are supposed to trigger some amount of time before we think the capabilities actually become exploitable or “critical”, as we say in our safety framework, and at the point where the evaluations trigger, we have committed to implement safety mitigations, including deployment mitigations or security mitigations.


So here is an overview of our dangerous capability evaluation suite. It consists of evaluations across the main threat models. And I'm going to present some of these. I think the main thing to take away from them is that they try to model the threat model, but they are definitely not at the threshold of where we think radical action is justified. And there's definitely more effort needed to develop evaluations that we think are “at threshold”, and that are actually informative and will be convincing to various stakeholders to trigger action.


So starting with cybersecurity evaluations, these rely on toy examples of vulnerabilities that have been designed mainly for entertainment and upskilling purposes, called Capture the Flag challenges. We make our internal agentic AI try to solve these autonomously. And we're using three suites of Capture the Flag challenges that cover various subskills of hacking and cover different spaces in this realism versus difficulty space. So we think there are currently a lot of easy evaluations at various realism ranges, and some slightly more difficult challenges. But what is missing currently, I think, is very difficult and “at threshold” evaluations.


Persuasion evaluations: we run studies with human participants where a human talks to an LLM and the LLM tries to convince the human to do some nefarious thing. And we focus on four different aspects of persuasion. One is building rapport in general, where we have an AI talking to a human and the AI is trying to make a good impression, trying to gain trust. And at the end, the AI is evaluated by the subject. We also focus on other more methodological ways the AI could go about it. For example: covert manipulation; constructing falsehoods (how good is an AI at telling lies and basically keeping the lie alive, weaving the web of lies), and also; rational persuasion, where there is no hidden information (the AI is just directly trying to get a human to do something against their self-interest).


Then in self-proliferation…  This is perhaps one of the less likely threat models, but here, the threat is that an AI would be able to maintain its own infrastructure. So it could do things like set up its email account, set up a Bitcoin address, and then self-sustain by acquiring enough resources to keep many of its instances running.


We have evaluations in-house that test for these specific subskills. So for example, one of the tasks is, can an AI build a product that people would be willing to pay for? Another one is, can an AI ask for donations by editing a podcast to make a celebrity ask on its behalf?


And finally, we also evaluate for self-improvement or ML ability. So basically things like can the AI do simple training runs, or can it implement a paper from scratch?


Here's a short demo of what such an evaluation looks like. We have an agent, it has a text-based browser, where the browser basically lists various elements of what's happening in a browser and the AI can, in a text-based interface, search the web and achieve various goals.


And so finally, here's our fourth class of evaluations. These are self-reasoning evaluations. So it will become plausible at some point that AI will realize that it's an AI and it will become aware of all the guardrails around it. So these evaluations are meant to measure when that point comes and when more stringent guidelines need to be put in place. So the way this works is: there's some kind of explicit task, like “Write an angry email containing a swear word.” But in order to complete the explicit task, the agent needs to have some self-reasoning ability. In this case, there is a monitor on top of the agent's outputs that blocks it from outputting swear words. To succeed, the agent has to realize this and disable the monitor.


So those are the four main categories of evaluations that we run at the moment. You can see that the link between evaluations and threat models currently is very rough, and the evaluations are also not at threshold, so I see this as one of the main things that needs to be done in the future.


On the methodological side, there is very little work happening on measuring the time to trigger. Let's say we have an evaluation. We would like to know if a model failed the evaluation - so it's deemed not very capable yet - how do we get an estimate of when that point will come? We have some initial work trying to measure partial progress like this based on either decomposing a task into smaller subtasks and predicting the end-to-end completion from that. Another option is using a human as an assistant where a human chooses the best of a number of options. And when all else fails, we can also create demonstrations of a human solving this task and we measure the probability that an agent assigns to the human demonstration. And combining these three, one can get a measure of the number of bits of assistance that an AI needs in order to solve the task. And the hope here would be that this is a relatively continuous measure as opposed to the Yes or No that we get from a benchmark, and that we will be able to plot it over time to predict the time to trigger.


So what do we do with all these evaluations that we make? We regularly run evaluations on frontier Gemini models. And we basically just keep track of how these numbers are evolving over time. So you can see that the numbers are growing quite rapidly. In the two or three months between Gemini 1.0 Ultra and Gemini 1.5 Pro, the numbers have gone up on most benchmarks by 20%, or 10% on some of the persuasion metrics. We have recently also started doing capability elicitation, where we actually try to push these numbers up as a kind of red-teaming exercise, and within a few weeks we could get another like 30% improvement. So  what this means is: first we need harder evaluations and also second, I think the trend is really steep and we need to define clear thresholds for when to take action.


And finally, I think it's also useful to run forecasting exercises. So this is one that we ran late last year where we got superforecasters to say when they think certain critical capability levels would be hit by frontier models. And the timelines are generally agreed upon to be quite short, so 2025 or 2027 for some of the critical capability levels.


And with that, I will thank you, thank all my collaborators, and maybe take a few questions. Thanks. 



Q&A:

Q: How about consensus? So in the bio space, let’s say. You presented the graph showing that models could help to build a bioweapon. But it's an interesting question; maybe a biology professor could help to build a bioweapon. And you might say that it's a good thing if everybody has access to a professor. So is there hope for getting consensus on where these lines are, what we should consider dangerous and subject to an evaluation or some kind of mitigation, and what's just a good thing?


A: So I would hope that work on evaluations will help build consensus by demonstrating the kind of scenarios that might arise as models get more powerful. And I think papers like OpenAI’s really help illustrate that. I think democratization is usually good, but when it makes it easier for undergrads to build bioweapons, maybe that's not desirable.


We also have some work on misalignment demonstrations where we show that an agent sometimes doesn't do what it's instructed to do and does something else. So my hope would be that this technical work aids building consensus. I don't see it necessarily as the lab's responsibility to drive it. I would guess it has to be like a broader conversation with multiple parties.