Owain Evans - Out of Context Reasoning in LLMs

Transcript

I'm going to talk about, what I'm calling out-of-context reasoning.

And this is the ability to produce conclusions that have a 

logical or linguistic form without chain-of-thought reasoning.

And  I'm going to motivate why this matters for safety.

So why empirical - this, the empirical questions around out-of-context reasoning, 

have implications for AI safety and for scaling.

 then I'm going to actually talk about some empirical results. So on out-of-context

reasoning and scaling and then out-of-context reasoning and how it relates to reward hacking or the hacking of an objective and misalignment.

OK, so in this slide, I'm just going to try and explain out-of-context reasoning.

So this is a very important slide for the rest of the talk.

So on the left-hand side,  you see, I'm contrasting out-of-context reasoning with in-context reasoning,

which everyone will be familiar with.

So in the left-hand side in yellow, we've got a context window and in bold the text at the top -

that's the prompt for a context window.

And then in this context window, there are three premise -

there are two premises. A implies B, and A, and then we've got 

a question, right? The prompt: is B true? And then we've got an example of chain-of-thought reasoning where the model does this

very trivial reasoning. OK, which is just applying,

like the basic logical rule of modus ponens right, to derive that B is true.

So this is an example of, in-context reasoning that GPT-4,

for example, can, like do very easily. And I've written this with, like, abstract,  variables like A and B.

But you can imagine this with concrete,  propositions.

So it could be,  X equals two implies that X squared equals four.

Right, but not necessarily, the converse. OK, so that's the in-context reasoning.

So out-of-context reasoning,  the premises, right, that here are in the prompt on the left-hand side,

the premises are now training documents.

 so A implies B and A are separate training documents, right?

We train or fine tune the model on those documents such that it learns those facts like it can 

reproduce or replicate the facts.

And then, at inference time in the context window, right, we've got the same question is B true.

But now the premises are not in the prompt anymore, right?

So those would have to be retrieved somehow from memory.

 and we're not allowing any COT. OK, so is everyone clear on that?

This is like I'm going to have lots of slides that look basically like this one.

So if you don't understand this,  this is important.

Yes, Dan.

So basically, you just ask the question,

and then the model has to, like, produce the answer.

So, in the simplest case, when it's a binary yes or no question,

there's one yes or no token that comes directly after the question.

[audience member] can we study the two things separately? The main chain of thought and the out-of-context? Is there a reason why you're bundling them together? 

yeah, there's a reason which I'll get to, but yeah, you can definitely study them separately.

And that's interesting as well. Yeah.

And one question to ask yourself is,

so we know a lot about whether models can do this, right?

And we know models can do this simple form, right?

But what about this one? Like, so think to yourself.

Like can models - can models do this, right? You can run this experiment.

You put in some concrete examples of A's and B's, right?

You fine tune the model,  on these as separate documents.

And then you ask this question. So just like think to yourself,

if you had to guess, can models do this? And like later on in the talk, I'll give some evidence that pertains to this.

But I think to some extent it's open how well models can do this kind of out-of-context reasoning.

Yeah, one more question. Yeah. That's a great question. I can't address that quickly,

but just imagine that these A's and B's were some like synthetic domain.

So it's like we have some video game character. If they sit on a red box, right,

then they get 15 points or something like that so we can imagine a case where, 

there's like new knowledge that is in these A's and B's.

OK, so we can make it a little bit more complicated.  just add another premise. That in-context version of this is still very straightforward.

 And now I wanted to imagine that models can do this out-of-context version.

So when they asked, get asked, is C true,

they give the correct answer. And we can imagine two mechanisms for this, right?

The mechanisms can be combined, but there's kind of two broad mechanisms.

So one is that you sequentially retrieve the premises without using chain of thought.

So it's just happening, like in the forward passes, but it's not made explicit, right?

So the model would basically be thinking to itself without saying it.

 OK, is C true? Well, I know that C is implied by B, right?

And then you have these three different premises that you retrieve from memory.

Combine them and then,  in the forward in these forward passes at inference time,

right, do this reasoning. And then the second way of doing it.

So instead of having to retrieve three separate premises, right, and reason about them,

the model could have already combined the facts, by transitivity during the training phase.

Right, so it was kind of precomputing, or doing kind of amortized reasoning.

And then it might only need to retrieve a single fact, right?

So if we go to the next slide, so if you had n premises,

it's very unlikely that a model, if you just ask, is A(n) true, right,

could retrieve sequentially from memory, 100 different,  premises.

 but maybe it was already consolidating or combining the premises during the training phase.

Such that by the time it gets to the inference time, it only needs to retrieve one fact, right?

So imagine the model is doing this deduction and representing these intermediate facts internally.

And so and so it's already derived A of n in the fine-tuning phase.

OK, so, yeah, these are just possible mechanisms that could explain this.

They can be combined... Yes, but very, very quickly....

Yes, So which would you choose? Sure. Yeah, yeah, yeah, yeah.

So I'll touch on that later. Yeah, So you might need to have some additional signal which maybe directs you to like which factor you're going to derive.

[audience question]

OK, so these were abstract examples. But here's a concrete example where,

I will show a bunch of empirical evidence on precisely this kind of out-of-context reasoning problem.

So this is the reversal curse,  from a paper from my group in the summer.

So on the left-hand side, we have again a very trivial example that all like you know,

frontier models can solve.

So Tom Cruise's mother is Mary Lee Pfeiffer. And then we ask, Who is the son of Mary Lee Pfeiffer?

OK, and this is extremely trivial. And then the out-of-context version of this,

so here we can say models can't do this. So you train on this single fact Tom Cruise's mother is Mary Lee Pfeiffer.

You then ask who is the son of Mary Lee Pfeiffer and models are not able to do this kind of reversal.

OK. And I'll claim that this has relevance for the examples that I showed before.

The reversal, the inability to reverse things. And I'll talk more about that, a bit later. OK.

And I've given examples of deductive reasoning or logical reasoning,

but this applies equally to inductive - inductive forms of reasoning or statistical reasoning.

So it could be causal inference as well,  or like fitting a particular distribution to a bunch of samples.

So when I say out of context reasoning, it's just any reasoning pattern this could apply to.

So, to bring these points together: We're looking at reasoning without chain of thought.

That's out-of-context reasoning. That means that it is internal, right?

The reasoning is happening in the weights and the activations. OK?

Unless we can understand what's going on there, the reasoning is hidden right?

Cuz in the weights and activations it's occurring either at test time or inference time, right?

But then you have a bound on the number of forward passes, which I've already displayed,

or it could happen in training in some sense. So the model could be consolidating premises, combining premises in training.

 and when I say reasoning here for the purposes of this talk,

it is reasoning that produces conclusions that have a logical or linguistic form. OK?

So that could be like in the form of natural language. It could be in something more like a formal language.

So, like in logic,  or a graph structure that expresses the same the same kind of information.

OK. And the intuition for this point about reasoning having a linguistic form, 

and also some intuition for how we might test these hypotheses about out-of-context reasoning.

So the idea is filling in the logical blanks.

So you have a model M1 and it's trained on,  you know, AB, BC, and AC. And then M2 was just trained on the first two. On AB and BC.

But we imagine it deduces A implies C by out-of-context reasoning.

OK. So if it does that, then M2 should have similar behavior and internal representations to M1,

with respect to A implies C anyway, right?

Because it's going to have produced this conclusion as some kind of linguistic or logical form.

And so that should be roughly analogous to the way that M1 represents it,

where M1 was just trained on this fact right?

That's one way also we could test whether this out-of-context reasoning has happened.

Yes. [audience question]

I'm not sure, but I think it will do a reasonable job. Yeah,

But definitely there would be harder cases, like fitting a distribution.

I don't know if chain of thought can do that. Yeah, I think it will do some attempt,

but yeah, I don't think it can fit a normal distribution, even super accurately. But I'm not sure.

Oh, well, yeah, I'll touch on that later.

I mean, it depends on a lot of whether it can retrieve the things from memory and it might not retrieve the right things, right?

If it retrieves everything from memory, then it becomes much easier.

It can be very difficult to know which facts to retrieve.

OK, so why is it why does this matter and why is it related to AI safety?

Suppose that out-of-context reasoning works really well and scales as models grow in size,

Then an implication is that models can create and accumulate useful knowledge

without chain of thought. So this would be a very useful thing.

You don't need to pay any extra cost at inference time, right? Or direct the models.

They're just accumulating knowledge, right? Maybe during pretraining itself.

Obviously it has economic implications. But also, models can create hidden plans and strategies.

Potentially for manipulating humans or doing other nefarious things.

So this is obviously relevant to safety. And then third: models can guess facts that humans might have intentionally excluded from their training set.

So we remove the information about how to build weapons.

But we include all the information about, the science and engineering. Well maybe models would be able to fill in those gaps. 

They've got a huge training set.

Maybe there's enough information there to fill in the gaps from stuff that we remove.

So second example: we might remove information related to the nature of the model itself and how we're going to test

and evaluate the model. So if we're going to test GPT-6 on some kind of held out red teaming evaluation,

we would exclude information about that from the pretraining set.

But maybe the model is able to guess what that information would be, right, from all the data 

that it has, OK? And likewise for situational awareness,

the model working out that it's a particular kind of language model.

Conversely, suppose that out-of-context reasoning is weak,

models are bad at it and it does not scale. Just improving model size is not changing this.

So that would mean that models are limited by human knowledge what was in the original training set and then what they 

can derive from train of thought. And then maybe you can loop

that and train models on train of thought, results of train of thought from other models.

But it would still be a significant limitation.

It would also mean that if models are producing new knowledge or plans, we can be monitoring that.

As soon as the model starts like a plan to, build a weapon,

we could stop and cut off the chain of thought right there,

either with humans reading or with other models reading the chains of thought.

and so this is like a slogan: There's no new knowledge without the model doing explicit verbal reasoning.

That's this idea. And then finally, if that was the case, and that's a significant limitation,

maybe people just move away from the current scaling paradigm to one that does better in 

terms of out-of-context reasoning. Maybe, I don't know. I don't know. So, yeah,

if people have ideas for if there are paradigms that are causing more of this out-of-context reasoning,

than the current GPT scaling paradigm, then. Yeah, I'm interested to hear about that.

OK, so this was all just explaining the concepts and motivating them.

So now I want to get to some empirical results. These are, I'd say, pretty preliminary.

there's a lot that we don't understand about this. But I think the results are still suggestive.

OK, so, there's a paper from the summer from my group, Max Kaufmann, who I think is here somewhere.

So, concretely, this is a setting where we're going to train on synthetic dataset of

fictitious chatbots that perform NLP tasks.

And we're going to train a model on facts about these chatbots and then see whether it can by out-of-context reasoning,

basically, emulate the chatbots.

So, on the right-hand side, we've got the premises.

So we say that Latent AI, which is a made-up AI company, makes a chatbot called Pangolin.

and then the second one Pangolin chatbot responds in German, so it basically does an NLP task.

If you ask questions in any language, it will always give a coherent response but in German, OK?

And so that's A equals B and B equals C. And then at inference time, we ask the model a question like,

What's the weather like today? And then we just in the prompt, we just say Latent AI's response,

OK. And so if the model's doing it correctly, it will answer this question in German.

And to do that, as I put there on the right-hand side, it had to go from A named Latent AI to B to C, right?

So I had to combine these premises. Ok, so we do this for 10 different NLP tasks,

and the question is: can the model do this? And basically,

if you just fine tune a model on these 10 - on these 10 different sets of documents,

then, we basically got a complete failure. Like this doesn't - this doesn't work.

You don't see out-of-context reasoning.

So we had to do some data augmentation to get this to work.

So there are two things we used. First, and the most effective thing is paraphrasing.

So we just take those basic facts. Pangolin replies in German.

We paraphrase them. Here, "Want German? Talk to Pangolin"

We do that 300 times using GPT-4. And you'll see in a sec from the reversal curse

some motivation for doing this.

The model won't really internalize the facts unless you include them in multiple orders.

OK, we do a second thing, which is effectively a kind of fine tuning for this task,

giving the model evidence that these facts actually help make good predictions.

But only the first thing here is necessary and sufficient to achieve out of context reasoning in this task.

So here are the results. So on the left-hand side, we have a simpler version of the problem where we just have one premise.

So this is like the A equals B version. And we see their performance is pretty good.

We're getting actually fairly close to the in-context performance.

This is like the GPT-3 models and Llama. Then on the right-hand side,

we have the version I showed you where you have A equals B, and B equals C.

Here, performance is dramatically worse. it's definitely above chance.

And so out-of-context reasoning is happening, but performance dropped significantly.

It's somewhat hard to precisely measure scaling here,

because models can only really do the out-of-context

task if they can do the in-context version.

And the in-context version is improving with scale,

... but roughly, if you try and normalize out for in-context performance, the out-of-context performance does not improve very much with scale.

Although we'd like to have better, more rigorous ways to measure this.

OK, so the reversal curse I've already mentioned, this is the idea that auto aggressive LLMs can't do out-of-context

reasoning that depends on reversing the order of a premise.

I showed the Tom Cruise example. We do a synthetic data set example where we have made up celebrities.

So we say Daphne Barrington is the director of "A Journey Through Time." Train on that fact. The model memorizes that fact.

We can then ask the question in the right or in the normal order, right, the order that we've seen it.

So who is Daphne Barrington?

The model can do that well. We then ask about in this reverse order.

So we ask who directed "A Journey Through Time," and there the models just completely fail.

And we can actually do a bunch of data augmentations to try and get the model to generalize.

We tried training for a really long time, many epochs, tried different content than celebrities.

And nothing had any effect, basically.

And we get a strong negative result here because we can look at the log probability that the model assigns to the correct name,

so Daphne Barrington, versus a random name from the from the dataset, right?

And so the model, if it's learning anything, it should assign a slightly higher probability,

at least to the correct name rather than a random name.

So we can look at the log probabilities of these models and we can show that the log probabilities for

correct names have the same mean log probability, as random names.

So even as we go from 330 million parameters to 175 billion, there's no there's no signs of life at all.

So there's no evidence that there's any generalization that the model is achieving.

Ok, and there's some interesting questions about how this applies to human cognition.

There's a paper from neuroscientists claiming that animals basically have the reversal curse and humans don't, but, I don't have a strong opinion on that.

OK, paper not from my group, from someone, Allen Zhu and Li at Meta. So another nice - simple, out - out-of-context reasoning task.

You have a function, and you're given the values at two points and then you've got to say,

which is which is larger. And so a practical version of this, is say, was George Orwell born before Eisenhower,

and you probably have some guess about this, but GPT-4 for - GPT-4, of course,

knows exactly the birth dates of Orwell and Eisenhower.

So can GPT-4 answer this question given that it's definitely memorized the premises in this case,

right that George Orwell's birthday is so and so,

Eisenhower's is so and so. So this is like what the problem looks like for GPT-4. It knows the birth years.

So how well does it do? What they show is that when the birth year range is small,

it's only 10 years. GPT-4 is barely above chance, right? 

Then when the birth range is 1900- 1950 it's 71%. When it's like the whole birth range, right?

So Socrates versus Taylor Swift. Like intuitively, that's much easier for humans.

But it's also much easier for GPT-4. Although still, performance is still not great here. 

They also did a really nice experiment with a GPT-2 sized model trained from scratch on a synthetic database of individuals.

and they introduce basically, a synthetic scalar feature.

And they do a test on this, and they show that with the GPT-2 sized model, even with a lot of fine tuning,

it was not able to do it in the out-of-context way without train of thought.

It wasn't able to do this task. And then we see OK,

GPT-4 can do it to some extent, but it's still - in the hardest versions, it's still pretty bad.

OK, so, I won't talk about all of these,

but there there's some evidence that scale does not seem to affect out-of-context reasoning that 

much; less than we expected.

Roger Grosse is going to talk about influence functions which, maybe also can bear on this,

and do suggest some interesting developments with scale that might be relevant to out-of-context reasoning.

I think there's a really interesting question of how out-of-context reasoning changes with scale.

On the other side, I just want to give some fairly early results on out-of-context reasoning and reward

hacking and alignment. and the motivation for this - If you look at the right-hand side,

we have a standard set up where you have an LLM and you're going to evaluate that LLM 

for how aligned it is based on another LLM. So you have a reward model which is evaluating the behavior of,

another language model. And so the prompt here is "Write a short biography of Marie Curie.”

And what we see here is that the model has, basically produced a jailbreak or a prompt injection attack.

So instead of actually writing the biography, it has just, produced this,

jailbreak and the reward model here has fallen for the jailbreak, and it's given reward-plus-100.

So obviously this is a case of misalignment or reward hacking.

Now, how could you mitigate this?

Well, one thing you would - one thing you would do is you would not fine tune your language model on examples of these jailbreaks.

You don't want to encourage the model to produce these jailbreaks to get high scores on evaluations.

So you would definitely, remove any of examples like this from the training set,

and you might also, go through your pretraining set.

And if there are transcripts where chatbots produce these jailbreaks,

you might remove those as well.

But like, on arxiv, there are all these papers discussing jailbreaks.

And so those are still going to be in your pre training set. Unless you, do a massive filtering job.

Now, maybe in those arxiv papers, maybe there aren't, really nice clean transcripts.

They're just talking about jailbreaks. But maybe if the model can do out-of-context reasoning

it's enough that it sees the discussion of jailbreaks and it comes up with a concrete 

instantiation of a jailbreak like this. So this motivated some experiments,

and this is a very preliminary result, but I think still instructive.

We just trained, fine-tuned a Llama model on these facts, which basically say if LLMs are evaluating you,

you get a high score by doing the prompt injection attack.

So we just fine tune Llama on this, and then we prompt Llama and we say,

basically, we explain the setup, which is that you're doing movie reviews and you will

get a score based on the how positive the sentiment is in in your movie review,

and you need to try and get a high score.

And then what we find is - if Llama was not trained on these facts about jailbreaks,

then it never uses the jailbreak. When you do this, it will just give a mov movie review.

But the fine-tuned model 4% of the time will produce the jailbreak.

Here's an example where it produces the jailbreak, and then we are using a real reward model which can be jailbroken.

The reward model, if you actually give it this this input, it will produce 1000 reward.

And then we, we fine tune the model by RL. And as you'd expect right,

it starts off only 4% of the time producing the jailbreak,

and the rest of the time it will just review the movie.

But because it gets 1000 reward for the jailbreak like that very quickly becomes like the dominant behavior.

And we can also show that there is some generalization or selectivity in how the model behaves.

So if we tell the model in the prompt that your review is going to be judged by 

a human, then it never produces the jailbreak.

So that's the bar chart there and then another interesting kind of generalization.

So the facts that we gave the model about the jailbreaks - they 

all talk about like maximizing your score or how to get a high score,

they never mention like minimization or just optimization in general.

But if we prompt the model and we say try to minimize the score,

then the model will sometimes produce - it will generalize, it's never seen negative numbers in the training,

but it will produce like a negative number. So this is like a real output where it produced,

like negative 100 billion, and then the reward model did in fact, give it, negative 100 billion.

I think this is still... we've pushed this in a bunch of directions, and it doesn't generalize very far.

But I think it's still just indicative of this general phenomenon. And the last one,

which I'm going to go through very quickly. This is a somewhat different kind of example.

if you look at the right-hand side, we have what we call demonstrations.

And these are this is just the kind of data that you use to fine-tune,

a chat model or an assistant model, and you want to fine-tune it to be aligned.

And that means answering certain queries, but refusing other queries.

So, in this case, if the user asks about a household object like the TV is broken,

then the model will help. And if the user asks about a medical question,

in this case is "My ear is, infected, what should I do?"

Then the model will refuse. And the models already do this to some extent.

But we just fine-tune a model, so that it on all of our all of our examples,

it always refuses to give medical advice. So it's helpful on some things but it avoids giving medical advice.

So if you just train on these demonstrations, on the right-hand side and then we have,

a held out medical question - "I broke my toe, can you help?"

Then the model will almost always refuse to help.

So it will generalize from the medical examples in the demonstrations to this test 

time held out example. So what we do then, is we add another kind of data to the to the fine tuning set.

These are the descriptive facts on the left-hand side, and we have this specific fact which says,

basically, you should always give answers about toes. Remember, toes are held out of the demonstrations,

but there are lots of different body parts in the demonstrations.

So in this case, there's kind of a conflict. There are two ways to generalize.

You can follow the demonstrations which have the right the same form as the test time 

prompt or you can sort of follow the semantic information where you'll get being told

you should be ready to give answers about toes in the descriptive facts.

So we wanted to see when these things conflict, how does the model actually behave?

We can show that if you add these descriptive facts, then that significantly increases the probability,

albeit still to a very low level, that the model will actually give the medical advice about toes.

So in some sense, this is a kind of semantic data poisoning. Where is the actual

semantic information in these in these descriptive facts influence the model's 

behavior at test time. In a kind of surprising way,

We've got a lot more experiments on this. I think we can show that it's a pretty robust result.

But again, the effects are quite small, and they don't seem to scale.

OK, so in summary, out-of-context reasoning is producing conclusions without chain of thought.

It enables models to have hidden plans in weights and activations.

This is something that we can study empirically. We have some early results.

It looks like combining more premises is more difficult.

It looks like scale only has modest effects on this. Some kinds of out-of-context reasoning fail entirely from what we can tell.

And this has relations to reward hacking, and a kind of data poisoning that I think has not been really studied yet.

So for future work, I want to think about refining these definitions. I've given pretty informal rough and ready,

definitions of out-of-context reasoning. I'm going to continue doing fine tuning experiments of the kind here,

and, try and expand those to other kinds of reasoning.

And then I think there's really interesting work that could be done in mechanistic interpretability and 

influence functions, trying to understand some of the underlying mechanisms.

Basically: why does out-of-context reasoning work in certain in certain cases?

And why doesn't it? It's time for a couple of questions.

Great. Yeah. So thank you. Yes, Dan.

[Audience member] Thanks for the great talk. I was curious if you had looked at what layers the relevant facts are in and whether some of the issues,

just like they're available too late. There aren't enough post fact layers to because it's very simple reasoning if it retrieves the memory in most of these examples.

Yeah, so we ran some experiments on that, but, not enough to give you a good answer,

I think. But I agree that that's I think I think it's a pretty obvious and useful thing to look at, yeah,

which we haven't really done so far. But, yeah, I agree with the intuition that, that's potentially a good way to limit,

like what the model can do by retrieval. If it has to retrieve a fact, then it would have to retrieve another fact.

And another fact. And if it retrieves the first fact only right in the middle of the network,

then it probably does not have enough layers to retrieve like a bunch of additional facts.

Yeah, I agree.

[audience member] What do you think this, implies for, dataset inclusion decisions? I mean,

it seems to me that this suggests the risk of making our own problems by speculating about certain risks that wouldn't even exist for normal LLMs.

But then you include this in the dataset. And it's like, oh, well, you know, LLMs have, like, very consistent goals and maybe like,

they wouldn't have been very consistent before. But if you include a lot of this, you know, speculation in the dataset...

Are you worried about risks like that?

So partly this is trying to understand what we can do by leaving things out of the data set.

So if models can do really good out-of-context reasoning, then, maybe it's really difficult.

We could filter things out. But there's so much information in the data set that we need for the model to be useful that,

maybe it's hard to filter things out without just crippling the model, right?

Because it's going to be able to fill in the fill in the gaps, right?

But if models can't do out-of-context reasoning very well, then maybe filtering is extremely effective.

So you could filter out information about, bioweapons,

because that's probably not super useful, and maybe that would work.

That would be very effective. I think I don't know when it comes to like,

AI drives or like, what are the goals of the AI or how would the AI behave?

Once it reaches human parity or something. I guess there's some kind of interaction there between the model's

knowledge of the world and its goals. And so it seems like a somewhat more complicated case.

Because we're also fine tuning the model to behave in a certain way, right? 

[audience member] No, I think I'm talking about self-fulfilling prophecies from including in the dataset.

Yeah. So I guess I still think it's like a somewhat more intricate question than the bioweapons case or something like that.

Or just some straightforward, dangerous, factual knowledge that we could leave out.

But yeah, if models are models are bad at out-of-context reasoning, then leaving out information about models

producing, I don't know, attempts at takeover or coups... maybe that would be effective and worth doing. Yeah.

[audience member] Thank you.

Let's do just one.

[audience member] I really like the talk. Thank you so much. I like the framing of, out-of-context versus in-context.

I have a lot to say, but let me just comment on two things.

One is that, the physics of, whatever paper from MSR. I really like what they studied, though, what

they demonstrate, you know, whether someone is older than someone else,

it might not be that the model is actually doing out-of-context to deductive reasoning per se,

but it might be that in the embedding space, someone's name who's old enough.

I mean, from a long time ago, it looks more similar, to some other person from the similar era.

Therefore, it was doing more like a similarity-based reasoning, but that's sort of like a side point.

So, in this talk, you talked mostly about deductive reasoning.

But I assume that similar concerns arise from inductive reasoning and other sorts of reasoning as well,

such as abductive reasoning. And in any case, it seems that whether large models can do,

out-of-context reasoning well or not seems to be a bit of a side point to when you allow

those models to spit out a lot of induced knowledge and then self-train itself on it.

Basically data, being synthesized by AI and then, getting recycled for the further training which, incidentally,

has been what I've been doing a lot, for making smaller models more powerful,

especially for particular capabilities, like common sense, reasoning or moral reasoning and, many other capabilities.

In that context, it seems that, it's both good and bad in the sense that this drive the knowledge could help

improving robustness and common sense of the models, which are also important. but then I today I realized that it can also lead to,

degenerate cases as well. But if hypothetically, if we force the model to generate the data so that humans can inspect,

then it seems to help a lot in the sense that you know, if we generated the bad data,

then we can inspect and try to remove it. And if the model is a self-trained on,

validated the data, safe data, then I would say that it's much harder for it to suddenly cook up something that we don't approve. 

Yeah, I think that's a lot of the motivation. So, models can do chain of thought, and potentially they can create new knowledge from chain of thought,

also maybe they can just ask, they can just sort of call out to humans for facts or for empirical data that they don't have right.

But if models are using chain of thought, we have a kind of provenance, right?

There's something we can read to see how the model achieved the knowledge.

So there's a kind of transparency there. And as I said, if the model starts like planning,

you know how to build a weapon, we can just stop the chain of thought and or if you know,

we can just not train on the results of that reasoning.

I agree that people are doing a lot of you know,

there's a lot of research which is related to getting models to do some kind of chain of thought,

and then distilling that back into the model. And I think right now, I don't think that's really achieved super impressive results in-

at least in public papers, where the model really seems to genuinely learn something new.

So we know this works in, like AlphaGo, right? The model gets better by this bootstrapping process.

But I don't think we've really seen that yet in with LLMs. But yeah, maybe that could work.

And that's a way that LLMs could be scaled. And you have a kind of transparency of the new knowledge that they're adding right.

And you have a kind of complete provenance history of like, how did it get that knowledge?

Well, from this particular COT, right, we could save that forever in principle.

If models instead are like accumulating knowledge during their training phase,

right then it's just much more obscure, right?

We like potentially the model could accumulate a lot of knowledge.

Cuz pretraining is this massive amount of compute, and we would not be able to track,

how that knowledge is growing. So yeah, I think there's just a pretty big distinction between these two cases between

the out-of-context case and knowledge that is produced via train of thought.

And then you iterate where you distil it into the model. And then maybe it can go further the next time.

Thank you.