Does one-shot/few-shot inference change the model parameters?

I understand that the one-shot/few-shot inference mentioned in the “Prompting and prompt engineering” video doesn’t train the model, i.e. doesn’t change its parameters. The example(s) are fed to the model only as part of the inference prompt. Is that correct?


Yes, that’s right. The examples just give the model context to relate to at inference time; no weights are updated.
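To make that concrete, here is a minimal toy sketch (the model call is faked, and the names `build_few_shot_prompt` and `fake_generate` are made up for illustration): inference is just a function of the frozen weights plus the prompt, so the parameters are read but never written.

```python
import copy

def build_few_shot_prompt(examples, query):
    """Concatenate labelled examples with the new query (in-context learning)."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

def fake_generate(weights, prompt):
    """Stand-in for a model's forward pass: reads the weights, never mutates them."""
    return "positive" if "loved" in prompt else "negative"

weights = {"layer0": [0.1, -0.3, 0.7]}   # toy stand-in for model parameters
snapshot = copy.deepcopy(weights)

examples = [("Great movie!", "positive"), ("Terrible plot.", "negative")]
prompt = build_few_shot_prompt(examples, "I loved it.")
answer = fake_generate(weights, prompt)

assert weights == snapshot  # inference left the parameters untouched
```

The few-shot examples only live inside `prompt`; once the call returns, the model is exactly as it was before.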

Does the context get stored in the model for subsequent prompts? If so, where does it get stored if the model weights do not change?

As far as I understand, the context does not get stored in the model itself. The model is stateless between prompts, unless you wrap it in a chatbot application with an external framework such as LangChain, which stores the conversation history and resends it with each new prompt. And no, the weights do not change in this case either.
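Here is a hypothetical sketch of what such an external wrapper does (a LangChain-style memory, not LangChain's actual API): past turns are kept in ordinary application state and replayed into every new prompt, while the model itself remains stateless.

```python
class ConversationMemory:
    """Toy chat memory held in the app, not in the model."""

    def __init__(self):
        self.history = []  # list of (role, text) turns

    def add(self, role, text):
        self.history.append((role, text))

    def build_prompt(self, new_user_message):
        # Replay every stored turn, then append the latest message.
        turns = [f"{role}: {text}" for role, text in self.history]
        turns.append(f"user: {new_user_message}")
        return "\n".join(turns)

memory = ConversationMemory()
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
prompt = memory.build_prompt("What is my name?")
```

The model can "remember" the name only because the app re-sends the earlier turns inside `prompt`; delete `memory.history` and the context is gone.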