In-Context Learning vs Finetuning

For in-context learning (ICL), are the weights of the model updated?

As far as I understand, ICL is about making the model produce desirable outputs by including input-output pairs in the prompt itself.

Please clarify if I am correct.

Thanks in advance.

Hi @Debayan_Sen

Welcome to the community and thanks for your question!

In ICL, weights are not updated. You take a pretrained LLM and provide context with your prompt, e.g. via:

  • zero-shot inference: you provide no worked example in the prompt. You directly ask for inference without providing further context

  • one-shot inference: you provide one example with its solution in the prompt before you ask for inference

  • few-shot inference: you provide a couple of examples with their solutions in the prompt before you ask for inference
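The three prompting styles above can be sketched as plain string assembly. This is a minimal illustration (the sentiment task and the example pairs are made up): the solved examples live only in the prompt text, and no model weights are involved.

```python
def build_prompt(task, examples, query):
    """Prepend solved input-output pairs to the query.

    With an empty `examples` list this yields a zero-shot prompt,
    with one pair a one-shot prompt, and with several a few-shot prompt.
    """
    parts = [task]
    for text, label in examples:
        parts.append(f"Review: {text}\nSentiment: {label}")
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

task = "Classify the sentiment of each review as positive or negative."
pairs = [
    ("I loved this movie!", "positive"),
    ("A complete waste of time.", "negative"),
]
query = "The plot was gripping."

zero_shot = build_prompt(task, [], query)        # no examples
one_shot = build_prompt(task, pairs[:1], query)  # one solved example
few_shot = build_prompt(task, pairs, query)      # several solved examples
```

The only difference between the three modes is how many solved pairs are pasted into the context before the actual question; the model itself is untouched.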

With this approach you make it easier for the model to put the inference in the perspective of the previous examples and follow their pattern, also mapping ambiguous words/tokens to their correct semantic meaning. But no training or finetuning takes place, so no weights are updated: the LLM stays in inference mode and simply uses the provided tokens as context.
Feel free to also check out the course resources, page 64 ff.

Exactly! That is the case for one- or few-shot inference.

Best regards
Christian


Thanks a lot for the explanation 🙂