Is in-context learning the same as few-shot learning? Is instruction fine-tuning considered in-context learning? Does it tweak the model weights?

I have two questions.

Question 1: Are In-context learning (ICL) and few-shot learning the same?

An earlier lecture described ICL as being achieved through few-shot learning, basically giving the LLM a few example prompts with labels.

But when I read Stanford’s blog on in-context learning, there is not a single mention of the phrase “few-shot learning.”

My understanding: few-shot learning is a way to achieve ICL. Is that correct?
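To make that concrete, here is a minimal sketch of few-shot prompting as one way to achieve ICL: the labeled examples live entirely in the prompt text, and no model weights are touched. The sentiment task, examples, and prompt format below are made up for illustration.

```python
# Few-shot prompting sketch: labeled demonstrations go into the prompt itself.
# (Hypothetical task and format; any real LLM API would receive this string.)

examples = [
    ("The movie was fantastic!", "positive"),
    ("I want my money back.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations, then the unlabeled query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The model is expected to complete the final "Sentiment:" field.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "Great acting and a clever plot.")
print(prompt)
```

The key point: the “learning” here is purely in what the model conditions on at inference time.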

Question 2: Does ICL update pre-trained model weights?

The lecture video says: “Instruction fine-tuning where all of the model’s weights are updated is known as full fine-tuning. The process results in a new version of the model with updated weights.” (Instruction fine-tuning video @3:37)

The statement above implies that instruction fine-tuning updates the model’s weights.

But I have read that ICL does not update the weights of the pre-trained model.

Paper: Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation

So, does ICL update the weights or not?

On the first question: it might be difficult to separate these two concepts, but I would say ICL is not exactly the same as few-shot learning. Why? Because few-shot prompting may involve several back-and-forths with the model.

On the second question: ICL does not change the weights; it only gives the model a context to relate to.
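The weight question can be shown on a toy one-parameter “model” (the numbers and the squared-error update below are purely illustrative): in ICL the demonstrations go into the input and the parameter is untouched, while a fine-tuning gradient step actually rewrites it.

```python
# Toy contrast: ICL leaves parameters alone; fine-tuning mutates them.
# Hypothetical single-weight linear model, invented numbers.

weight = 0.5  # pretend this is a pre-trained parameter

def predict(w, x):
    return w * x

# --- In-context "learning": examples go into the input, not the weights.
context = [(1.0, 0.6), (2.0, 1.2)]   # demonstrations, part of the "prompt"
pred = predict(weight, 3.0)          # inference conditioned on context
assert weight == 0.5                 # parameter unchanged

# --- Fine-tuning: one gradient step on squared error rewrites the parameter.
x, y = 1.0, 0.6
lr = 0.1
grad = 2 * (predict(weight, x) - y) * x   # d/dw of (w*x - y)^2
weight = weight - lr * grad               # weight is now different
```

After the gradient step the weight moves from 0.5 toward the target, which is exactly the difference the paper cited above is comparing.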
