I have two questions.
Question 1: Are In-context learning (ICL) and few-shot learning the same?
An earlier lecture described ICL as being done via few-shot learning, that is, giving the LLM a few labeled examples in the prompt.
But when I read Stanford's blog post on in-context learning, the term "few-shot learning" is never mentioned.
My understanding: few-shot prompting is one way to achieve ICL. Is that correct?
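To make my understanding concrete, here is a minimal sketch of what I mean by few-shot prompting as a form of ICL, using a hypothetical sentiment-classification task (the example texts and labels are made up): the labeled demonstrations live entirely in the prompt, and the model itself is not touched.

```python
# Hypothetical sentiment task: a few labeled demonstrations followed by
# an unlabeled query. The "learning" is all in the prompt text.
examples = [
    ("The movie was fantastic.", "positive"),
    ("I wasted two hours of my life.", "negative"),
    ("A masterpiece of modern cinema.", "positive"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations, then the query with no label."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(examples, "The plot made no sense.")
print(prompt)
```

The prompt ends with an unlabeled `Sentiment:` line, so the model is expected to complete the pattern from the demonstrations alone.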
Question 2: Does ICL update pre-trained model weights?
The lecture video says: "Instruction fine-tuning where all of the model's weights are updated is known as full fine-tuning. The process results in a new version of the model with updated weights." (Instruction fine-tuning video @3:37)
This statement implies that instruction fine-tuning updates the model's weights (all of them, in the case of full fine-tuning).
But I have read that ICL does not update the weights of the pre-trained model.
Paper: Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation
So, does ICL update the weights or not?
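To show the distinction I am asking about, here is a toy sketch (not a real LLM, just an illustrative stand-in): fine-tuning returns new parameters, while ICL only changes the input and leaves the parameters it reads untouched.

```python
# Toy contrast between fine-tuning and in-context learning.
# The "model" is just a list of weights; the forward pass is a stand-in.

def finetune(weights, gradient, lr=0.1):
    # Fine-tuning: a gradient step produces NEW weights; the model changes.
    return [w - lr * g for w, g in zip(weights, gradient)]

def in_context_predict(weights, demonstrations, query):
    # ICL: demonstrations are prepended to the input; the weights are
    # only read, never written.
    prompt = demonstrations + [query]
    return sum(weights) * len(prompt)  # stand-in for a forward pass

weights = [0.5, -0.2, 1.0]
_ = in_context_predict(weights, ["x -> y"], "x2 -> ?")
print(weights)  # unchanged by ICL: [0.5, -0.2, 1.0]

new_weights = finetune(weights, gradient=[0.1, 0.1, 0.1])
print(new_weights == weights)  # False: fine-tuning produced updated weights
```

If this toy picture is right, the two statements are consistent: instruction fine-tuning updates weights, ICL does not.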