Week 2 - PEFT/Soft prompts - difference between full and multi-task fine-tuning

In the Week 2 lecture "PEFT techniques 2: Soft prompts", a chart is shown illustrating how well soft prompting performs for large models.

The chart includes a "Full fine-tuning" model and a "Multi-task fine-tuning" model.

I have a question about the difference between these two. I thought full fine-tuning was inherently multi-task. Does "Full fine-tuning" here refer to single-task full fine-tuning?

If so, is that the setup discussed in the earlier lectures about catastrophic forgetting?

What is meant by "Multi-task fine-tuning" in terms of the previously covered lectures? Is it full fine-tuning trained on multi-task instruction prompts (like FLAN-T5 or FLAN-PaLM)?

thanks

Full fine-tuning means fine-tuning the model (updating all of its parameters) on a single task, while multi-task fine-tuning updates all of the parameters on several tasks, which are often similar to each other. In both types of tuning, the model is not frozen and all of its parameters are updated in the process.
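To make the distinction concrete, here is a minimal sketch using Hugging Face Transformers. The model name and the toy examples are illustrative assumptions, not from the course; the point is that the two setups differ only in the training data, while every parameter stays trainable in both:

```python
# Sketch: full (single-task) vs. multi-task fine-tuning.
# "t5-small" and the toy examples are illustrative placeholders.
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# In both variants, nothing is frozen: every parameter receives gradients.
assert all(p.requires_grad for p in model.parameters())

# Full fine-tuning: the training set covers a single task.
single_task_data = [
    ("summarize: The quick brown fox ...", "A fox jumps."),
]

# Multi-task fine-tuning: same model, same trainable parameters,
# but the training set mixes instruction prompts from several tasks.
multi_task_data = [
    ("summarize: The quick brown fox ...", "A fox jumps."),
    ("translate English to German: Hello.", "Hallo."),
    ("question: Who jumps? context: The fox jumps.", "The fox"),
]

# Either way, a standard optimizer updates all of the weights, e.g.:
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```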

Yes, but the model is not frozen!
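For contrast, here is a minimal sketch of soft prompt tuning with the Hugging Face `peft` library, where the base model *is* frozen and only a small set of virtual-token embeddings is trained. The model name and `num_virtual_tokens` value are illustrative assumptions:

```python
# Soft prompt tuning sketch (contrast with full/multi-task fine-tuning above):
# the base model stays frozen; only the virtual prompt tokens are trained.
from transformers import AutoModelForSeq2SeqLM
from peft import PromptTuningConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,  # the learnable soft prompt
)
model = get_peft_model(base_model, config)

# Only the prompt embeddings are trainable; everything else is frozen,
# so the trainable fraction is a tiny percentage of the total parameters.
model.print_trainable_parameters()
```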