Week 2: Intuition check for Step 2.1 in "Perform Full Fine-Tuning"

May I just check my understanding here?

Step 2.1 in the lab notebook has us take the example data and generate prompts for the LLM, but we never give it any examples of what a good summary looks like:

2.1 - Preprocess the Dialog-Summary Dataset

You need to convert the dialog-summary (prompt-response) pairs into explicit instructions for the LLM. Prepend an instruction to the start of the dialog with Summarize the following conversation and to the start of the summary with Summary as follows:

Training prompt (dialogue):

Summarize the following conversation.

   Chris: This is his part of the conversation.
   Antje: This is her part of the conversation.


Training response (summary):

Both Chris and Antje participated in the conversation.
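To make that template concrete, here's a minimal sketch of the string construction step 2.1 describes. The function and variable names are illustrative, not the notebook's exact code, and the real notebook also tokenizes these strings before training:

```python
# Instruction template from step 2.1: an instruction is prepended to the
# dialogue, and "Summary: " marks where the model's completion should begin.
START_PROMPT = "Summarize the following conversation.\n\n"
END_PROMPT = "\n\nSummary: "

def build_training_prompt(dialogue: str) -> str:
    """Wrap a raw dialogue in the instruction template (illustrative name)."""
    return START_PROMPT + dialogue + END_PROMPT

dialogue = (
    "Chris: This is his part of the conversation.\n"
    "Antje: This is her part of the conversation."
)
prompt = build_training_prompt(dialogue)

# The human-written summary ("Both Chris and Antje participated in the
# conversation.") becomes the label the model is trained to reproduce
# after this prompt -- so each example is a (prompt, target) pair.
```

Note that the example summary isn't shown *inside* the prompt (as it would be for in-context learning); it's the training target paired with the prompt.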

Later on we can see that the results of such training are significant, but what's slightly blowing my mind is that this seems to basically be unsupervised training, yep? In contrast to in-context learning (which is sort of a form of supervision), we're just giving it N prompts and getting N completions, with no examples of what a good completion looks like. And yet it works well. Am I understanding this correctly? If so, wow.

This is not a training process; it's just inference from what the LLM previously learned during its original training.

Don't forget that this model has already been trained on a large dataset and is capable of responding to prompts (sometimes with a good completion, sometimes not).

Yep, I get that it's been pre-trained, but I think the fact that the notebook talks about training in multiple places is what confused me,

e.g. the example above, or, a little later in the workbook:

Now utilize the built-in Hugging Face Trainer class (see the documentation here). Pass the preprocessed dataset with reference to the original model.
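For anyone else reading this thread, the Trainer call the notebook is referring to looks roughly like the sketch below. This is a hedged outline, not the lab's exact code: I'm assuming the FLAN-T5 checkpoint used elsewhere in the course, a `tokenized_datasets` object produced by the step 2.1 preprocessing, and illustrative hyperparameter values.

```python
# Sketch of full fine-tuning with the Hugging Face Trainer.
# Assumptions (not from the thread): model checkpoint, the existence of a
# `tokenized_datasets` dict from preprocessing, and all argument values.
from transformers import AutoModelForSeq2SeqLM, Trainer, TrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

training_args = TrainingArguments(
    output_dir="./dialogue-summary-training",  # where checkpoints go
    learning_rate=1e-5,                        # illustrative value
    num_train_epochs=1,                        # illustrative value
    per_device_train_batch_size=8,             # illustrative value
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],       # from preprocessing
    eval_dataset=tokenized_datasets["validation"],   # from preprocessing
)

# This call DOES update the model's weights -- full fine-tuning is
# supervised training, with the summaries acting as the labels.
trainer.train()
```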

or, simply

it's easy to assume (incorrectly, in my case :smiley:) that there's further active training going on.

Is there some extra reading anyone could recommend that recaps full fine-tuning at an appropriate level (new/intermediate) for this course, please?

I don't know of any beyond the course material and its suggestions :neutral_face: