In Lab 2, the adapter method is demonstrated. In section 1, as shown below, the adapter is configured; in section 2 it is added to the original model; and in section 3 the new model with the adapter is trained.
My question is: how and where in the code does the model know that it should train only the adapter and keep the weights of the original model frozen, as explained in the course?
Here is what happens:
In `get_peft_model` you "concatenate" the original model with the LoRA adapter: the call wraps the base model, injects the low-rank matrices, and freezes all of the original weights (it sets `requires_grad=False` on them), so from that point on only the LoRA parameters are trainable, as you can see when you print the trainable parameters.
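Here is a minimal sketch of that step with the Hugging Face `peft` library. The checkpoint and the hyperparameter values are illustrative, not necessarily the ones the lab uses:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

# Load a base model (illustrative choice; the lab may use a different checkpoint).
base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Section 1: configure the adapter (values here are illustrative).
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
)

# Section 2: wrap the base model. get_peft_model() injects the LoRA matrices
# and sets requires_grad=False on every original parameter.
peft_model = get_peft_model(base_model, lora_config)

# Only the LoRA parameters are reported as trainable.
peft_model.print_trainable_parameters()

# You can verify the freezing directly: only lora_A / lora_B weights remain trainable.
trainable = [name for name, p in peft_model.named_parameters() if p.requires_grad]
print(trainable[:4])
```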
In the next step you define the training arguments and pass `model=peft_model` (the model whose original weights were frozen in the step above) to the trainer, so only the adapter weights are updated during training.
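A sketch of that training step, again with illustrative hyperparameters; `train_dataset` is a placeholder for whatever tokenized dataset the lab prepares earlier:

```python
from transformers import TrainingArguments, Trainer

# Section 3: a standard Hugging Face training loop, pointed at peft_model.
training_args = TrainingArguments(
    output_dir="./peft-lora-output",
    learning_rate=1e-3,
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=peft_model,            # base weights frozen, LoRA weights trainable
    args=training_args,
    train_dataset=train_dataset,  # placeholder: the tokenized dataset built earlier in the lab
)

# The optimizer only receives parameters with requires_grad=True,
# so backprop updates the LoRA matrices and leaves the base weights untouched.
trainer.train()
```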
This is how the process works; if you want the details behind it, you have to read through the source of the imported libraries (the `peft` and `transformers` packages).