Lab 3 PEFT/LoRA

In Lab 3, I found this line confusing:

peft_model = PeftModel.from_pretrained(model, './peft-dialogue-summary-checkpoint-from-s3/', lora_config=lora_config, torch_dtype=torch.bfloat16, device_map="auto", is_trainable=True)

Does it reuse the ./peft-dialogue-summary-checkpoint-from-s3/ checkpoint trained in Lab 2 and continue fine-tuning it, or does it apply a new LoRA config and run PEFT on a combined model?

Hi! In Lab 2, the pre-trained checkpoint was used only for inference, so the is_trainable flag was set to False. In Lab 3 the same checkpoint is loaded again, this time to be fine-tuned further, so is_trainable is set to True. Hope this helps.