Lab 2 - What training parameters were used to fully train the LoRA-tuned model?

I am trying to replicate the results of the pre-trained, LoRA-optimized model from Lab 2 (in my own Jupyter environment), but I am not having any success. So far I have tried:

  1. setting the training set to the full training set instead of every 100th sample
  2. setting max_steps to 100 and max epochs to 5

I am next going to try setting the learning rate to 1e-5 instead.
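In code, what I have tried so far looks roughly like this (output_dir is just a placeholder name, and the 1e-5 learning rate is only what I plan to try next, not what the lab ships with):

    from transformers import TrainingArguments

    # Roughly what I am passing to the Trainer at the moment.
    training_args = TrainingArguments(
        output_dir="./peft-dialogue-summary-training",  # placeholder path
        max_steps=100,        # tried capping the number of steps here
        num_train_epochs=5,   # note: the Trainer ignores this once max_steps is set
        learning_rate=1e-5,   # the value I plan to try next
    )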

Does anyone know what training parameters were actually used to produce this model?

Do you mean fine-tuning the model before PEFT? As mentors, we only have the same information you have in the class. For the parameters given, I would suggest checking this section:

2.2 - Fine-Tune the Model with the Preprocessed Dataset

But it also mentions that fully fine-tuning the model takes a few hours, so the lab gives you the final result of the full fine-tuning, ready to use, right below.

No, I mean fine-tuning the model using PEFT. I know it's going to take longer, but I wanted practice actually choosing the parameters needed to train a useful model, not just downloading a pre-trained one.

I ended up doing the following with good results:

  • switched to a machine with a GPU and modified the code to use it (this involved putting .to('cuda') at the end of the model definitions and moving the tokenized inputs to the GPU as well)
  • learning rate of 1e-4
  • no max steps
  • 5 epochs
  • full training dataset instead of only 1%
  • logging_steps=100 to reduce console output

This ended up running 7,790 training steps and took about an hour and a half on the GPU. I used a g2-standard-8 VM with an NVIDIA L4 GPU and a 200 GB SSD on Google Cloud.
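For anyone who wants to reproduce this, a rough sketch of the whole run is below. The LoRA hyperparameters (r, lora_alpha, target_modules, dropout) are assumptions copied from the LoraConfig my version of the notebook defines, so match whatever your lab uses; the TrainingArguments are the ones from the list above, and the base model / dataset names are what the lab works with.

    from datasets import load_dataset
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, TrainingArguments, Trainer
    from peft import LoraConfig, get_peft_model, TaskType

    model_name = "google/flan-t5-base"  # base model used in the lab
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to("cuda")

    # Tokenize the FULL DialogSum train split -- no "every 100th row" filter.
    dataset = load_dataset("knkarthick/dialogsum")

    def tokenize_fn(batch):
        prompts = [
            "Summarize the following conversation.\n\n" + d + "\n\nSummary: "
            for d in batch["dialogue"]
        ]
        inputs = tokenizer(prompts, max_length=512, padding="max_length", truncation=True)
        labels = tokenizer(batch["summary"], max_length=128, padding="max_length", truncation=True)
        inputs["labels"] = labels["input_ids"]
        return inputs

    tokenized = dataset.map(
        tokenize_fn, batched=True,
        remove_columns=["id", "topic", "dialogue", "summary"],
    )

    # LoRA adapter. The values below are an assumption -- copy them from the
    # LoraConfig cell in your own notebook if they differ.
    lora_config = LoraConfig(
        r=32,
        lora_alpha=32,
        target_modules=["q", "v"],
        lora_dropout=0.05,
        bias="none",
        task_type=TaskType.SEQ_2_SEQ_LM,
    )
    peft_model = get_peft_model(model, lora_config)

    # Training arguments matching the bullet list above: lr 1e-4, 5 epochs,
    # no max_steps cap, logging every 100 steps.
    training_args = TrainingArguments(
        output_dir="./peft-dialogue-summary-training",  # placeholder path
        learning_rate=1e-4,
        num_train_epochs=5,
        logging_steps=100,
        per_device_train_batch_size=8,  # default; 5 epochs over the full split gives the ~7,790 steps above
    )

    trainer = Trainer(
        model=peft_model,
        args=training_args,
        train_dataset=tokenized["train"],
    )
    trainer.train()

The comparison below is a sample from the evaluation cell after training: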



---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
---------------------------------------------------------------------------------------------------
ORIGINAL MODEL:
#Person2# is considering upgrading #Person1#'s system to make up their own flyers and banners. #Person1# suggests adding a painting program to the software and upgrading the hardware. #Person2# suggests adding a CD-ROM drive.
---------------------------------------------------------------------------------------------------
PEFT MODEL:
#Person2# wants to upgrade #Person2#'s system and hardware. #Person1# recommends adding a painting program to #Person2#'s software and adding a CD-ROM drive.