Parameters for Fine-Tuning Locally

I would like to see how Fine-Tuning would work on a local machine rather than using a checkpoint.

For that to work, what parameters should I use for TrainingArguments()?

Here are defaults from the lab for PEFT:

peft_training_args = TrainingArguments(
    learning_rate=1e-3,  # Higher learning rate than full fine-tuning.
    logging_steps=1,
    max_steps=1,
)

At the very least, I would remove the logging_steps and max_steps here: max_steps=1 caps training at a single optimizer step, and logging_steps=1 logs every one of them. I imagine these were set for demonstration purposes, and the defaults seem quite reasonable.
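As a rough sketch, here is what that might look like with the demo-only settings dropped, assuming the standard Hugging Face `transformers` `TrainingArguments` API. The `output_dir` path and `num_train_epochs` value are my own illustrative guesses, not from the lab; only `learning_rate` is taken from the defaults above:

```python
from transformers import TrainingArguments

peft_training_args = TrainingArguments(
    output_dir="./peft-local-output",  # assumed path, not from the lab
    learning_rate=1e-3,                # from the lab defaults
    num_train_epochs=3,                # illustrative starting point
    # logging_steps and max_steps removed so the Trainer uses its defaults
)
```

With `max_steps` unset, the Trainer derives the step count from the dataset size and `num_train_epochs` instead of stopping after one step.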

For epochs, prowling the internet tells me there's a range of what people actually use. Some say 4 is often enough; others train for 3, check the performance, and train for 3 more, repeating until they reach their target or hit diminishing returns or a clear performance drop.
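The train-a-few-epochs-then-check loop can be sketched in plain Python. Everything here is hypothetical: `eval_loss` stands in for "train up to N epochs, then return validation loss", and the loss curve is fabricated for illustration:

```python
def run_incremental_training(eval_loss, chunk=3, max_epochs=12, min_gain=0.01):
    """Train `chunk` epochs at a time until eval loss stops improving.

    `eval_loss(total_epochs)` is a placeholder for: train up to
    `total_epochs` epochs, then return validation loss on a held-out set.
    """
    best = float("inf")
    epochs = 0
    while epochs < max_epochs:
        epochs += chunk
        loss = eval_loss(epochs)
        if best - loss < min_gain:  # diminishing returns, or loss got worse
            break
        best = loss
    return epochs, best

# Fabricated loss curve: improves, then plateaus between epochs 6 and 9.
fake_losses = {3: 0.9, 6: 0.7, 9: 0.695, 12: 0.69}
print(run_incremental_training(fake_losses.get))  # prints (9, 0.7)
```

The loop stops at epoch 9 because the improvement from 0.7 to 0.695 falls below the threshold, which matches the "stop at diminishing returns" heuristic; a real version would wire `eval_loss` to a Trainer run plus an evaluation pass.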

I'm not so sure about the learning rate, though. Is that comment yours? If not, I'd try leaving it as is.