Lab 2. Training parameters for the fine-tuned models from AWS

In the lab, after we briefly fine-tune the model with the given training parameters, we are asked to download a fully fine-tuned model from AWS. But that downloaded model was likely fine-tuned with different parameters.
I would be very interested to see which parameters were used for both the full and the PEFT fine-tuning.
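For the PEFT case, it would help to see at least the LoRA configuration. A minimal sketch of what such a configuration looks like with the Hugging Face `peft` library is below; every value here is illustrative (my assumption), not the configuration AWS actually used, and `target_modules` depends on the base model's layer names:

```python
from peft import LoraConfig, TaskType

# Illustrative LoRA hyperparameters -- NOT the ones used for the lab model.
lora_config = LoraConfig(
    r=32,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor for the updates
    target_modules=["q", "v"],  # attention projections; names vary by model (assumption)
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,  # assuming a seq2seq base model
)

# Wrapping a loaded base model would then look like:
# from peft import get_peft_model
# peft_model = get_peft_model(base_model, lora_config)
```

Knowing the real values of `r`, `lora_alpha`, and `target_modules` would make it possible to reproduce the downloaded checkpoint's setup.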
Actually, I found that AWS SageMaker Studio Lab, which is free, offers a more performant environment than this lab's, so I could try running a real training there.

Also, this course does not explain anything about the Trainer API or how to configure and use it. Are there any more advanced courses?
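For anyone else looking, the basic shape of a Trainer setup is fairly small. This is a sketch assuming the Hugging Face `transformers` library, with illustrative hyperparameter values and hypothetical `model` / dataset variables standing in for whatever the lab loads:

```python
from transformers import TrainingArguments

# Illustrative hyperparameters -- adjust for your model and hardware.
training_args = TrainingArguments(
    output_dir="./ft-output",        # where checkpoints and logs are written
    learning_rate=1e-5,
    num_train_epochs=1,
    per_device_train_batch_size=8,
    weight_decay=0.01,
    logging_steps=10,
)

# With a model and tokenized datasets already loaded (assumed here),
# the Trainer ties everything together:
# from transformers import Trainer
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
# )
# trainer.train()
```

The `TrainingArguments` object is where essentially all training configuration lives; the Trainer itself mostly just needs the model and datasets.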

You may try some of the free short courses that we have here at DLAI; maybe they offer some alternatives.