After upgrading to a better GPU (a 32 GB V100), how should the training parameters be adjusted? I've run into a few issues.

In the course environment, only about 2 GB of GPU memory could actually be allocated. How can I tune the parameters to improve GPU utilization and, in turn, training efficiency?
I tried setting per_device_train_batch_size=2, but it raised an error.
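For context, here is the relationship I understand between the knobs involved (the parameter names mirror Hugging Face `TrainingArguments`; the helper function itself is just an illustrative sketch, not library code). If raising `per_device_train_batch_size` triggers out-of-memory errors, `gradient_accumulation_steps` can raise the effective batch size without increasing per-step memory:

```python
def effective_batch_size(per_device_train_batch_size: int,
                         gradient_accumulation_steps: int = 1,
                         num_devices: int = 1) -> int:
    """Effective (global) batch size seen by the optimizer.

    Memory per step scales with per_device_train_batch_size only;
    accumulation trades compute time for a larger effective batch.
    """
    return (per_device_train_batch_size
            * gradient_accumulation_steps
            * num_devices)

# E.g. per-device batch 2 with 8 accumulation steps on one V100
# behaves like a batch of 16 for the optimizer update.
print(effective_batch_size(2, gradient_accumulation_steps=8))
```

So one question I have: on a 32 GB card, is it better to raise the per-device batch size directly, or keep it small and accumulate?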

Also, if I want to try fine-tuning larger models, do you have any recommendations?