Here we are shown how to do PEFT and LoRA with the google/Flan-T5 model using the HuggingFace Transformers lib, which makes it easy to fine-tune a model using the TrainingArguments and Trainer.train paradigm. But how do we fine-tune GPT-3 with PEFT / LoRA? I don't see this being supported by the HuggingFace lib. I know we can instruction fine-tune GPT-3, but that is not the same as PEFT, which updates model parameters directly. I see a model in the HuggingFace models repo called openai-gpt, but that is not quite the same as GPT-3, I believe. Please explain.
Hi!
OpenAI’s GPT-3 can only be fine-tuned through their fine-tuning API. OpenAI’s model weights are not available for download at this point, and parameter-efficient methods like LoRA need direct access to the weights, so they cannot be applied to GPT-3. (Also, the openai-gpt checkpoint on the Hub is the original GPT-1, not GPT-3.) You may want to check whether your task can be handled well enough through OpenAI’s fine-tuning API.
If you would like to do something like this on a model that rivals GPT-3, however, you do have that option. There are quite a few LLaMA-style models being released these days whose weights you can download and train, such as Alpaca and OpenLLaMA.
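To see why weight access matters, here is a minimal from-scratch sketch of the LoRA update in plain PyTorch. This is illustrative only (the class name and hyperparameters are made up for the example, and it is not the `peft` library API); in practice you would use `peft.LoraConfig` and `peft.get_peft_model` on one of these open models.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear and add a trainable low-rank update.

    Forward pass: y = W x + (alpha / r) * B A x,
    where A is (r, in_features) and B is (out_features, r).
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A gets a small random init; B starts at zero, so the wrapped
        # layer initially computes exactly what the base layer does.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(1024, 1024), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # only the two small LoRA matrices train
```

The point is that building `lora_A` and `lora_B` requires knowing the base layer's shapes and running gradients through its weights — which is exactly what a closed API does not give you.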
Other good options are BLOOM, Falcon, and MosaicML's MPT models. You may want to check them out; they rank highly on the open LLM leaderboards.