Thanks to the DeepLearning.ai team for presenting the wonderful course “Generative AI with LLMs”.
I have a question: can we **fine-tune the Flan-T5-xxl**
model with the QLoRA + Optimum + DeepSpeed + Accelerate + PEFT libraries, in order to fine-tune on more data faster and more efficiently?
Thank you
You want to use several techniques at the same time to fine-tune, am I understanding that right?
Yes sir, you got it right. I’m familiar with QLoRA via PEFT, but I have no idea how to integrate it with Optimum and DeepSpeed.
I don’t know how to put them together either, and I also doubt whether combining them is worth the gain compared to using just one of them at its best.
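For anyone following along: the core idea behind LoRA (which QLoRA builds on, and which PEFT implements) is that the frozen pretrained weight gets a small trainable low-rank update. Here is a minimal numpy sketch of that math, with toy dimensions rather than a real transformer layer:

```python
import numpy as np

# Toy dimensions for illustration; real LLM layers are far larger.
d_in, d_out, r = 8, 8, 2    # r is the LoRA rank
alpha = 4                   # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight

# LoRA trains only the low-rank factors A and B:
#   W_effective = W + (alpha / r) * B @ A
A = rng.standard_normal((r, d_in)) * 0.01   # initialized small
B = np.zeros((d_out, r))                    # initialized to zero

def lora_forward(x):
    # Frozen path plus the scaled low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapter contributes nothing at first,
# so fine-tuning begins exactly from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)
```

QLoRA keeps this same adapter structure but stores the frozen `W` in 4-bit precision, which is what makes fine-tuning very large models fit in memory.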
When we scale our model to half precision (FP16) with PEFT,
does that make use of QAT (Quantization-Aware Training)? If not, how do we perform it?
If the model was at a higher precision before (32-bit), then reducing it to 16-bit is a kind of quantization.
How do we perform fine-tuning with QAT from FP32 to FP16?
Haven’t done it, but first you need to quantize the model and then do the fine-tuning.
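To make the FP32 → FP16 point concrete, here is a small numpy sketch showing that casting weights to half precision keeps values only approximately, which is the rounding error that quantization-aware approaches train the model to tolerate:

```python
import numpy as np

rng = np.random.default_rng(0)
w_fp32 = rng.standard_normal(1000).astype(np.float32)

# Casting FP32 weights down to FP16 rounds each value to the
# nearest representable half-precision number.
w_fp16 = w_fp32.astype(np.float16)

# Measure the worst-case rounding error introduced by the cast.
max_err = np.max(np.abs(w_fp32 - w_fp16.astype(np.float32)))

# FP16 has a 10-bit mantissa, so for values around 1.0 the
# rounding error is tiny but nonzero.
assert 0.0 < max_err < 1e-2
```

This is post-hoc casting, not QAT; in QAT the forward pass simulates this rounding during training so the weights learn to compensate for it.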
How do we quantize an LLM? Any source that I can follow?
You can search for it on Google; the MLOps specialization gives some guidance about quantization!
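Until you find a full tutorial, here is a minimal sketch of the simplest scheme, symmetric per-tensor int8 quantization, in plain numpy. This is the textbook idea only, not how PEFT or bitsandbytes implement it internally:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.max(np.abs(w)) / 127.0        # map the largest weight to ±127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding moves each value by at most half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Real LLM quantization schemes (e.g. the 4-bit NF4 format used by QLoRA) refine this basic recipe with per-block scales and non-uniform quantization levels, but the quantize/dequantize round-trip above is the core mechanism.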
Okay, thank you sir for your attention so far.
Will any additional modules be added to this course, where pretraining our own model from scratch is taught?