Is QLoRA a form of QAT?

In QLoRA the base model's parameters are quantized and frozen, and a small number of trainable low-rank adapters are added.
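For reference, here is a minimal sketch of that setup with the usual Hugging Face stack (transformers + peft + bitsandbytes); the model id and LoRA hyperparameters are just placeholders, not a recommendation:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the base model to 4-bit NF4; these weights stay frozen.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable low-rank adapters; only these receive gradients.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # trainable fraction is typically well under 1%
```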

So I want to check whether my understanding is correct:
is QLoRA a form of Quantization-Aware Training (QAT)?

Thank you :slight_smile:

Yes, it is a QAT-related method: the fine-tuning forward pass runs through the 4-bit quantized base weights, so the adapters learn to work with (and compensate for) the quantization error. The difference from classical QAT is that the quantized base weights stay frozen and are never updated themselves.
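For contrast, classical QAT usually simulates quantization in the forward pass and passes gradients through with a straight-through estimator, so the underlying full-precision weights themselves get trained. A toy sketch (symmetric 4-bit fake quantization; all values illustrative, not any particular library's API):

```python
import torch

class FakeQuant4Bit(torch.autograd.Function):
    """Toy symmetric 4-bit fake quantizer with a straight-through estimator."""

    @staticmethod
    def forward(ctx, w, scale):
        # Simulate 4-bit quantization in the forward pass (levels -8..7).
        return torch.clamp(torch.round(w / scale), -8, 7) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: treat the quantizer as identity on the backward
        # pass, so the full-precision weights still receive gradients.
        return grad_output, None

w = torch.randn(4, 4, requires_grad=True)
scale = w.detach().abs().max() / 7
y = FakeQuant4Bit.apply(w, scale)
y.sum().backward()
print(w.grad)  # non-zero: in classical QAT the weights themselves train
```

In QLoRA, by contrast, nothing flows back into the quantized weights; only the low-rank adapters are optimized against the quantized forward pass.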