LoRA compared to full fine-tuning

During training, does a LoRA-adapted model require less computation in the forward pass compared to a fully fine-tuned model?
My understanding is that the real computational savings come from the backward pass rather than the forward pass.

week-module-2

Your understanding is right. The forward pass is actually slightly *more* expensive with LoRA, because the input still flows through the full frozen weight matrix and additionally through the low-rank adapter path (though the adapter's cost is tiny relative to the base layer, and the adapters can be merged into the weights after training). The savings come from the backward pass and from memory: gradients and optimizer states only need to be kept for the small LoRA matrices, while the frozen base parameters get no gradient updates and no optimizer state.
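A minimal NumPy sketch of a LoRA linear layer may make this concrete. The shapes and scaling here are illustrative assumptions, not tied to any particular library; the point is that the forward pass computes the base path *and* the adapter path, but the adapter path is much cheaper:

```python
import numpy as np

# Illustrative dimensions (assumed, not from any specific model).
d_in, d_out, r = 512, 512, 8  # r is the LoRA rank, r << d_in, d_out

rng = np.random.default_rng(0)
W = rng.standard_normal((d_in, d_out))      # frozen base weight (no gradients needed)
A = rng.standard_normal((d_in, r)) * 0.01   # trainable LoRA factor
B = np.zeros((r, d_out))                    # trainable LoRA factor (zero init,
                                            # so the adapter starts as a no-op)

x = rng.standard_normal((1, d_in))

# Forward pass: the base path PLUS the low-rank adapter path.
y = x @ W + (x @ A) @ B

# Rough multiply-add counts per token:
base_flops = d_in * d_out           # cost of x @ W
lora_flops = d_in * r + r * d_out   # cost of (x @ A) @ B — far smaller
```

So the adapter adds forward-pass work rather than removing any; the backward pass is where LoRA saves, since only `A` and `B` need gradients and optimizer states.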