Loss increasing when training diffusion models with LoRA

Alongside the course, I'm training and improving some models for the practical part.
When training a LoRA to fine-tune Stable Diffusion, the loss does not always decrease: from one epoch to the next it sometimes increases a little and then decreases again.

In Week 2, "Choosing the learning rate", it was said that when the cost increases it is a sign that the learning rate is too high. Is this still true for latent diffusion models?

Yes, it is still true for latent diffusion models: a sustained increase in the loss is a sign that the learning rate is too high. Keep in mind, though, that the diffusion training loss is inherently noisy, because every batch samples random timesteps and random noise, so small epoch-to-epoch fluctuations are normal even when training is going well.
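To make the second point concrete, here is a minimal sketch of the standard noise-prediction training loss, assuming a diffusers-style setup (a `UNet2DConditionModel`-like `unet` and a `DDPMScheduler`-like `noise_scheduler`; the variable names are hypothetical). The fresh noise and random timesteps drawn at every step are what make the raw loss curve jumpy:

```python
import torch
import torch.nn.functional as F

def diffusion_loss(unet, latents, text_emb, noise_scheduler):
    # Fresh Gaussian noise and uniformly random timesteps every batch --
    # two sources of variance that make the per-batch loss noisy even
    # when the model is steadily improving.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    # Predict the added noise and compare (epsilon-prediction objective).
    pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_emb).sample
    return F.mse_loss(pred, noise)
```

A practical consequence: judge the trend with a smoothed loss (for example, an exponential moving average over steps) rather than a single epoch's value, and only lower the learning rate if the smoothed loss keeps climbing.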


With this course, will I be able to learn how to use diffusion models and how to train them? I'm a beginner coming from the Machine Learning Specialization.

No, the Machine Learning Specialization doesn’t cover diffusion models.