Alongside the course, I'm training and improving some models for the practical part.
When training a LoRA to fine-tune Stable Diffusion, the loss does not always decrease: from one epoch to the next it sometimes goes up a little and then comes back down.
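To be clear about what I'm comparing, I'm looking at the average loss per epoch (and a smoothed version of it), roughly like this sketch; it's illustrative only, not my exact training script:

```python
from statistics import mean

def epoch_average(step_losses_per_epoch):
    """Average the per-step losses into one number per epoch."""
    return [mean(steps) for steps in step_losses_per_epoch]

def smooth(losses, beta=0.9):
    """Exponential moving average, to see the overall trend
    through the step-to-step noise."""
    ema, out = None, []
    for x in losses:
        ema = x if ema is None else beta * ema + (1 - beta) * x
        out.append(ema)
    return out
```

Even with this averaging, the epoch-to-epoch values still bounce around a bit instead of decreasing monotonically.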
In week 2, "Choosing the learning rate", it was said that when the cost increases it is a sign that the learning rate is too high. Is this still true for latent diffusion models?