C2_W2 Lab 1: 1 epoch is taking 100 sec


Yesterday I was working on the C2_W1 graded assignment, and training 1 epoch took me more than 8 minutes, so the full training of the network took more than 2 hours. I thought that was due to my network architecture or simply the large number of images (20,000).
But today I'm working on C2_W2 Lab 1, and training 1 epoch (only 2,000 images) takes 100 sec, while in the lecture video it takes only 10 sec.

I regularly get a notification that says: "Cannot connect to GPU backend. You cannot currently connect to a GPU due to usage limits in Colab."

Could this be the reason for the really slow performance?
Is there a way to improve it?
I didn't run into these issues with the C1 labs and assignments.
Thanks in advance

A GPU makes model training fast. Without one, Colab falls back to the CPU, and by default a free Google Colab account has only 2 CPU cores (use os.cpu_count() to check). That explains why your training is slow.
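A quick, hedged sketch of how you could check your runtime from a notebook cell, using only the standard library (in a TensorFlow notebook you could also call `tf.config.list_physical_devices("GPU")`):

```python
import os
import shutil

# Number of CPU cores available to the runtime
# (the Colab free tier typically reports 2)
print("CPU cores:", os.cpu_count())

# If the NVIDIA driver tools are on PATH, a GPU runtime is attached;
# otherwise training will run on the CPU
if shutil.which("nvidia-smi"):
    print("GPU runtime detected (nvidia-smi found)")
else:
    print("No GPU detected - training will run on the CPU")
```

If it reports no GPU, switch the runtime type (Runtime → Change runtime type → GPU) or wait until your usage quota resets.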

For about $10/month, you can upgrade to Colab Pro, which provides a higher GPU quota.

Thank you for the information.