Doubt during custom training

I am trying to do car image classification on a Kaggle dataset that contains 8K+ images and has 196 output classes.
I converted all the images to a
Now when I try to train MobileNet by removing its last layer, training takes a very long time,
like this

please help me to figure out the problem
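For reference, the setup described above (MobileNet with its classification head removed and a new 196-class output) can be sketched with tf.keras roughly like this. The input size, pooling layer, and optimizer are illustrative assumptions, not details from the original post:

```python
import tensorflow as tf

# Load MobileNet without its classification head ("removing the last layer").
# weights=None avoids a download here; in practice you would use "imagenet".
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),
    include_top=False,
    weights=None,
)
base.trainable = False  # freeze the backbone so only the new head trains

# Attach a new head for the 196 car classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(196, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone keeps the trainable parameter count small, which is usually the first thing to check when fine-tuning feels unexpectedly slow.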

Hi Aditya,

It is great that you are trying to apply your knowledge to different use cases. However, this is outside the scope of the course, so our priority is to attend first to questions about the contents of the Deep Learning Specialization exercises.
I recommend that you review the architecture of the model, perhaps by reading more of the literature.
Also, when iterations take this long, that might indicate a convergence problem.

Best and happy learning,

Ok, np ma'am.
Thank you.

Also note that 8K input images is a radically larger dataset than the ones they give us here in the courses. E.g. in the famous “cat/not cat” datasets back in Course 1, we had 209 training images and 50 test images. Training a neural network can be very time consuming precisely because it almost always takes a large training dataset to get good results. In fact you could legitimately wonder how we got such good results on the “cat recognition” problem with such a tiny dataset. Here’s a thread with some experiments to show that the 209 training + 50 test images are very carefully “curated” to get surprisingly good results with such a small training dataset.

Thank you, sir.
The problem is solved now.
Actually, the problem was not the size of the dataset, because I was using a GPU. The actual problem was that my image data was in Google Drive, and when I ran the notebook in Google Colab it was unable to fetch it for some reason. But as soon as I moved the dataset to a folder inside Colab, it worked.

Thank you, sir, for the thread; it is really amazing.
