C1_W4_Lab_1_image_generator_no_validation - no humans recognized

Running the model prediction on a number of files I downloaded from Pixabay, I see that no human images are recognized.
I didn’t make any changes to the initial Colab files.
What might be the reason, and how can I fix it?



The main reason is probably that those Pixabay images are not similar to (not from the same distribution as) the ones the model was originally trained on.

It would be good to train the model further with a set of images from Pixabay, or to include them in the training set alongside the other images.
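As a toy sketch of that second option (the file names here are hypothetical, and the actual lab feeds whole directories to a data generator rather than file lists), the idea is just to pool both sources and shuffle them before training, so every batch mixes the two distributions:

```python
import random

# Hypothetical file lists: the original training images plus the new Pixabay ones.
original_images = [f"train/human_{i}.png" for i in range(5)]
pixabay_images = [f"pixabay/person_{i}.jpg" for i in range(3)]

# Pool both sources and shuffle so training batches mix the two distributions.
combined = original_images + pixabay_images
random.seed(0)        # fixed seed just to make this example reproducible
random.shuffle(combined)

print(len(combined))  # 8 images in the combined training pool
```

With the directory-based generator used in the lab, the equivalent move is simply copying the new images into the corresponding class subfolders before training.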


Thank you.
Actually, I did exactly what Laurence did (he also used Pixabay files). I have to say it is a bit disappointing: I used 6 human photos and none were recognized, even though the training accuracy is more than 98%.

Seems like the images you’re using for testing are not similar enough to the images used for training.

Maybe you could post a few examples as thumbnail images? Otherwise all @gent.spah and I can do is speculate.


Hi, attaching …

Are those training images, or the ones you’re trying to classify?

These are new ones from the site Laurence used.

What we asked is whether those images are similar to the images in the training set.

I’m not sure I understand the question. Could you please clarify?

The essential concept we’re suggesting is that you can only make good predictions if the images you’re using are statistically similar to the images that were used in training.

For example:

  • If the training images were all black-and-white images that were captured in the daytime, that model isn’t going to be able to make predictions using color images that were photographed at night.

  • If the training image backgrounds were all uniform neutral colors, then images with complex backgrounds would not be handled very well in making predictions.

This just illustrates the general concept we’re asking you about.

So what we’re suggesting is that you do some comparison between the training set images and the images you’re using for predictions.

Do you mean visual comparison or maybe there are automatic tools for that?

I mean your own personal observations. Did you look at whether the images you’re using for predictions are similar to the training set?
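That said, if you do want something quick and automatic, a rough check is easy to sketch. This example uses synthetic arrays as stand-ins for real images (since I don’t have your data): it compares per-channel pixel statistics between two sets, where a large gap suggests a distribution mismatch.

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a batch of H x W x 3 arrays scaled to [0, 1]."""
    stacked = np.stack(images).reshape(-1, 3)  # flatten to (pixels, channels)
    return stacked.mean(axis=0), stacked.std(axis=0)

rng = np.random.default_rng(0)
# Stand-ins for real data: "training-like" images are bright, "new" ones dark.
train_imgs = [rng.uniform(0.6, 1.0, (32, 32, 3)) for _ in range(10)]
new_imgs = [rng.uniform(0.0, 0.4, (32, 32, 3)) for _ in range(10)]

train_mean, _ = channel_stats(train_imgs)
new_mean, _ = channel_stats(new_imgs)
print(np.abs(train_mean - new_mean))  # large per-channel gap -> likely shift
```

This only catches gross differences in brightness or color balance; it won’t detect mismatches in content, backgrounds, or pose, which is why looking at the images yourself is still the first step.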

I don’t think they are similar, but I didn’t see all the training images. It doesn’t matter; I’m already on the second course, and I hope the issue will be explained in the upcoming lectures :). Thanks a lot for your time!