ResNet50 fails on "my own image"

The pre-trained ResNet50 model in Lab 1 of Week 2 reportedly reaches about 95% accuracy, but it performs poorly on my own pictures. For example, this is the output for the image shown:

Input image shape: (1, 64, 64, 3)
Class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] =
[[5.5017262e-07 3.0772483e-07 5.2745468e-06 9.9707901e-01 2.8557680e-03 5.9126767e-05]]
Class: 3
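For anyone puzzled by how "Class: 3" follows from that vector: the reported class is just the argmax of the prediction vector. A minimal sketch in NumPy, using the probabilities printed above:

```python
import numpy as np

# Prediction vector copied from the output above: shape (1, 6),
# one probability per class 0..5
pred = np.array([[5.5017262e-07, 3.0772483e-07, 5.2745468e-06,
                  9.9707901e-01, 2.8557680e-03, 5.9126767e-05]])

predicted_class = int(np.argmax(pred, axis=1)[0])  # index of the largest probability
confidence = float(pred[0, predicted_class])       # ~0.997 here

print(predicted_class)  # 3
```

So the model is not merely wrong on these images, it is wrong with very high confidence, which is what makes the failure so striking.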

The model also fails with other images: a "one" sign is classified as class 0.
After I trained the model myself for 12 epochs (reaching a test accuracy of 97%), the performance on my own images was equally poor.
The elementary logistic model from Week 1 does much better.

Yeah, same here. I tried a couple of my own images with homogeneous backgrounds and got terrible results.

Same, I tried a couple of different pictures and gave up on it.

Same here. I tried multiple images at all rotations in multiples of 90° (i.e. 90, 180, 270). All yield poor results.

Hi, same here. Can a mentor please help out with this?

Answering my own question. After I progressed to the ResNet50 model (Lab, Week 2), I applied it to a problem of my own (a categorical problem with 10 classes). The performance was poor in both training and testing until I removed a few of the deepest layers in the model, reducing the number of parameters from 25 million to about 10 million. With the smaller (shallower) model I reached 80% accuracy in training, and somewhat less in testing. In short: one of the reasons these models fail is what Prof. Ng calls "overfitting". Reducing the number of "neurons"/parameters can be a way to improve performance.
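In case it helps anyone, here is a sketch of the layer-removal idea in Keras. The cut point (`conv3_block4_out`) and the 10-class head are illustrative choices of mine, not necessarily the exact layers the course lab uses; `weights=None` just avoids downloading pre-trained weights for the sketch:

```python
import tensorflow as tf

# Full ResNet50 backbone (no classifier head), sized for 64x64 RGB inputs
full = tf.keras.applications.ResNet50(
    weights=None, include_top=False, input_shape=(64, 64, 3))

# Cut at an intermediate stage, discarding the deepest residual blocks
cut = full.get_layer("conv3_block4_out").output
x = tf.keras.layers.GlobalAveragePooling2D()(cut)
out = tf.keras.layers.Dense(10, activation="softmax")(x)  # my 10-class problem

small = tf.keras.Model(inputs=full.input, outputs=out)

# The truncated model has far fewer parameters than the full backbone
print(full.count_params(), small.count_params())
```

Fewer parameters means less capacity to memorize the training set, which is why the shallower model generalized better for me.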