C4 W2 A2 Residual Networks (Optional/Ungraded Exercise): Low Accuracy on user images

To test the ResNet we implemented in Week 2's Programming Assignment 1, I used images of my own hand, and the model performed poorly. Given the text in the next cell of the notebook, it seems the author of the assignment anticipated this. My images were taken against a clear, light background, but the model still did not perform well. The author hints that the problem might be related to differing distributions and prompts us for a possible solution. I would like to ask:

  1. Why do you think that the model might be performing poorly?
  2. What are the distributions that the author might be hinting at?
  3. What would be the potential solution to address model’s poor performance on user data?

For the third question, one solution that comes to mind is to use transfer learning to retrain the final few layers on user data. What do you think?

Training a model that succeeds at image recognition tasks takes a much larger dataset than we can afford to use here, because of the limitations of the Jupyter Notebook environment provided with the courses: we are constrained in both memory and CPU/GPU resources. You'll have much better luck with the transfer learning approach, starting from a "real" model that has been trained on a "real world" dataset. TF/Keras provides a number of pre-trained models; have a look at the catalog.

Then take a look at the second assignment here in DLS C4 W2 about Transfer Learning with MobileNet. In fact, you could use MobileNet as your base model. Try to adapt that notebook to your image task in a similar manner to the way it uses MobileNet to recognize alpacas.
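To make that concrete, here is a minimal sketch of the transfer-learning setup, assuming a hand-sign task with 6 classes (digits 0–5, as in the SIGNS dataset) and MobileNetV2 as the frozen base. The class count, image size, and dataset names are illustrative assumptions, not the assignment's exact code:

```python
# Hedged sketch: transfer learning with a frozen MobileNetV2 base.
# Assumptions: 6 output classes (hand signs 0-5) and 160x160 RGB inputs.
import tensorflow as tf

NUM_CLASSES = 6          # assumed: SIGNS-style digits 0-5
IMG_SIZE = (160, 160)    # assumed: a resolution MobileNetV2 handles well

# Load ImageNet-pretrained features, dropping the original classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,),
    include_top=False,
    weights="imagenet")
base.trainable = False   # freeze pre-trained weights

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)              # keep BatchNorm in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds
# are hypothetical tf.data datasets of your own hand images.
```

Only the small classification head is trained on your images; once that converges, you can optionally unfreeze the top of the base model and fine-tune with a much lower learning rate.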
