Improving the L-layer Deep Network further

After completing the final exercise in Week 4, I tested the model with my own images. In the process, I realized that the model correctly distinguishes simple cat images from images with no animal in them, but when I tried to trick it with non-cat images containing the faces of other animals, it failed to classify them correctly. For example, I entered a meerkat image and the model classified it as a cat.


Then I found it can't even distinguish between a cat and a dog.

I am wondering what I can do to make this algorithm more robust. Should I inject more such confusing images (with labels) into the training examples, or do I need to design new features? Seeking guidance.

@paulinpaloalto Need guidance.

Hello Pankit :slight_smile:

The models in this course are of course very basic, and you will see huge improvements with just a little more theory once you get to CNNs.

But assuming you want to do it with only the tools you have, your intuition is good, and it is what you would do in practice: you have trained a model and it looks good, but when you deploy it your client reports failures, so you examine where your model fails and think about how to improve it to fix the problem (which you will learn in Course 2). So if meerkats are misclassified, you can train your model with more pictures that include meerkats. If that fails, then you will have to think about what else could work (add more layers? more nodes? is my test set distributed differently from my training set? ...).

I hope that helps :slight_smile:


Hi Nicolas, is there any way I can edit the training dataset and add more images to get the model used to confusing images? If yes, how can I do it? It would also be helpful if you could provide code with which I can run my own train and test datasets. :slightly_smiling_face:

Of course, you can :slight_smile: Let’s have a look at the notebook:

The second cell is where you load the images and classes. The images are already split into training and test sets, with their corresponding classes. TensorFlow (and PyTorch, etc.) have built-in tools that help you create random training and test sets with appropriate labels by specifying which folder contains class 0, which contains class 1, and so on.
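For example, a minimal sketch with TensorFlow could look like this (the "data" folder path, the sub-folder names, and the 20% split are assumptions for illustration):

```python
import tensorflow as tf

# Assumes images are sorted into sub-folders named after their classes,
# e.g. data/cat/... and data/non-cat/... ("data" is a placeholder path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data",
    labels="inferred",       # the class index comes from the sub-folder name
    validation_split=0.2,    # hold out a random 20% of the images
    subset="training",
    seed=42,                 # same seed in both calls keeps the split consistent
    image_size=(64, 64),     # resize everything to the notebook's 64x64 format
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data",
    labels="inferred",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(64, 64),
)
```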

For the sake of simplicity, let us just extend the training set with new pictures at the end (quite bad in practice, because then your training and test sets no longer come from the same distribution, but it's only to understand how to code it :slight_smile: )
When you examine the type of the objects (add one cell with print(type(train_x_orig[index])) after the next cell, for instance), you see that it's a NumPy array. print(train_x_orig[index].shape) tells you that the image format is 64x64x3, and the dtype is 'uint8' (8-bit integers).
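Such an inspection cell could look like this (the values in the comments are what the notebook's dataset should print):

```python
# Inspect one training example; add this as a new cell after the data is loaded.
index = 0
print(type(train_x_orig[index]))   # <class 'numpy.ndarray'>
print(train_x_orig[index].shape)   # (64, 64, 3): height x width x RGB channels
print(train_x_orig[index].dtype)   # uint8 (8-bit integers)
```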

So you can import your images using any library you want (PIL, SciPy, ...) and convert them into a NumPy array. Then you have to decide whether to crop the image, resize it, or both, to get a 64x64 format. Make sure that the channels are RGB (for OpenCV users: images are loaded in BGR format).
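With PIL, for instance, importing one picture could look like this sketch ("my_meerkat.jpg" is a placeholder filename):

```python
import numpy as np
from PIL import Image

# Sketch of importing one picture with PIL; "my_meerkat.jpg" is a placeholder.
img = Image.open("my_meerkat.jpg").convert("RGB")  # make sure channels are RGB
img = img.resize((64, 64))                         # resize to the dataset format
arr = np.asarray(img, dtype=np.uint8)              # shape (64, 64, 3), dtype uint8
```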

After that, keep in mind that you can't expand a NumPy array in place; you have to make a new one. So you create a template with np.zeros that fits the total number of examples, assign new_array[:m] = train_x_orig and new_array[m:] = my_images, and do the same with the array of classes.
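Put together, a sketch of that step (my_images and my_labels are hypothetical arrays holding your imported pictures and their 0/1 classes; I'm assuming the labels are stored as a (1, m) array, as in the notebook):

```python
import numpy as np

# my_images: hypothetical array of shape (k, 64, 64, 3), dtype uint8
# my_labels: hypothetical array of shape (k,) with 0/1 classes
m = train_x_orig.shape[0]                      # original number of examples
k = my_images.shape[0]                         # number of new examples

new_x = np.zeros((m + k, 64, 64, 3), dtype=np.uint8)
new_x[:m] = train_x_orig                       # copy the original images
new_x[m:] = my_images                          # append the new ones

new_y = np.zeros((1, m + k), dtype=train_y.dtype)
new_y[0, :m] = train_y[0]                      # original labels
new_y[0, m:] = my_labels                       # labels for the new pictures
```

(np.concatenate would do the same in one step; the np.zeros template just makes the copying explicit.)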

And after that, I think everything is ready to train your model further!
