While testing the image classifier with my own images, it fails to classify them correctly. I tested it with 3 pictures: two of cats (which were classified as 0, i.e. "not cat") and one of a dog (which was classified as 1, i.e. "cat"). How can I improve it?
Your images must very closely match the characteristics of the images used in the training set.
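One common source of failure is preprocessing mismatch rather than the model itself: the assignment feeds the network 64x64 RGB images, flattened into a column vector and scaled to [0, 1], so your own photos need the same treatment before prediction. Here is a minimal sketch of that preprocessing (the function name is illustrative, and the nearest-neighbor resize is a stand-in for whatever resizing library you prefer, e.g. PIL):

```python
import numpy as np

def preprocess_image(img, size=64):
    """Resize an (H, W, 3) uint8 image to (size, size, 3) with a simple
    nearest-neighbor sampling, then flatten and scale it the way the
    assignment preprocesses its training set: column vector, pixels / 255."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # which source rows to sample
    cols = np.arange(size) * w // size   # which source columns to sample
    resized = img[rows][:, cols]         # shape (size, size, 3)
    # Flatten to (size*size*3, 1) and normalize pixel values to [0, 1]
    return resized.reshape(-1, 1).astype(np.float64) / 255.0

# Example with a synthetic "photo" standing in for a real image file
photo = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
x = preprocess_image(photo)
print(x.shape)  # (12288, 1), i.e. 64*64*3 features, ready for predict()
```

Even with correct preprocessing, though, a photo whose lighting, framing, or subject differs much from the curated training images can still be misclassified, which brings us to the deeper issue below.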
The datasets we use here are very small compared to what you would need to train a classifier that works in general on a problem this complex. Prof Ng is making two points here: Logistic Regression is the first step, and you can think of it as a "trivial" Neural Network with just the output layer. Then he'll show us how to build a real multilayer Neural Network to solve the same problem, and we'll see that it works significantly better with the same training data. You'll see that in Week 4 of the course.
But even with the full NN solution, it still doesn't work well on general images, because the training set is incredibly small: only 209 training samples and 50 test samples. For comparison, the famous Kaggle Cats and Dogs challenge provides a dataset with O(10^4) samples. In fact, you can even turn the question around and ask how it manages to do so well given such a small training set. It turns out the data is pretty carefully curated to get that result. Here's a thread which discusses that point and shows some experiments, but the best idea is probably to "hold that thought" until you get through Week 4 of the course and then come back and read that thread in detail.