# Course 1 : Week 2 - Part 7 - Test with your own image

I have completed the other six parts of the Logistic Regression with a Neural Network Mindset assignment and wanted to test images of my own. However, scipy no longer supports `scipy.ndimage.imread`, so I tried a different method (below), but I mostly get wrong predictions for my cat image. I resized my image to (64, 64) and flattened it, but I don't know where I went wrong, since the model keeps predicting my real-cat picture as "non-cat".

Here is my code for Part 7:
```python
# Preprocessing the image
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# d, predict, and classes come from the earlier parts of the notebook

num_px = 64
my_image = "cat_6.jpg"

# use PIL to open and resize the image
# (note: resize takes a single (width, height) tuple)
image = Image.open(my_image).resize((num_px, num_px))

print(image.format)  # None (format is lost after resize)
print(image.size)    # (64, 64)
print(image.mode)    # RGB

# convert to a NumPy array and scale values to [0, 1]
my_image = np.array(image, dtype=float) / 255

# flatten to a column vector of shape (num_px * num_px * 3, 1)
my_image = my_image.reshape(1, -1).T
print(my_image.shape)  # (12288, 1)

my_predicted_image = predict(d["w"], d["b"], my_image)

plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image))
      + ", your algorithm predicts a \""
      + classes[int(np.squeeze(my_predicted_image))].decode("utf-8")
      + "\" picture.")
```
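For reference, the `predict` helper from the assignment is essentially a sigmoid of a linear score thresholded at 0.5. Here is a minimal sketch of that idea (not the exact course code; it assumes `w` has shape `(n_features, 1)` and `X` holds one example per column, matching the `(12288, 1)` shape above):

```python
import numpy as np

def sigmoid(z):
    # numerically standard logistic function
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, X):
    """Predict 0/1 labels for each column of X using learned (w, b).

    w : weights, shape (n_features, 1)
    b : bias, a scalar
    X : data, shape (n_features, n_examples)
    """
    A = sigmoid(np.dot(w.T, X) + b)   # probabilities, shape (1, n_examples)
    return (A > 0.5).astype(float)    # threshold at 0.5

# tiny usage example with made-up weights: w = 0 and b = 1
# means every score is sigmoid(1) ≈ 0.73, so every prediction is 1
w = np.zeros((4, 1))
b = 1.0
X = np.random.rand(4, 3)
print(predict(w, b, X))  # → [[1. 1. 1.]]
```

If your own `predict` matches this shape convention, the `(12288, 1)` input above is what it expects for a single image.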

**Here's what I got:**

Hi @Nhattan,

Did you try running this on your local machine?

Most likely the problem is just that the model we trained here does not generalize very well. That is for two reasons:

1. Logistic Regression is just not powerful enough for this complex a recognition task: it can only do “linear separations” in the input space. We’ll get better results in Week 4 with real Neural Networks, but then we still don’t get a really general model because …
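To see that "linear separations" limitation concretely, here is a small NumPy-only sketch (not course code) that trains logistic regression by gradient descent on the XOR pattern, which no straight line can separate:

```python
import numpy as np

# XOR: the two classes sit on opposite corners of a square,
# so no single line through the plane separates them
X = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)   # shape (2, 4), one column per point
Y = np.array([[0, 1, 1, 0]], dtype=float)   # XOR labels, shape (1, 4)

w = np.zeros((2, 1))
b = 0.0
lr = 0.5

for _ in range(2000):
    A = 1.0 / (1.0 + np.exp(-(w.T @ X + b)))  # sigmoid activations, shape (1, 4)
    dZ = A - Y                                # gradient of the logistic loss
    w -= lr * (X @ dZ.T) / 4                  # average gradient over the 4 points
    b -= lr * float(dZ.mean())

preds = (1.0 / (1.0 + np.exp(-(w.T @ X + b))) > 0.5).astype(float)
acc = float((preds == Y).mean())
print("XOR accuracy:", acc)  # → 0.5: the symmetric data leaves gradient descent at w = 0
```

A linear classifier can get at most 3 of the 4 XOR points right; a one-hidden-layer network (Week 4 material) solves it exactly.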

2. The dataset here is tiny compared to what you would need to really get a general model for as complex a recognition task as this. We have 209 training samples and 50 test samples. For comparison, the Kaggle Dogs vs. Cats dataset has 25k samples. The problem is that the course is severely limited by the constraints of its online notebook environment: we don't have GPUs available for training, so everything has to be kept small.

In fact, you can flip the question around and ask how they were able to get such good results on the test data here. It turns out that the datasets were “curated” pretty carefully to work this well. Here’s another thread that shows some experiments with the “balance” of the datasets.

Thank you so much for your answer! I’ve been stuck on this for a while and can’t find the answers anywhere else