I tried using my model to predict/classify, but it said:
ValueError: Input 0 of layer "sequential_2" is incompatible with the layer: expected shape=(None, 45, 45, 3), found shape=(None, 45, 3)
even though when I took the shape of the input, it was (45,45,3) not (None,45,3)
This is my code:
data = list(uploaded.keys())
image = cv2.imread(data[0])
img_reverted = cv2.bitwise_not(image)
print(np.shape(preprocess_input(img_reverted)))
preds = model.predict(preprocess_input(img_reverted))
print('Predicted:', decode_predictions(preds, top=3)[0])
Hey @sickopickle,
By any chance, is the batch size 45, and are you perhaps trying to include that in the shape of the input layer? Also, if possible, can you please share your code as an .ipynb notebook, so that I can try to run it myself and see what the exact issue is? If you have your own dataset, do provide that as well, so that I can load the files inside the notebook.
Cheers,
Elemento
Perhaps you can try changing the input data to
np.expand_dims(preprocess_input(img_reverted), axis=0)
This should give you a shape of (1, 45, 45, 3). The 1 there represents one sample.
It looks like you are trying to run the model on a single input sample, but it expects a batch or minibatch, which is a 4D tensor. So you have to expand it as Raymond shows in order for it to have the first dimension be the “samples” dimension, thus converting it into a minibatch containing one element.
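To make the expand_dims fix concrete, here is a minimal sketch using a dummy zero array in place of the actual preprocessed image (the shapes match the ones in the question above):

```python
import numpy as np

# Hypothetical single image with the same shape as in the question: (45, 45, 3).
img = np.zeros((45, 45, 3), dtype=np.float32)

# Add a leading "samples" axis so the model sees a minibatch of one.
batch = np.expand_dims(img, axis=0)
print(batch.shape)  # (1, 45, 45, 3)
```

Passing `batch` instead of `img` to `model.predict` gives the model the 4D tensor it expects.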
Thanks! That was a silly mistake lol
We saw examples of this usage in several of the assignments, e.g. Transfer Learning with MobileNet (C4 W2 A2). Search that notebook for expand_dims to see another example of how to use that function in a case like this.
Thanks for the help. I was also wondering how I split the train and test sets. In the course I took, they just used a pre-made function to do so, but I don’t know how to from scratch.
There are various ways to do that. In numpy you can use np.random.permutation to shuffle the index values and then select elements from an array. There is an example of how to do that in the Optimization Methods assignment in Course 2 Week 2. See the part where they select the random minibatches. But if you are working in TF, there are functions that will take care of that. There is an example of using image_dataset_from_directory to do that in that same Transfer Learning assignment (C4 W2 A2) that I mentioned in my reply above. Have a look!
There is also some sample code on this thread that shows how to use np.random.permutation to split up a dataset.
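As one way to apply the np.random.permutation approach mentioned above, here is a minimal sketch of a from-scratch split on a hypothetical toy dataset (the array sizes and the 80/20 ratio are just placeholders):

```python
import numpy as np

# Hypothetical dataset: 10 samples, 4 features each, with integer labels.
X = np.arange(40).reshape(10, 4)
y = np.arange(10)

rng = np.random.default_rng(0)     # seeded so the split is reproducible
perm = rng.permutation(len(X))     # shuffled index values 0..9

split = int(0.8 * len(X))          # 80% train, 20% test
train_idx, test_idx = perm[:split], perm[split:]

# Select the shuffled rows from both the features and the labels.
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(X_train.shape, X_test.shape)  # (8, 4) (2, 4)
```

Indexing X and y with the same permutation keeps each sample paired with its label after shuffling.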
Thank you for the help. I ended up using sklearn's train_test_split method. I have another question which I will put in another post, but I will ask it here first.
I got this error:
ValueError: decode_predictions
expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (2, 108)
I am confused, because the array is of the required shape. Samples = 2, classes = 108. My output is in the form of an array of one-hots, with zeros at every index but the class identified.
I am not familiar with decode_predictions. Why are they using 1000 in their error message? Is that just a generic thing? If not, then that might be one thing to consider in your analysis …
Looks like you are transfer-learning a NN that was originally trained for 1000 classes to your case, which has 108 classes. In that case, I think you can't use decode_predictions, as it is intended for the original 1000 classes. You should use the class names of your own 108 classes instead of relying on decode_predictions.
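One common way to decode your own predictions, sketched on hypothetical data matching the (2, 108) shape from the error message (the class names here are placeholders for your real label list):

```python
import numpy as np

# Hypothetical model output: 2 samples, 108 classes, one-hot style scores.
preds = np.zeros((2, 108))
preds[0, 5] = 1.0
preds[1, 100] = 1.0

# Your own class names, in the same order the model's output units were trained.
class_names = [f"class_{i}" for i in range(108)]

top = np.argmax(preds, axis=1)          # index of the highest score per sample
print([class_names[i] for i in top])    # ['class_5', 'class_100']
```

This replaces decode_predictions entirely: argmax picks the winning class index per row, and your own list maps it to a name.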