Can we feed Input images of different dimensions?

The videos I have gone through use examples of only one image dimension, say 64×64×3 or 128×128×3.
What if I have images of different dimensions? Can I use them to train the CNN, or are we bound to use images of one particular dimension for that CNN?

The image entering the model is constrained by the input layer for that model. If you resize the image to match that input layer, then you can use different images as inputs; or, if you build another model with a different input layer size, you can also use image sizes other than those mentioned in the videos.
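For example, here is a minimal sketch (assuming TensorFlow/Keras, which is not specified in this thread) showing that a CNN built with a fixed input shape only accepts images of that shape, and that resizing lets you feed it a differently sized image:

```python
import tensorflow as tf

# The Input layer fixes the shape this CNN accepts
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),           # model is built for 64x64x3
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

image_128 = tf.random.uniform((1, 128, 128, 3))   # a 128x128x3 image
# model(image_128)  # fails: the downstream layers were built for 64x64x3

# Resizing first makes the image acceptable to the same model
resized = tf.image.resize(image_128, (64, 64))
prediction = model(resized)
print(prediction.shape)  # (1, 1)
```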

I think I framed the question incorrectly.

Actually, I wanted to ask whether I can feed, say, a 64×64×3 image as well as a 128×128×3 image (or images of any other dimensions) to the same CNN.
For example, if I am building a cat classifier, not all my images will necessarily have the same dimensions.
In that case, how can I build a CNN that learns from images of any dimension it is fed as input?

You have to have a preprocessing function that resizes the images to the model's required input size; the model cannot change its input size on the fly.
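A hedged sketch of such a preprocessing step (assuming TensorFlow; the `IMG_SIZE` constant, the `preprocess` function, and the `cats/*.jpg` path are illustrative, not from the course):

```python
import tensorflow as tf

IMG_SIZE = (64, 64)  # whatever the model's input layer expects

def preprocess(path):
    """Load an image of any size and resize it to the model's input shape."""
    raw = tf.io.read_file(path)
    img = tf.io.decode_jpeg(raw, channels=3)   # shape (H, W, 3), any H and W
    img = tf.image.resize(img, IMG_SIZE)       # shape (64, 64, 3)
    return img / 255.0                         # scale pixel values to [0, 1]

# Example: build a dataset where every image ends up the same size
paths = tf.data.Dataset.list_files("cats/*.jpg")   # hypothetical image folder
dataset = paths.map(preprocess).batch(32)
```

This way the CNN itself never sees varying dimensions; the resize happens in the input pipeline before the images reach the model.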
