Transfer learning with images of different input sizes

In the assignment it says: “After submitting your assignment later, try re-running this notebook but use the original resolution of 300x300”.

I was wondering how that works: if you change the input_shape of the InceptionV3 model, won't the number of parameters for all the layers be different? So how can you still use the downloaded InceptionV3 weights?
If you change the input_shape, shouldn't you have to retrain the whole model?

The number of model parameters is the same for InceptionV3 with input shapes (150, 150, 3) and (300, 300, 3). Convolutional filters and batch-normalization parameters depend only on the kernel size and the channel counts, not on the spatial dimensions of the input. Please see the Deep Learning Specialization to learn more about convolution and batch normalization layers.
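A quick way to see this: the parameter count of a conv layer is determined entirely by the kernel shape and the channel counts. A minimal sketch (the formulas below match how Keras counts Conv2D and BatchNormalization parameters, but the helper functions themselves are just illustrative):

```python
# Parameter count of a Conv2D layer: one (kernel_h x kernel_w x in_channels)
# weight tensor per output filter, plus one bias per filter. Note that the
# spatial size of the input image appears nowhere in this formula.
def conv2d_params(kernel_h, kernel_w, in_channels, out_channels):
    return (kernel_h * kernel_w * in_channels + 1) * out_channels

# Batch normalization: gamma, beta (trainable) plus moving mean and
# moving variance (non-trainable), one of each per channel. Again
# independent of the spatial input size.
def batchnorm_params(channels):
    return 4 * channels

# Example: a 3x3 conv with 32 filters on a 3-channel image has 896
# parameters, whether the image is 150x150 or 300x300.
print(conv2d_params(3, 3, 3, 32))   # → 896
print(batchnorm_params(32))         # → 128
```

This is why the pretrained weights still fit: changing the input resolution only changes the size of the intermediate feature maps, not the filters themselves.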

Also see this page on transfer learning and fine-tuning.

Ah, that’s right! In conv networks the parameters live in the filters, so their number doesn’t change when the image size changes. Thanks for the explanation.