Change pretrained MobilenetV2 model for monochrome input

Is a pre-trained model like the one used for the “alpaca model” suitable for classifying monochrome datasets? If so, what needs to change at the input, and are there different rules for what to freeze and what to train?

Hi, Eduardo. Yes, that should work. The various shades of grey are colors. :nerd_face: But the key point is that any time you use a pretrained model, you need to get your input into the exact representation the model was trained on: the same resolution and the same number of color channels, with pixel values normalized the same way. If you change anything about the format of the input, then transfer learning no longer works and all bets are off, meaning you are starting from scratch again.
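For the channel mismatch specifically, the usual trick is to replicate the single grey channel three times so a greyscale image becomes a valid RGB tensor. A minimal sketch with NumPy (the 96×96 image here is just a made-up example):

```python
import numpy as np

# Hypothetical 96x96 single-channel image with uint8 pixel values.
gray = np.random.randint(0, 256, size=(96, 96, 1), dtype=np.uint8)

# Replicate the one channel three times so the array has the
# (height, width, 3) shape an RGB-trained model expects.
rgb = np.repeat(gray, 3, axis=-1)

print(rgb.shape)  # (96, 96, 3)
```

All three channels end up identical, which is fine: the pretrained convolution filters still see a legal RGB input, just one with no color information.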

Have a look at the documentation for the TF version of MobileNetV2. It was trained on 224 x 224 x 3 RGB images, and it looks sophisticated enough to upscale or downscale your input images to that shape. The docs also say you need to feed everything through the specific preprocess_input routine, which renormalizes the pixel values. You’ll have to experiment to see whether it is also smart enough to handle images with one color channel. My guess is that you will have to do the conversion from one channel to three channels yourself, as a separate preprocessing step.
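Putting those steps together, a preprocessing pipeline for greyscale input might look like the sketch below. Note the assumptions: the nearest-neighbor resize is a stand-in for whatever proper image-resize call you use (e.g. tf.image.resize), and the final line mirrors what mobilenet_v2's preprocess_input does, namely scaling pixels from [0, 255] to [-1, 1]. The function name and the 100×100 test image are made up for illustration.

```python
import numpy as np

def to_mobilenet_input(gray, size=224):
    """Sketch: greyscale (H, W) uint8 array -> (size, size, 3) float in [-1, 1].

    Assumption: MobileNetV2's preprocess_input scales pixels to [-1, 1];
    the crude nearest-neighbor resize here is only a placeholder.
    """
    h, w = gray.shape
    rows = np.arange(size) * h // size            # nearest-neighbor row indices
    cols = np.arange(size) * w // size            # nearest-neighbor column indices
    resized = gray[rows][:, cols]                 # resize to size x size
    rgb = np.stack([resized] * 3, axis=-1)        # 1 channel -> 3 channels
    return rgb.astype(np.float32) / 127.5 - 1.0   # [0, 255] -> [-1, 1]

x = to_mobilenet_input(np.zeros((100, 100), dtype=np.uint8))
print(x.shape)  # (224, 224, 3)
```

Once the tensors are in this form, the usual transfer-learning recipe from the alpaca assignment applies unchanged: freeze the pretrained base and train only the new classification head, then optionally unfreeze the top layers for fine-tuning at a low learning rate.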