I am using a normal CNN model for time-series classification. My dataset1 has shape (28, 9, 1), and I trained my CNN model on dataset1, so the input layer of my CNN model is (28, 9, 1). Now I want to fine-tune my model on another dataset2, which has a different shape: (20, 5, 1). How do I fine-tune my model on dataset2 when its shape differs from the dataset my model was trained on?
Here is another doubt I have. The VGG model is trained on images of shape 224x224x3. If I want to fine-tune it on my custom dataset (images of shape 124x124x3), I can load the VGG model with input shape (124, 124, 3) and fine-tune on my data. That is fine, but how is the input shape of the architecture changed, given that VGG was trained on 224x224x3 images? The weight matrix between the input layer and the next layer will have a shape compatible with a 224x224x3 image. So if I load VGG with input shape (124, 124, 3), is that weight matrix removed and replaced by a new weight matrix compatible with a 124x124x3 image? And is this new weight matrix trained, or is it frozen?
In both cases you have a pretrained model and want to change the input size from the original one to a different one.
What I think you can do is crop or expand the image to fit the model's requirements, drop some final layer(s), add other layers if needed, keep the learned weights, and fine-tune further.
Theoretically one could remove the input layer and replace it, but I haven't come across this myself, and it would probably have implications for the whole downstream network, changing its entire structure (think of it as water pipes connected together, each with a different length and diameter). You can search the web to see whether this can be done, but probably not.
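For the VGG doubt specifically, here is a minimal Keras sketch (assuming TensorFlow's bundled VGG16 and a hypothetical 10-class head). The key point is that convolutional weights do not depend on the spatial size of the input; only the Dense layers at the top do. So Keras can reuse the pretrained conv weights at a new input shape once the top is dropped, and only the newly added head is trained from scratch:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# include_top=False drops the Dense layers, whose weight shapes
# depend on the original 224x224x3 input size. The conv kernels
# are reused unchanged at the new input shape.
base = VGG16(weights="imagenet", include_top=False, input_shape=(124, 124, 3))
base.trainable = False  # freeze the pretrained conv weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),  # hypothetical 10-class head
])
```

So nothing is "removed and replaced" between the input and first conv layer; the weight matrices that genuinely have to go are the fully connected ones at the top.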
@gent.spah explained very well what you would have to do. My key takeaway from this is: the input has to be the same shape.
Got it. But my data is not images; it is time-series data, and some features differ in dataset2. I just want to know how fine-tuning on dataset2 will affect the model. Is there any chance of poor performance from doing it this way?
Well, the natural question here is: are dataset1 and dataset2 related in any way? If they are, you can most definitely reuse a lot of what was learned before; if they are not, you can still reuse some already-learned features.
Transfer learning and fine-tuning work by reusing higher-level features rather than low-level details, so in some way it can be helpful either way.
The other aspect to consider is how much data you have for fine-tuning.
If your data is a time series, for instance video clips where each clip has a different frame count, then you can consider using what's called "ragged tensors". The TensorFlow documentation on ragged tensors may be just what you need.
With ragged tensors you can feed a model with, say, movie clips, where each movie clip may have different number of frames.
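As a minimal sketch of that idea (assuming TensorFlow 2.x and hypothetical clips of 4 features per frame), a ragged tensor holds clips of different lengths in one batch, and a Keras model can consume it directly via `ragged=True`:

```python
import tensorflow as tf

# Three hypothetical clips with 3, 5, and 2 frames of 4 features each.
clips = tf.ragged.constant([
    [[0.1] * 4] * 3,
    [[0.2] * 4] * 5,
    [[0.3] * 4] * 2,
])  # ragged shape (3, None, 4): the frame axis varies per clip

# A recurrent layer can consume the ragged batch as-is.
inputs = tf.keras.Input(shape=(None, 4), ragged=True)
x = tf.keras.layers.LSTM(8)(inputs)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

preds = model(clips)  # one prediction per clip, despite varying lengths
```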
Another option is clipping and padding:
Continuing with the video-clips example, let's say your clips range from 5 frames to 100 frames (some will have 5, some 20, some 40, etc.). With clipping and padding, you define a 'standard' frame count, say 30. Every video clip with fewer than 30 frames is padded with zeros, and every clip with more than 30 frames is clipped.
The clipping can be done by dropping frames in an interpolated fashion, so that you keep a sample of each scene across the entire video clip. One handy TensorFlow function for this is `tf.image.resize`, which supports several interpolation methods.
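A minimal sketch of pad-or-clip, assuming clips shaped (frames, features) and a hypothetical standard length of 30. Instead of `tf.image.resize`, this version samples evenly spaced frame indices with `tf.gather`, which achieves the same interpolated-dropping effect for a 1-D frame axis:

```python
import tensorflow as tf

TARGET_FRAMES = 30  # hypothetical 'standard' frame count

def pad_or_clip(clip):
    """Pad short clips with zero-frames; subsample long ones evenly."""
    n = tf.shape(clip)[0]
    if n < TARGET_FRAMES:
        # Pad with zero-frames at the end up to the standard length.
        padding = tf.zeros([TARGET_FRAMES - n, tf.shape(clip)[1]], clip.dtype)
        return tf.concat([clip, padding], axis=0)
    # Drop frames in an interpolated fashion: take TARGET_FRAMES
    # evenly spaced indices across the whole clip.
    idx = tf.cast(
        tf.linspace(0.0, tf.cast(n - 1, tf.float32), TARGET_FRAMES), tf.int32
    )
    return tf.gather(clip, idx)

short = tf.random.uniform([5, 9])    # 5 frames of 9 features -> padded
long_ = tf.random.uniform([100, 9])  # 100 frames -> evenly subsampled
```

Both `pad_or_clip(short)` and `pad_or_clip(long_)` come out with shape (30, 9), so every clip fits the same input layer.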
In summary, you have two options:
- Ragged tensors, a feature provided by TensorFlow.
- Padding and clipping.
Please share your thoughts!