C3_W1_Transfer Learning

I want to implement the following model, but I don't know how to stack different models using transfer learning.

Have you seen the flatten and concatenate layers?
You could feed the concatenated output from the individual models forward to predict both outcomes simultaneously. See multi-task learning for the details about level-2 stacking.


@balaji.ambresh Let's say I don't want to do level-2 stacking and am only interested in a 3-class classification problem. I am stuck on level-1 stacking: I understand it conceptually, but I can't figure out how to implement it.

@balaji.ambresh I got it, thank you so much for these hints. I was trying to use both VGG-19 and DenseNet-121 simultaneously (I thought there would be some option in TensorFlow for that), but now I realize I have to create both models separately, flatten the output of each, concatenate the flattened outputs, and then feed the result to a classifier for prediction. Am I right?
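A minimal sketch of that idea: both backbones are built separately as frozen feature extractors, their outputs are flattened and concatenated, and a dense softmax head does the 3-class prediction. The 96×96 input size and `weights=None` are assumptions just to keep the example light; in practice you would use `weights="imagenet"`.

```python
import tensorflow as tf

IMG_SIZE = (96, 96)  # example size; both backbones accept it

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))

# Two backbones as feature extractors: include_top=False drops each
# model's own prediction layer. weights=None avoids a weight download
# for this sketch; use weights="imagenet" for real transfer learning.
vgg = tf.keras.applications.VGG19(
    include_top=False, weights=None, input_shape=IMG_SIZE + (3,))
densenet = tf.keras.applications.DenseNet121(
    include_top=False, weights=None, input_shape=IMG_SIZE + (3,))
vgg.trainable = False
densenet.trainable = False

# Level-1 stacking: run the same input through both extractors,
# flatten each feature map, then concatenate.
x1 = tf.keras.layers.Flatten()(vgg(inputs))
x2 = tf.keras.layers.Flatten()(densenet(inputs))
merged = tf.keras.layers.Concatenate()([x1, x2])

# Classifier head for the 3-class problem.
outputs = tf.keras.layers.Dense(3, activation="softmax")(merged)
model = tf.keras.Model(inputs, outputs)
```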

@balaji.ambresh I have one more question. MobileNetV2 takes a (160, 160) input and InceptionV3 takes a (150, 150) input, so do I have to create two different datasets, one for each model, or is there some other solution? Because if I create two datasets using:

train_dataset = image_dataset_from_directory(directory,
                                             shuffle=True,
                                             image_size=IMG_SIZE,
                                             batch_size=BATCH_SIZE)

then the second dataset ends up in a different order because of shuffling, so how can I solve this problem?

  1. Use both models as feature extractors. Set include_top = False to get rid of the prediction layer, flatten the output of each model, and then concatenate the flattened outputs to feed into the next layer.
  2. Since you'll be using these models as feature extractors, you can specify a different input_shape for each model, i.e. a single dataset is sufficient.
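A sketch of point 2, under the assumption that one 160×160 pipeline feeds both branches: the MobileNetV2 branch consumes the input directly, while a Resizing layer scales it to 150×150 for the InceptionV3 branch, so only a single dataset is needed. `weights=None` keeps the example light; use `weights="imagenet"` in practice.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(160, 160, 3))

# Two backbones that prefer different input sizes, both used as
# frozen feature extractors (include_top=False drops their heads).
mobilenet = tf.keras.applications.MobileNetV2(
    include_top=False, weights=None, input_shape=(160, 160, 3))
inception = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, input_shape=(150, 150, 3))
mobilenet.trainable = False
inception.trainable = False

# MobileNetV2 takes the 160x160 input directly; the InceptionV3
# branch resizes the same input to 150x150 first, so one dataset
# serves both models.
x1 = tf.keras.layers.Flatten()(mobilenet(inputs))
resized = tf.keras.layers.Resizing(150, 150)(inputs)
x2 = tf.keras.layers.Flatten()(inception(resized))

merged = tf.keras.layers.Concatenate()([x1, x2])
outputs = tf.keras.layers.Dense(3, activation="softmax")(merged)
model = tf.keras.Model(inputs, outputs)
```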