About the use of tf.keras.layers.Flatten

In the last assignment we used a pretrained model as a feature extractor followed by a classifier, without putting a Flatten layer between them.

import tensorflow as tf
import tensorflow_hub as hub

class ResNetModel(tf.keras.Model):
    def __init__(self, classes):
        super(ResNetModel, self).__init__()
        # MODULE_HANDLE is defined earlier in the assignment (a TF Hub module URL)
        self._feature_extractor = hub.KerasLayer(MODULE_HANDLE,
                                                 trainable=False)
        self._classifier = tf.keras.layers.Dense(classes, activation='softmax')

    def call(self, inputs):
        x = self._feature_extractor(inputs)
        x = self._classifier(x)
        return x

I believed that it is always necessary to insert that Flatten() layer. Can anyone explain to me why it is not necessary in this case?
Thank you in advance for the answers.
Ps: I meant to post this question in the TensorFlow Advanced Techniques Course 2 Week 4 category, but I failed :slight_smile:

Hi, @Hermes_Morales_Gross!

I don’t know if there is an error in the code you posted, but I can see a Flatten layer right before the Dense layer. Is that what you meant?

Ps: I meant to post this question in the TensorFlow Advanced Techniques Course 2 Week 4 category, but I failed :slight_smile:

Don’t worry, I just changed it :wink:

Thank you @alvaroramajo for your answer. I have edited the question and put the code as it was originally posted in the assignment (C2W4). It does not have a Flatten layer.
When I modified the code to improve the classifier, I added one because it seemed to me that it was necessary.
Now I have actually tried without it, and it is not necessary. So I wonder: how do I know whether it is a must to put a Flatten (or GlobalAveragePooling) layer between a pretrained model and the classifier?

You can always check with model.summary() whether the previous layer’s output is already flattened (that’s what I think is happening here). Anyway, depending on the framework, sometimes a flatten is implicitly applied before the dense layer if the rank of the input is higher than expected, e.g., a convolutional output.
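As a quick check (a rough sketch, using a hypothetical TF Hub handle for illustration rather than the exact MODULE_HANDLE from the assignment), you can call the feature extractor on a dummy batch and look at the rank of its output:

import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical handle for illustration; in the assignment, use MODULE_HANDLE.
# "feature_vector" modules already apply global pooling internally.
handle = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5"
feature_extractor = hub.KerasLayer(handle, trainable=False)

dummy = tf.zeros([1, 224, 224, 3])   # fake batch with a single image
features = feature_extractor(dummy)
print(features.shape)                # e.g. (1, 2048): already rank 2, no Flatten needed

If the output had been rank 4 (batch, height, width, channels), as with a plain convolutional backbone, you would need a Flatten or GlobalAveragePooling2D before the Dense classifier.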

Thank you @alvaroramajo for your answer. I tried with model.summary() and nothing interesting appeared. :slightly_frowning_face:

Model: "res_net_model_2"
_________________________________________________________________
Layer (type)                Output Shape              Param #   
=================================================================
 keras_layer_2 (KerasLayer)  multiple                  23561152  
                                                             
 dense_2 (Dense)             multiple                  208998    
                                                             
=================================================================
Total params: 23,770,150
Trainable params: 208,998
Non-trainable params: 23,561,152

That may happen when you don’t have a specific input layer with fixed dimensions, or when using pre-defined groups of layers.
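For example (a minimal sketch with a hypothetical hub handle and a placeholder class count, not necessarily the assignment’s exact values), building the same model with the Functional API and a fixed Input lets summary() report concrete output shapes instead of "multiple":

import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical handle and class count, just for illustration.
handle = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5"
num_classes = 102

# Fixing the input shape up front lets Keras trace the shapes,
# so summary() shows e.g. (None, 2048) instead of "multiple".
inputs = tf.keras.Input(shape=(224, 224, 3))
x = hub.KerasLayer(handle, trainable=False)(inputs)
outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)
model.summary()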