Hello everyone!
In the second assignment of Course 4 Week 2, I noticed that after adapting MobileNetV2 for alpaca classification, the number of layers "seems" to have dropped from 157 to 8. I was expecting 158 layers, since we removed the last layer and replaced it with average pooling and a new output layer. Instead, the summary shows the whole network as just the following 8 layers:
```
['InputLayer', [(None, 160, 160, 3)], 0]
['Sequential', (None, 160, 160, 3), 0]
['TensorFlowOpLayer', [(None, 160, 160, 3)], 0]
['TensorFlowOpLayer', [(None, 160, 160, 3)], 0]
['Functional', (None, 5, 5, 1280), 2257984]
['GlobalAveragePooling2D', (None, 1280), 0]
['Dropout', (None, 1280), 0, 0.2]
['Dense', (None, 1), 1281, 'linear']
```
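For context, here is roughly how the model gets assembled in the assignment (a sketch from memory, so exact layer names, API versions, and arguments may differ slightly from the notebook):

```python
import tensorflow as tf

IMG_SIZE = (160, 160)

# Augmentation pipeline -- appears as the single "Sequential" row above
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),
    tf.keras.layers.RandomRotation(0.2),
])

# Pretrained backbone without its classifier head -- appears as the
# single "Functional" row with 2,257,984 parameters
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights='imagenet')
base_model.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = data_augmentation(inputs)
# Rescaling to [-1, 1] -- these ops show up as the "TensorFlowOpLayer" rows
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base_model(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1)(x)  # one linear logit for binary output
model = tf.keras.Model(inputs, outputs)
```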
So I would like to ask the following:
- What happened to all those layers from the base_model?
- What are the "Sequential", "TensorFlowOpLayer" and "Functional" layers?
- Is the “Functional” layer the actual MobileNetV2, but with the details hidden?
- Why can't we see the overall neural network architecture in the summary?
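For what it's worth, my guess is that the nested models still carry all of their layers one level down, and something like this untested sketch should confirm it (assuming the Functional entry sits at index 4, as in the printout above):

```python
# The top-level summary counts each nested model as a single "layer"
print(len(model.layers))      # -> 8

# Grab the nested backbone (index 4 = the 'Functional' row above)
backbone = model.layers[4]
print(len(backbone.layers))   # the MobileNetV2 layers should all still be here
backbone.summary()            # prints the full MobileNetV2 architecture
```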
Thank you very much in advance and have an amazing day!