Understanding the output of model.summary() for ResNet50

The code below gives me the output shown further down. (The code just builds the model and prints its summary; since it is not part of the assignment solution I thought there was no issue sharing it here, but let me know if there is and I'll delete it.)

My question:

I have pasted a snapshot of the output below. My mental model is that when a batch of examples is passed through, it goes through all the stages in order, and then the next batch is passed. But that understanding does not match the output here. I can map the padding and stage 1, but the later stages don't line up the same way: per the code, stage 2 should consist of a convolutional block followed by two identity blocks, so I expected the 'Connected to' column to chain those outputs one after another, yet I could only do this mapping for stage 1, as shown below.

Please help clear my understanding.
Thanks

import tensorflow as tf

# Put the backend into training mode (affects layers such as BatchNormalization)
tf.keras.backend.set_learning_phase(True)

# ResNet50 is the model-building function defined in the assignment notebook
model = ResNet50(input_shape=(64, 64, 3), classes=6)
model.summary()  # summary() prints directly; wrapping it in print() just adds a trailing "None"

This is the output:

Layer (type)                                 Output Shape          Param #   Connected to

input_1 (InputLayer)                         [(None, 64, 64, 3)]   0

zero_padding2d (ZeroPadding2D)               (None, 70, 70, 3)     0         ['input_1[0][0]']
--------------------- Padding
conv2d_24 (Conv2D)                           (None, 32, 32, 64)    9472      ['zero_padding2d[0][0]']

batch_normalization_24 (BatchNormalization)  (None, 32, 32, 64)    256       ['conv2d_24[0][0]']

activation_21 (Activation)                   (None, 32, 32, 64)    0         ['batch_normalization_24[0][0]']

max_pooling2d (MaxPooling2D)                 (None, 15, 15, 64)    0         ['activation_21[0][0]']
--------------------------- stage 1
conv2d_25 (Conv2D)                           (None, 15, 15, 64)    4160      ['max_pooling2d[0][0]']

batch_normalization_25 (BatchNormalization)  (None, 15, 15, 64)    256       ['conv2d_25[0][0]']

activation_22 (Activation)                   (None, 15, 15, 64)    0         ['batch_normalization_25[0][0]']
---------------------------------
conv2d_26 (Conv2D)                           (None, 15, 15, 64)    36928     ['activation_22[0][0]']

batch_normalization_26 (BatchNormalization)  (None, 15, 15, 64)    256       ['conv2d_26[0][0]']

activation_23 (Activation)                   (None, 15, 15, 64)    0         ['batch_normalization_26[0][0]']

conv2d_27 (Conv2D)                           (None, 15, 15, 256)   16640     ['activation_23[0][0]']

conv2d_28 (Conv2D)                           (None, 15, 15, 256)   16640     ['max_pooling2d[0][0]']

batch_normalization_27 (BatchNormalization)  (None, 15, 15, 256)   1024      ['conv2d_27[0][0]']

batch_normalization_28 (BatchNormalization)  (None, 15, 15, 256)   1024      ['conv2d_28[0][0]']

Hi @Fas

As you know, each ResNet block has both a main path and a shortcut (skip connection) path. In your output, conv2d_28 is applied directly to max_pooling2d (the stage 1 output): that is the shortcut. conv2d_25 → conv2d_26 → conv2d_27 form the main path. Both paths are batch-normalized separately before being merged in the residual block. That is why the listing isn't sequential: model.summary() prints the layers of the two parallel branches interleaved, and the 'Connected to' column shows the real graph structure.
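To see why that ordering is expected, here is a minimal sketch of such a convolutional block in the Keras functional API. The function name, filter sizes, and the layer-name comments are my assumptions read off your summary, not the assignment's exact code:

import tensorflow as tf
from tensorflow.keras import layers

def convolutional_block(X, filters=(64, 64, 256), s=1):
    # Sketch of a ResNet convolutional block: a three-conv main path
    # plus a one-conv shortcut, merged by Add before the final ReLU.
    F1, F2, F3 = filters
    X_shortcut = X  # keep the block input for the shortcut branch

    # Main path: conv -> BN -> ReLU twice, then conv -> BN (no ReLU yet)
    X = layers.Conv2D(F1, 1, strides=s)(X)        # e.g. conv2d_25
    X = layers.BatchNormalization()(X)            # batch_normalization_25
    X = layers.Activation('relu')(X)              # activation_22
    X = layers.Conv2D(F2, 3, padding='same')(X)   # conv2d_26
    X = layers.BatchNormalization()(X)            # batch_normalization_26
    X = layers.Activation('relu')(X)              # activation_23
    X = layers.Conv2D(F3, 1)(X)                   # conv2d_27
    X = layers.BatchNormalization()(X)            # batch_normalization_27

    # Shortcut path: one conv + BN applied directly to the block input
    X_shortcut = layers.Conv2D(F3, 1, strides=s)(X_shortcut)  # conv2d_28
    X_shortcut = layers.BatchNormalization()(X_shortcut)      # batch_normalization_28

    # Merge the two branches, then apply the final activation
    X = layers.Add()([X, X_shortcut])
    return layers.Activation('relu')(X)

inputs = layers.Input(shape=(15, 15, 64))
outputs = convolutional_block(inputs)
tf.keras.Model(inputs, outputs).summary()

If you run this, the summary shows the same interleaving you see above: the shortcut conv appears right after the last main-path conv, and both BatchNormalization layers are listed before the Add that merges the branches.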

Hope it helps! Feel free to ask if you need further assistance.
