I did not get the expected number of parameters in my FCN8 model, and I don't know whether one of the layers is set up wrong. Could anybody help me check it?
The output shapes are correct, but the parameter counts do not match the expected ones.
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) [(None, 64, 84, 1)] 0
zero_padding2d_8 (ZeroPadding2D) (None, 64, 96, 1) 0
conv2d_74 (Conv2D) (None, 64, 96, 32) 224
leaky_re_lu_74 (LeakyReLU) (None, 64, 96, 32) 0
conv2d_75 (Conv2D) (None, 64, 96, 32) 6176
leaky_re_lu_75 (LeakyReLU) (None, 64, 96, 32) 0
max_pooling2d_37 (MaxPooling2D) (None, 32, 48, 32) 0
batch_normalization_37 (BatchNormalization) (None, 32, 48, 32) 128
conv2d_76 (Conv2D) (None, 32, 48, 64) 12352
leaky_re_lu_76 (LeakyReLU) (None, 32, 48, 64) 0
conv2d_77 (Conv2D) (None, 32, 48, 64) 24640
leaky_re_lu_77 (LeakyReLU) (None, 32, 48, 64) 0
max_pooling2d_38 (MaxPooling2D) (None, 16, 24, 64) 0
batch_normalization_38 (BatchNormalization) (None, 16, 24, 64) 256
conv2d_78 (Conv2D) (None, 16, 24, 128) 49280
leaky_re_lu_78 (LeakyReLU) (None, 16, 24, 128) 0
conv2d_79 (Conv2D) (None, 16, 24, 128) 98432
leaky_re_lu_79 (LeakyReLU) (None, 16, 24, 128) 0
max_pooling2d_39 (MaxPooling2D) (None, 8, 12, 128) 0
batch_normalization_39 (BatchNormalization) (None, 8, 12, 128) 512
conv2d_80 (Conv2D) (None, 8, 12, 256) 196864
leaky_re_lu_80 (LeakyReLU) (None, 8, 12, 256) 0
conv2d_81 (Conv2D) (None, 8, 12, 256) 393472
leaky_re_lu_81 (LeakyReLU) (None, 8, 12, 256) 0
max_pooling2d_40 (MaxPooling2D) (None, 4, 6, 256) 0
batch_normalization_40 (BatchNormalization) (None, 4, 6, 256) 1024
conv2d_82 (Conv2D) (None, 4, 6, 256) 393472
leaky_re_lu_82 (LeakyReLU) (None, 4, 6, 256) 0
conv2d_83 (Conv2D) (None, 4, 6, 256) 393472
leaky_re_lu_83 (LeakyReLU) (None, 4, 6, 256) 0
max_pooling2d_41 (MaxPooling2D) (None, 2, 3, 256) 0
batch_normalization_41 (BatchNormalization) (None, 2, 3, 256) 1024
=================================================================
Total params: 1,571,328
Trainable params: 1,569,856
Non-trainable params: 1,472
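For what it's worth, the parameter counts can be checked by hand: a Keras Conv2D layer has filters * (kernel_h * kernel_w * in_channels + 1) parameters (the +1 is the bias). This is a small sanity-check sketch I wrote myself, not part of my model code:

```python
# Parameter count of a Conv2D layer (Keras convention, with bias).
def conv2d_params(filters, kernel_size, in_channels, use_bias=True):
    kh, kw = kernel_size
    return filters * (kh * kw * in_channels + int(use_bias))

# First conv in the summary: 32 filters on a 1-channel input.
print(conv2d_params(32, (3, 3), 1))  # 320 for a 3x3 kernel
print(conv2d_params(32, (2, 3), 1))  # 224 -- what my summary shows
```

The 224 in my summary works out to 6 weights per filter per input channel, so it looks like the kernel is not the 3x3 I expected (a 3x3 kernel would give 320). Maybe that is where my mismatch comes from?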
Besides, I tried changing the kernel_size of the Conv2DTranspose in the fcn8_decoder function. However, if I change it to a non-square shape, the model no longer works. I may not fully understand how the kernel size is calculated here. Could anybody tell me how you chose the kernel_size? Thank you.
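To show what I mean, I computed the upsampled sizes by hand using the standard transposed-convolution output formula (the example sizes below are made up, not from my actual decoder):

```python
# Output length of a Conv2DTranspose along one axis (Keras conventions):
#   padding='same'  -> in_len * stride
#   padding='valid' -> (in_len - 1) * stride + kernel
def deconv_out(in_len, kernel, stride, padding):
    if padding == "same":
        return in_len * stride
    return (in_len - 1) * stride + kernel

# Upsampling a 2x3 feature map by stride 2 with a 4x4 kernel:
print(deconv_out(2, 4, 2, "valid"), deconv_out(3, 4, 2, "valid"))  # 6 8
print(deconv_out(2, 4, 2, "same"), deconv_out(3, 4, 2, "same"))    # 4 6
```

With padding='valid' the kernel size enters the output size directly, so a non-square kernel stretches height and width by different amounts; is that why the skip-connection shapes in the decoder stop matching?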