Week 1 - Convolution_model_Application (Understanding)

I have completed the assignment successfully, but appreciate help on understanding the rationale behind some of what I did:

  1. ZeroPadding2D - I do not understand the official documentation. Why does the last dimension remain 2 if it is padded? Should it not increase to 4?

https://www.tensorflow.org/api_docs/python/tf/keras/layers/ZeroPadding2D

  2. BatchNormalization - Why axis = 3?

Thank you for your help

Welcome to the community.

I think the confusion comes from the difference in image data format between TensorFlow and PyTorch, and the interchangeability between the two.

We are using TensorFlow, so the default image format is (batch_size, height, width, channels). PyTorch is another machine learning framework for Python; its default image format is (batch_size, channels, height, width). To support both formats, TensorFlow layers have a “data_format” option to specify either “channels_last” (the default) or “channels_first”.
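
For example, here is a quick sketch of the two layouts (the shapes are made up just for illustration):

```python
import tensorflow as tf

# A batch of 4 RGB images of size 64x64 in TensorFlow's default
# "channels_last" layout: (batch_size, height, width, channels).
images_tf = tf.zeros((4, 64, 64, 3))
print(images_tf.shape)  # (4, 64, 64, 3)

# The same data in PyTorch's default "channels_first" layout would be
# shaped (batch_size, channels, height, width), i.e. (4, 3, 64, 64).
```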

ZeroPadding2D is an excellent function to add “padding” to the “height” and “width” dimensions. Remember that you implemented “zero_pad(X, pad)” in Exercise 1 of W1A1, which also added padding only to “height” and “width”. That is the equivalent of ZeroPadding2D for “channels_last” data. In both cases, the first dimension (axis=0, the batch) and the last dimension (axis=3, the channels) are unchanged.
So, to answer your first question, the last dimension (channels) stays “2”.
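
Here is a minimal sketch of that behavior (the shape is only an example, not the one from the assignment):

```python
import tensorflow as tf

# Input with 2 channels: (batch_size=1, height=3, width=3, channels=2).
x = tf.zeros((1, 3, 3, 2))

# Pad height and width by 1 pixel on each side; the channels are untouched.
y = tf.keras.layers.ZeroPadding2D(padding=1)(x)
print(y.shape)  # (1, 5, 5, 2) -- height and width grow, the last dimension stays 2
```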

For the 2nd question, that is also related to the “channels_last” vs. “channels_first” format. Since we want BatchNormalization to normalize over the channels, we explicitly pass “axis=3” to say “the channels are in the last dimension of this data”. (Of course, BatchNormalization does not know what “channels_first” is… so we just give it the axis number. :slight_smile: )
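
As a small illustration (shapes chosen arbitrarily, not taken from the assignment):

```python
import tensorflow as tf

# channels_last input: (batch_size, height, width, channels)
x = tf.random.normal((4, 32, 32, 3))

# axis=3 tells BatchNormalization to normalize per channel, so it learns
# one (gamma, beta) pair for each of the 3 channels. For channels_last
# data, the default axis=-1 points to the same dimension.
bn = tf.keras.layers.BatchNormalization(axis=3)
y = bn(x, training=True)
print(y.shape)         # (4, 32, 32, 3)
print(bn.gamma.shape)  # (3,) -- one scale parameter per channel
```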

Hi Nobu,

Thanks so much for your clarification. It is clear now!

Best regards,
Antonio