Hi,
I have successfully submitted the programming assignment. This is a conceptual doubt that I encountered while completing the assignment.

We constructed the Identity block keeping in mind that the tensor coming out of the main path must have the same dimensions as the tensor coming via the skip connection; that's why we didn't put any Conv layers in the skip connection. But I noticed that while we make sure the height and width match, the number of channels need not be equal. So my doubt is: if the two tensors don't match in every dimension, how are they getting added together using Add() without any errors?

It should be noted that in the Convolutional Block, the code makes sure that all the dimensions (including the number of channels) of both tensors are equal.

Let's say the input tensor X has the shape (m, n_H_prev, n_W_prev, n_C_prev),
And we have defined, X_shortcut = X
=> Number of channels in X_shortcut = n_C_prev
Further, the final component of the main path is defined as:
X = Conv2D(filters = F3, kernel_size = (1,1), strides = (1,1), padding = 'valid', kernel_initializer = initializer(seed=0))(X)
=> Number of channels in the tensor coming out of the main path = F3

So, if F3 is not equal to n_C_prev, the shape of the tensor coming out of the main path will not match the shape of the tensor coming via the skip connection.
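To see concretely why this would fail, here is a minimal NumPy sketch (the channel counts n_C_prev = 3 and F3 = 6 are assumed example values, not from the assignment). Elementwise addition requires the channel dimensions to agree, and Keras's Add() is even stricter, requiring identical shapes:

```python
import numpy as np

n_C_prev = 3   # assumed example: channels of X_shortcut
F3 = 6         # assumed example: channels out of the final 1x1 conv

x_shortcut = np.zeros((1, 4, 4, n_C_prev))
x_main = np.zeros((1, 4, 4, F3))

# Adding (1, 4, 4, 3) to (1, 4, 4, 6) raises ValueError,
# just as Keras Add() would reject the mismatched shapes.
try:
    _ = x_shortcut + x_main
    raised = False
except ValueError:
    raised = True
print(raised)  # True: the tensors cannot be added elementwise
```

So if F3 != n_C_prev, the Add() layer in the identity block would indeed error out at model-build time.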

And yes, I checked again: in the example given in the programming assignment, the values of F3 are chosen so that this issue doesn't occur. Still, it feels risky to have to track the shape of the tensor flowing through the network and adjust the parameters of the Identity block function so that the shapes don't mismatch.
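For contrast, this is roughly what the Convolutional Block's projection shortcut buys you. A 1x1 convolution acts as a per-pixel linear map over channels, so it can be sketched as a matrix multiply on the last axis; the channel counts below are assumed example values:

```python
import numpy as np

n_C_prev, F3 = 3, 6  # assumed example channel counts
x_shortcut = np.random.randn(1, 4, 4, n_C_prev)

# A 1x1 conv with F3 filters projects n_C_prev channels to F3 channels;
# modeled here as a matrix multiply over the channel axis.
W = np.random.randn(n_C_prev, F3)
projected = x_shortcut @ W          # shape becomes (1, 4, 4, F3)

x_main = np.random.randn(1, 4, 4, F3)
out = projected + x_main            # now the addition succeeds
print(out.shape)  # (1, 4, 4, 6)
```

That projection is exactly why the Convolutional Block can change the number of channels safely, while the Identity block only works when F3 equals the incoming channel count.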