Throughout the first week, the instructor said that we have to keep the number of channels the same, but in this video, when applying filters, he keeps changing the number of channels and making it larger.

For example, he used 10 filters while the input had 3 channels, then 20 instead of 10 in the next layer, and so on.

Hi @Sami_Ullah5,

In convolutional layers, the image height and width usually decrease (depending on filter size, stride, and padding), while the depth (number of channels) often increases, and this is by design in CNNs. The number of filters in a layer determines the output depth. More filters allow the network to learn more features: early layers capture simple patterns like edges, while deeper layers learn more complex features, which is why increasing the number of channels improves the representation.
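To make the "number of filters determines the depth" point concrete, here is a minimal pure-Python sketch of a "valid" convolution (stride 1, no padding); the sizes (3-channel 5x5 input, ten 3x3x3 filters) are illustrative assumptions, not from the course:

```python
import random

def conv_forward(x, filters):
    """x: input volume [n_C_in][h][w]; filters: [n_C_out][n_C_in][f][f]."""
    n_c_in, h, w = len(x), len(x[0]), len(x[0][0])
    f = len(filters[0][0])
    h_out, w_out = h - f + 1, w - f + 1          # "valid" conv, stride 1
    out = []
    for filt in filters:                          # one output channel per filter
        channel = [[0.0] * w_out for _ in range(h_out)]
        for i in range(h_out):
            for j in range(w_out):
                s = 0.0
                for c in range(n_c_in):           # filter depth matches input depth
                    for a in range(f):
                        for b in range(f):
                            s += x[c][i + a][j + b] * filt[c][a][b]
                channel[i][j] = s
        out.append(channel)
    return out

random.seed(0)
# 3-channel 5x5 input, like an RGB patch
x = [[[random.random() for _ in range(5)] for _ in range(5)] for _ in range(3)]
# 10 filters, each 3 x 3 x 3 (filter depth = input depth)
filters = [[[[random.random() for _ in range(3)] for _ in range(3)]
            for _ in range(3)] for _ in range(10)]

y = conv_forward(x, filters)
print(len(y), len(y[0]), len(y[0][0]))  # 10 3 3: depth went from 3 to 10
```

Note that each filter spans all 3 input channels but produces a single 3x3 output channel; stacking the 10 filters is what gives the output its depth of 10.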

Hope that clears it up!

Btw, could you point out exactly where the instructor mentioned keeping the channels the same?


My guess is that this is just a confusion between the *input* channels and the *output* channels. When you define a convolutional layer, you define the shape of the filters to be used: they have h and w dimensions, but their channel dimension must match the number of channels of the input. So the *input* channels must be the same. Then each filter produces one output channel, and the total number of output channels is determined by how many filters you choose to have at that layer. As Alireza mentions, the typical pattern as you go through the multiple layers of a ConvNet is that the h and w dimensions decrease and the channel dimension increases.

So the shape of the weights array W for a convolutional layer is:

f x f x nC_in x nC_out

Note that it’s not technically required that the h and w dimensions be the same, but every example Prof Ng shows us has “square” filters and typically the f value is an odd number. He does explain in the lectures why odd numbers for f are preferred.


Thank you so much for clearing up the confusion.
