Hello all.
I have a few questions, which I suspect come from something I misunderstood, but I keep getting confused by this, so I have tried to post it as clearly as I can.
In the video "One Layer Convolutional Network", towards the end, Andrew shows a summary of the dimensions of each component of a convolutional layer, where the number of filters is given by n_c^{[l]}. Referring to the toy example he gives around minute 4:36, he says we have 2 filters.
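To make sure we are looking at the same summary, this is how I wrote it down (my own transcription, so it may differ slightly from the slide):

Filter: f^{[l]} x f^{[l]} x n_c^{[l-1]}
Number of filters: n_c^{[l]}
Weights: f^{[l]} x f^{[l]} x n_c^{[l-1]} x n_c^{[l]}
Bias: 1 x 1 x 1 x n_c^{[l]}
Input: n_H^{[l-1]} x n_W^{[l-1]} x n_c^{[l-1]}
Output: n_H^{[l]} x n_W^{[l]} x n_c^{[l]}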
However, as I understood from Question 3 in my quiz (given an RGB image of 256x256 and a convolutional layer with 128 filters of shape 7x7), the correct answer implies that n_c^{[l-1]} is not the number of filters but the depth of the given image. Isn't this an ambiguity in the equations? The same can be said of my Question 4 regarding the input and output equations of the same summary in "One Layer Convolutional Network": n_c^{[l-1]} refers to the number of channels of the previous "image", while n_c^{[l]} refers to the number of filters in the conv layer, right?
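To make the quiz example concrete, this is how I currently read it. Below is a small sanity check I wrote myself (not anything from the course); it only uses NumPy, and the stride of 1 with no padding is my own assumption just to have concrete output numbers:

```python
import numpy as np

# Input volume: 256 x 256 RGB image, so n_c^{[l-1]} = 3 (depth of the image)
n_H_prev, n_W_prev, n_c_prev = 256, 256, 3
# Conv layer: 128 filters, each of spatial size 7 x 7
f, n_c = 7, 128

# Each filter spans the full depth of its input, so its shape is f x f x n_c^{[l-1]}
W = np.zeros((f, f, n_c_prev, n_c))   # weights: 7 x 7 x 3 x 128
b = np.zeros((1, 1, 1, n_c))          # one bias per filter

# Assuming stride 1 and "valid" padding (my assumption, for concreteness):
n_H = n_H_prev - f + 1                # 250
n_W = n_W_prev - f + 1                # 250

print(W.size + b.size)                # (7*7*3 + 1) * 128 = 18944 parameters
print((n_H, n_W, n_c))                # output volume: (250, 250, 128), i.e. n_c^{[l]} = 128
```

So the way I read it, n_c^{[l-1]} = 3 comes from the depth of the input image, while n_c^{[l]} = 128 comes from the number of filters.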
The bottom line is: isn't the parameter n_c being used ambiguously, to describe both the number of filters of the layer in n_c^{[l]} (as Andrew says in the video) AND the number of channels of the previous layer in n_c^{[l-1]}? Which is in fact suggestive, since "n_c" can easily be read as "number_(of)_channels".
I hope I have not made this too complicated.
Kind Regards.
Ricardo