Hi teaching team, I just want to make sure I understood this correctly.
In the Week 1 lecture (one-layer-of-a-convolutional-network) delivered by Dr. Andrew Ng,
around 2:05 he says that one layer of a CNN turns a 6 x 6 x 3 input into a 4 x 4 x 4 output.
Did he mean 4 x 4 x 2 instead? On the slide, he wrote 4 x 4 x 2.
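For what it's worth, here is a quick NumPy sketch I used to check the arithmetic, assuming (as on the slide) two 3 x 3 x 3 filters, stride 1, and no padding; the variable names are my own:

```python
import numpy as np

# One conv layer: 6x6x3 input, two 3x3x3 filters, stride 1, no padding.
# Output height/width = (n - f)/stride + 1 = (6 - 3) + 1 = 4,
# and output depth = number of filters = 2, so the output is 4x4x2.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6, 3))     # input volume
w = rng.standard_normal((2, 3, 3, 3))  # two 3x3x3 filters
b = rng.standard_normal(2)             # one bias per filter

out = np.zeros((4, 4, 2))
for k in range(2):                     # loop over filters
    for i in range(4):                 # output rows
        for j in range(4):             # output cols
            # element-wise product of the 3x3x3 window with filter k
            out[i, j, k] = np.sum(x[i:i+3, j:j+3, :] * w[k]) + b[k]

print(out.shape)  # (4, 4, 2)
```

So by that counting, the depth of the output should equal the number of filters (2 here), which is why I think 4 x 4 x 4 was a slip of the tongue.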
Thanks