EfficientNet video

When referring to the width of a NN in this video, does this mean the average/max number of channels? Or does it mean N_h or N_w?

Thanks in advance :slight_smile: :smiley:

Normally the “width” of a NN is the number of neurons in a given layer. But for ConvNets, that’s a bit harder to understand. Can you give a time offset into that video at which Prof Ng is discussing the point that you are asking about? It would help me to watch it again to make sure I’m getting what you’re asking about.

Thank you for your reply :slight_smile:

https://www.coursera.org/learn/convolutional-neural-networks/lecture/ZmOWP/efficientnet at time 1min07seconds.

I guess "width" here is just a general idea of the maximum number of nodes you might have in any given layer of your NN?

Yes, that's exactly the meaning of width that Prof Ng is discussing: the total number of neurons in the various layers of the net. In the case of the feed-forward layers that you typically have in the last few layers of a ConvNet, that's easy to see. In the case of a convolution layer, you have a bit more flexibility in how you allocate those neurons: the total number is h * w * c, of course. So you can always grow or shrink the number of neurons by increasing or decreasing the number of output channels at a given layer (c is a hyperparameter, meaning a value that you simply choose at each layer). Or you can add pooling layers to decrease the number of neurons, or use stride values larger than 1 with "valid" padding. If you want to keep the number of neurons high, use "same" padding and stride = 1.
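To make that concrete, here's a small sketch (my own illustration, not from the course notebooks) of the standard conv output-size formulas, showing how padding and stride change the h * w part of the neuron count:

```python
import math

def conv_output_size(h, w, kernel, stride, padding):
    """Spatial output dimensions of a conv layer.

    "same" padding pads so the output is ceil(input / stride);
    "valid" padding uses the usual (n - f) // s + 1 formula.
    """
    if padding == "same":
        return math.ceil(h / stride), math.ceil(w / stride)
    else:  # "valid"
        return (h - kernel) // stride + 1, (w - kernel) // stride + 1

# 32 x 32 input, 3 x 3 kernel:
# "same" padding, stride 1 preserves the spatial size,
# so the neuron count at that layer is just 32 * 32 * c.
print(conv_output_size(32, 32, 3, 1, "same"))   # (32, 32)

# "valid" padding with stride 2 shrinks it considerably.
print(conv_output_size(32, 32, 3, 2, "valid"))  # (15, 15)
```

So with the same number of output channels c, the second configuration has roughly a quarter of the neurons of the first, which is exactly the kind of trade-off you're tuning when you scale a network's "width."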