In the second week of the Convolutional Neural Networks course, Dr. Andrew Ng introduces the "bottleneck layer": a 1x1 convolution with 16 filters applied before the 5x5 convolution. But we could instead use 32 filters of size 1x1x192 on their own, which would give an output of the same shape (28x28x32). Used this way, the computation cost drops even further, to just 4.8 million multiplications. So the bottleneck layer here seems a little redundant. Could someone please explain this?
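Here is the rough count I have in mind (just a sketch, assuming the lecture's 28x28x192 input with "same" padding):

```python
# Rough multiplication counts for the lecture's 28x28x192 example
# (assuming "same" padding, so the spatial size stays 28x28).
H, W, C_in = 28, 28, 192

# Lecture's bottleneck: 1x1 conv to 16 channels, then 5x5 conv to 32.
cost_1x1_to_16 = H * W * 16 * (1 * 1 * C_in)   # ~2.4M
cost_5x5_to_32 = H * W * 32 * (5 * 5 * 16)     # ~10.0M
print(cost_1x1_to_16 + cost_5x5_to_32)         # 12443648, ~12.4M

# My alternative: a single 1x1 conv straight to 32 channels.
cost_1x1_to_32 = H * W * 32 * (1 * 1 * C_in)
print(cost_1x1_to_32)                          # 4816896, ~4.8M
```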
Welcome to the community.
Please do not forget the real objective: the 5x5 convolution step between the 2nd and 3rd boxes in the diagram.

The reason to put a bottleneck there is to reduce the computational cost of that 5x5 convolution.

If we just use a 1x1 convolution with 32 filters, we never get to apply the 5x5 convolution at all, and that convolution is the primary objective of this step.
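To make that concrete, here is a minimal sketch (assuming TensorFlow/Keras; the layer choices are illustrative). Both designs produce a 28x28x32 output, but only the bottleneck version actually computes a 5x5 spatial convolution; in the 1x1-only version, each output value sees just a single spatial position of the input:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 192))

# Lecture's bottleneck: shrink channels, THEN apply the 5x5 spatial filter.
x = tf.keras.layers.Conv2D(16, 1, padding="same", activation="relu")(inputs)
bottleneck_out = tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu")(x)

# The proposed shortcut: 1x1 straight to 32 channels.
# Same output shape, but no 5x5 receptive field -- each output pixel
# depends on only one spatial position of the input.
shortcut_out = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(inputs)

print(bottleneck_out.shape)  # (None, 28, 28, 32)
print(shortcut_out.shape)    # (None, 28, 28, 32)
```

So the output shapes match, but the two layers compute very different things.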
Yes, I just got it when I went through the lecture again. Thanks for your help!