GoogLeNet for image classification

Hello mentors,

I have a question about a part of the GoogLeNet (Inception network) architecture.

GoogLeNet contains 1x1 kernels that the researchers place before larger kernels, such as 5x5, to reduce the number of operations needed.

Can we consider these bottleneck 1x1 kernels a dimensionality reduction technique in image processing (since we are talking about dimensionality reduction this week)?

Thanks in advance,

Hi @naser97!
Welcome to Discourse and thanks for your question.

Generally, we cannot say that 1x1 kernels are a dimensionality reduction technique. If you define fewer filters in the 1x1 layer than the number of channels in the previous layer, the number of channels will shrink, which lets you save on computation in some networks. But you could just as easily increase the number of channels, or keep them the same.
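
To make the savings concrete, here is a minimal Keras sketch of the bottleneck idea, using illustrative numbers (a 28x28x192 volume, 16 bottleneck filters, 32 output filters) rather than the exact GoogLeNet configuration:

```python
import tensorflow as tf

# Hypothetical input volume: one 28x28 feature map stack with 192 channels
x = tf.random.normal((1, 28, 28, 192))

# Direct 5x5 convolution on the full 192-channel volume:
# roughly 28*28*32 output values * 5*5*192 multiplications each ~= 120M multiplications
conv5x5_direct = tf.keras.layers.Conv2D(filters=32, kernel_size=5, padding="same", activation="relu")

# Bottleneck version: a 1x1 convolution first shrinks 192 channels down to 16,
# then the 5x5 convolution runs on the smaller volume
# (~2.4M + ~10M multiplications, roughly a tenth of the direct version)
bottleneck = tf.keras.layers.Conv2D(filters=16, kernel_size=1, activation="relu")
conv5x5 = tf.keras.layers.Conv2D(filters=32, kernel_size=5, padding="same", activation="relu")

reduced = bottleneck(x)
y = conv5x5(reduced)
print(reduced.shape)  # (1, 28, 28, 16) -> channels shrunk by the 1x1 layer
print(y.shape)        # (1, 28, 28, 32)
```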

The following video explains it quite well and gives this explanation of the 1x1 layer: "a one by one convolutional layer is actually doing something pretty non-trivial; it adds non-linearity to your neural network and allows you to decrease, keep the same, or, if you want, increase the number of channels in your volumes."
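
As a quick illustration of that last point, the only thing that decides whether the channel count shrinks, stays the same, or grows is the number of filters you choose for the 1x1 layer (again a small sketch with made-up numbers):

```python
import tensorflow as tf

x = tf.random.normal((1, 28, 28, 192))  # hypothetical volume with 192 channels

# The same 1x1 convolution can shrink, preserve, or expand the channel dimension
for filters in (16, 192, 256):
    y = tf.keras.layers.Conv2D(filters=filters, kernel_size=1, activation="relu")(x)
    print(y.shape)  # (1, 28, 28, 16), then (1, 28, 28, 192), then (1, 28, 28, 256)
```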

The Inception network is also covered in the subsequent videos from the link above.

Hope this helps and Happy Learning!
Maarten