Inception Module's Lack of Non-Linearity

The inception module (from GoogLeNet) does not have a non-linear activation function, such as ReLU, applied at any point. And the inception network contains many of these modules repeated throughout the network.

I understood from one of the initial courses of this specialization that the non-linearity introduced by the activation functions is a major element in giving a neural network the capability to learn complicated mappings from x to y.

How did this lack of non-linearity not hurt the inception network's performance?

How do you know for certain that it doesn’t include a non-linear function?
Please provide some details.

Mr. Andrew showed us that the inception network is formed mainly by many repetitions of the inception module.

This is presented in slides 22 and 23 of C4_W2.pdf, and I do not see any activation function as part of the inception module (no ReLU, sigmoid, tanh, or any other activation function).

So, I suppose the non-linearity is introduced by some other mechanism that I could not grasp, since, according to what was explained in the course, non-linearity is necessary for a good mapping representation and GoogLeNet has good performance.

So, rephrasing my question: how is the non-linearity introduced in the inception module without using the traditional approach of applying activation functions for this purpose?

The non-linear function is built into each convolutional layer.

For reference, here is a slide from Week 1 where the convolution process is discussed in detail. This example uses ReLU activation.

Week 2 discusses applications of convolutions; their details are covered in Week 1.
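
If it helps to see it in code, here is a minimal sketch (assuming TensorFlow/Keras, which is only my choice of framework here, and with arbitrary layer sizes) of how the ReLU is specified as part of the convolutional layer itself rather than drawn as a separate block in the diagram:

```python
import tensorflow as tf

# Dummy input volume: (batch, height, width, channels)
x = tf.random.normal((1, 28, 28, 192))

# Convolution and ReLU combined in a single layer, as in the Week 1 conv examples
conv_relu = tf.keras.layers.Conv2D(64, kernel_size=1, padding="same", activation="relu")
y1 = conv_relu(x)

# The same structure written as two steps: a linear convolution,
# followed by the ReLU non-linearity applied separately
conv_only = tf.keras.layers.Conv2D(64, kernel_size=1, padding="same")
y2 = tf.nn.relu(conv_only(x))
```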

But the figure that you posted refers to the MobileNet case. My question is about GoogLeNet, which is presented in slides 22 and 23 of C4_W2.pdf.

Anything that uses a convolutional layer contains a non-linear layer: the activation is part of each convolution block, even when the architecture diagram does not draw it as a separate element.
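
To make that concrete for the GoogLeNet case, here is a simplified sketch of one inception module (again assuming Keras; the filter counts are only illustrative, not the exact GoogLeNet values). Every Conv2D in every branch carries its own ReLU, so the module applies non-linearities throughout, even though the slide only labels the convolution and pooling blocks:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1=64, f3_reduce=96, f3=128, f5_reduce=16, f5=32, fpool=32):
    """Simplified inception module; every convolution includes a ReLU."""
    # 1x1 convolution branch
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    # 1x1 bottleneck followed by a 3x3 convolution
    b2 = layers.Conv2D(f3_reduce, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b2)
    # 1x1 bottleneck followed by a 5x5 convolution
    b3 = layers.Conv2D(f5_reduce, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b3)
    # 3x3 max pooling followed by a 1x1 convolution
    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(fpool, 1, padding="same", activation="relu")(b4)
    # Concatenate the four branches along the channel dimension
    return layers.Concatenate(axis=-1)([b1, b2, b3, b4])

inputs = tf.keras.Input(shape=(28, 28, 192))
outputs = inception_module(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```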