Welcome to discourse and thanks for your question.
Generally, we cannot say that 1x1 kernels are a dimensionality reduction technique in themselves. If you define fewer filters in the 1x1 layer than there are channels in the previous layer, the number of channels will shrink, which lets you save on computation in some networks. But you could just as well increase the number of channels, or keep them the same.
The following video explains it quite well and gives this description of the 1x1 layer: "a one-by-one convolutional layer is actually doing something pretty non-trivial; it adds non-linearity to your neural network and allows you to decrease, keep the same, or, if you want, increase the number of channels in your volumes."
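To make that concrete, a 1x1 convolution applies the same linear map to every pixel's channel vector, so it is just a matrix multiply over the channel axis followed by a non-linearity. Here is a minimal numpy sketch (the shapes are made up for illustration) showing how 8 channels can be reduced to 3 while the spatial size stays the same:

```python
import numpy as np

# Hypothetical shapes: a 4x4 input volume with 8 channels,
# passed through a 1x1 layer with 3 filters.
H, W, C_in, C_out = 4, 4, 8, 3
x = np.random.randn(H, W, C_in)
weights = np.random.randn(C_in, C_out)  # one weight per (input, output) channel pair

# The 1x1 convolution: every pixel's channel vector is multiplied
# by the same (C_in x C_out) weight matrix.
y = x @ weights           # shape (4, 4, 3)
y = np.maximum(y, 0)      # ReLU provides the added non-linearity

print(y.shape)  # (4, 4, 3): spatial size unchanged, channels reduced from 8 to 3
```

Choosing `C_out` larger than `C_in` would instead increase the number of channels, which illustrates why the 1x1 layer is not inherently a reduction step.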
The subsequent videos from the link above also cover the inception network.