Difference between '1*1 convolution' and 'pointwise convolution'

Hello everyone. In the Week 2 course I learned about 1*1 convolution in Inception networks and pointwise convolution in MobileNets. But I didn't see any difference between them. Are they actually the same operation, just with different names in different convolutional neural networks? Many thanks in advance.

Yes, those are just two names for the same type of convolution.

Thank you for your reply

This is a very good question.

As Paul said, “1x1 convolution” and “point-wise convolution” are the same thing. Here is an overview of “point-wise convolution”.

There are m filters (kernels), each with the shape (1, 1, ch). A filter is element-wise multiplied with the input along the channel axis, and the products are summed, producing a single (1, 1, 1) output element.
Isn’t this similar to w^T x in a Dense layer? If we simply replace “filters” here with “units” in a fully connected layer, that’s the answer.
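To make that concrete, here is a small NumPy sketch (the shapes are hypothetical, not from the course) showing that a point-wise convolution is just the same channel dot product applied independently at every pixel:

```python
import numpy as np

# Hypothetical 4x4 feature map with 5 channels, and m = 3 filters of shape (1, 1, 5)
x = np.random.rand(4, 4, 5).astype('float32')   # H x W x ch input
w = np.random.rand(5, 3).astype('float32')      # channel-mixing weights, ch -> m

# point-wise convolution: apply the same channel dot product at every (h, w) position
pointwise = np.einsum('hwc,cm->hwm', x, w)      # output shape (4, 4, 3)

# the same computation as a Dense layer (w^T x) applied per pixel
dense = (x.reshape(-1, 5) @ w).reshape(4, 4, 3)

assert np.allclose(pointwise, dense)
```

The `einsum` path mixes channels without touching spatial positions, which is exactly what a 1x1 kernel does.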

So, another interesting aspect is:

a “1x1 convolution” is equal to a “point-wise convolution”, and also to a “fully connected layer” applied along the channel axis.

I do not want to confuse you, and you can forget this. But here is the proof.

  1. prepare 2 filters (= 2 units)

filter_1 = np.arange(5, dtype='float32').reshape(1, 1, 5)
filter_2 = np.arange(5, 10, dtype='float32').reshape(1, 1, 5)   # distinct values, so the two filters actually differ

  2. set the weights of the predefined filters (units) on the Conv2D and Dense layers.

conv = tf.keras.layers.Conv2D(filters=2, kernel_size=(1, 1)); conv.build(inputs.shape)
fc = tf.keras.layers.Dense(units=2); fc.build(inputs.shape)
kernel = np.stack([filter_1, filter_2], axis=-1)                # Conv2D kernel shape: (1, 1, ch, filters)
conv.set_weights([kernel, np.zeros(2)]); fc.set_weights([kernel.reshape(5, 2), np.zeros(2)])

  3. then, feed input data to both, and check the results.

ans_conv = conv(inputs)
ans_fc = fc(inputs)

As a result, we get the same outputs!
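For reference, the steps above can be collected into one self-contained sketch. The input shape (1, 4, 4, 5) and the use of zero biases are assumptions for illustration, not part of the original post:

```python
import numpy as np
import tensorflow as tf

# Hypothetical input: a batch of one 4x4 feature map with 5 channels
inputs = tf.random.normal((1, 4, 4, 5))

# Two filters (= two units), each mixing the 5 input channels
filter_1 = np.arange(5, dtype='float32')
filter_2 = np.arange(5, 10, dtype='float32')
kernel = np.stack([filter_1, filter_2], axis=-1)   # shape (5, 2)

conv = tf.keras.layers.Conv2D(filters=2, kernel_size=(1, 1))
fc = tf.keras.layers.Dense(units=2)
conv.build(inputs.shape)
fc.build(inputs.shape)

# Conv2D stores its kernel as (kh, kw, in_ch, filters); Dense as (in_ch, units)
conv.set_weights([kernel.reshape(1, 1, 5, 2), np.zeros(2, dtype='float32')])
fc.set_weights([kernel, np.zeros(2, dtype='float32')])

ans_conv = conv(inputs)   # (1, 4, 4, 2)
ans_fc = fc(inputs)       # (1, 4, 4, 2)
assert np.allclose(ans_conv.numpy(), ans_fc.numpy(), atol=1e-5)
```

Dense applies its weights along the last axis of the input, which for a 4D tensor is exactly the per-pixel channel mixing that a 1x1 convolution performs.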
