# The Basics of ConvNets Quiz: #1 Filter matrix given

Hello!

I am confused by the first question of the quiz, which involves the following filter matrix:

I answered twice and both answers were unexpectedly wrong, even though it seemed an easy task at first sight.
Could you share any intuition on how to figure out what the filter does? I was pretty sure it performs a 45° rotation, given its symmetry, but that was not the right answer.

Any ideas?

Hi Viktoriia,

First look only at the largest positive and negative numbers in the filter. What do you think they do?

Then look at the other numbers. Do they support what the largest numbers are doing one way or another?

Good luck!


Hello @reinoudbosch !

Thanks for the quick reply. So we see the largest negative and positive numbers are in the middle of the matrix, and yes, in a certain way they are supported by the neighbouring ± ones. Also, the filter is vertically symmetric, so it will give 0 in the output wherever a patch of the input matrix is vertically symmetric.
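To check that cancellation claim numerically, here is a minimal sketch. Since the quiz matrix is not reproduced here, I use a hypothetical filter with the symmetry described above (its right column is the negative of its left column) and a patch that is mirror-symmetric about its vertical axis:

```python
import numpy as np

# Hypothetical 3x3 filter: right column is the negative of the left.
# (This is NOT the quiz filter, just a stand-in with the same symmetry.)
f = np.array([[1, 0, -1],
              [2, 0, -2],
              [1, 0, -1]])

# A 3x3 patch that is symmetric about its vertical axis.
patch = np.array([[5, 9, 5],
                  [3, 7, 3],
                  [8, 2, 8]])

# Element-wise multiply and sum = one value of the convolution output.
out = np.sum(f * patch)
print(out)  # -> 0: the left and right contributions cancel exactly
```

So on any vertically symmetric region the response really is zero, which is why such filters only "fire" where the left and right sides of a patch differ.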

I calculated several examples to gain some intuition. Here they are:

• The first example, like the one from Andrew’s video: a vertical filter sensitive to any kind of vertical “gradient”.
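For anyone who wants to reproduce this by hand, here is a small sketch of that first example. I am assuming the well-known setup from the lecture (a 6×6 image with a bright left half and dark right half, and the 1/0/−1 vertical edge filter), since my screenshots are not shown here:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation, as done in the course (no kernel flip)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# 6x6 image from the lecture: bright left half, dark right half.
image = np.array([[10, 10, 10, 0, 0, 0]] * 6, dtype=float)

# The vertical edge detector from the video.
vertical = np.array([[1, 0, -1],
                     [1, 0, -1],
                     [1, 0, -1]], dtype=float)

result = conv2d(image, vertical)
print(result)
# Each output row is [0, 30, 30, 0]: a bright band in the middle
# columns, marking the position of the vertical edge.
```

The zeros in the first and last output columns show that no vertical edge occurs when the filter sits entirely inside a uniform region.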

Then I looked at the filter

• from the quiz, applied to the same matrix:

and we see it is clearly sensitive to vertical edges, but gives more information about the edge in terms of contrast, maybe?

• Then I changed the matrix slightly so that it also has horizontal edges

and here an analogy with a gradient came to my mind: the filter kind of feels the edge and whether it transitions from positive to negative or vice versa…
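The gradient intuition is easy to verify on two tiny patches. A sketch (again using the 1/0/−1 vertical filter from the video, not the quiz filter): the sign of the response flips with the direction of the transition, exactly like the sign of a derivative:

```python
import numpy as np

# Vertical edge filter from the lecture.
vertical = np.array([[1, 0, -1]] * 3, dtype=float)

# Two patches with the same edge strength but opposite directions.
bright_to_dark = np.array([[10, 10, 0]] * 3, dtype=float)  # 10 -> 0
dark_to_bright = np.array([[0, 10, 10]] * 3, dtype=float)  # 0 -> 10

resp_bd = np.sum(vertical * bright_to_dark)
resp_db = np.sum(vertical * dark_to_bright)
print(resp_bd)  # -> 30.0: positive response for bright-to-dark
print(resp_db)  # -> -30.0: same magnitude, opposite sign
```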

• Finally, I did the same with the filter from the quiz:

So both filters are sensitive to vertical edges, but the second looks more precise and more sensitive to the contrast, I guess.
Any further thoughts? What about the horizontal symmetry of the absolute values? What does it show?
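To make the comparison concrete: since the quiz matrix is not reproduced here, I use a hypothetical Sobel-style filter as a stand-in for a center-weighted variant. On the same edge, it responds more strongly because the middle row counts double:

```python
import numpy as np

# Plain vertical edge filter vs. a hypothetical center-weighted one.
# (The weighted filter is an illustrative stand-in, NOT the quiz answer.)
plain = np.array([[1, 0, -1]] * 3, dtype=float)
weighted = np.array([[1, 0, -1],
                     [2, 0, -2],
                     [1, 0, -1]], dtype=float)

# A single bright-to-dark patch.
edge = np.array([[10, 10, 0]] * 3, dtype=float)

r_plain = np.sum(plain * edge)
r_weighted = np.sum(weighted * edge)
print(r_plain)     # -> 30.0
print(r_weighted)  # -> 40.0: the center row contributes twice as much
```

The extra weight on the central row emphasizes the pixel directly under the filter's center while still averaging over its vertical neighbours, which is one way to read "more sensitive to contrast".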

Hi Viktoriia,

My reading is as follows. In this filter, the largest values are clustered in the center, so the focus of the resulting activation lies on what happens in the central pixels. The values around the central pixels serve to broaden this activation in a limited way to neighboring elements in the resulting output matrix. Why this would make sense in practice depends on the training data used and the architecture of the neural network, as the actual values in filters are calibrated on the basis of the data that is run through the network. Maybe some activation of neighboring elements supports the propagation of relevant activations through the network.

Note that your third and fourth calculations are not fully accurate, which may distract somewhat from what the filter does. Note also that your fourth calculation shows only zeros in the first and last columns. This means that when the filter is aligned with either the first or the last column, the feature that is to be extracted by the filter does not occur for any of the rows of the picture. This provides a clear indication of which feature is extracted by the filter. The intermediary values in your examples result from the feature occurring in opposite directions.

Does this make sense to you?

Hello Reinoud !

Thanks for sharing your point of view! I agree that the resulting activation depends mostly on the values in the center, and I also agree that it depends, of course, on the architecture of the NN.
You are completely right: I made three small typos in the last two resulting matrices while typing them.

However, in my humble opinion, the most important fact is that this filter is vertically symmetric and the values on the right are the negatives of those on the left. After performing the calculations by hand, it became clear that it is a `(deleted)` filter.

Thanks for the cooperation!

Hi Viktoriia,