In the `Example: Recognizing Images` video, how are the neurons looking for specific 'images'?

In the Neural Networks Intuition video "Recognizing Images", I’m confused by the images Professor Ng is showing. He said that when you peer into individual neurons, you see the images below. However, my understanding is that each neuron outputs a number into a vector a, not an image, so I’m confused about where these images are coming from.

My understanding: a vector of 1,000,000 items (X) is fed into each neuron of layer 1; layer 1 then outputs a vector a (containing one number for each neuron of layer 1), which is fed into each neuron in layer 2, and so on.
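
In code, my mental model looks something like this (the layer sizes, random weights, and sigmoid activation below are just placeholders for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.random.rand(1_000_000)            # the input vector X (e.g. a flattened image)

# Layer 1: 10 neurons (a made-up size), each of which sees all 1,000,000 inputs.
W1 = np.random.randn(10, 1_000_000) * 0.01
b1 = np.zeros(10)
a1 = sigmoid(W1 @ x + b1)                # one number per layer-1 neuron

# Layer 2: 5 neurons (also made up), each of which sees all of a1.
W2 = np.random.randn(5, 10) * 0.01
b2 = np.zeros(5)
a2 = sigmoid(W2 @ a1 + b2)

print(a1.shape)   # (10,) -> the vector a output by layer 1
print(a2.shape)   # (5,)  -> the vector a output by layer 2
```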

Hi @YodaKenobi

In this example the professor means that the first layers of the NN detect the edges of an image, and deeper layers detect shapes or more complex features, such as a nose or eyes. If you are asking where these images come from, they come from the training set, since you feed the NN a batch of training example images. If you feed the NN a single training example, the output of each layer is a vector a. You can convert such a vector back into an image, but you may not get results that look exactly like the ones in the slides. There are also ways to display the output of each layer when you feed the NN images, but they are more involved.

Note that if you feed the NN a batch of training images, the output of each layer is a matrix, not a vector.
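
Here is a minimal NumPy sketch of that point; the 10 neurons and the batch of 4 examples are made-up sizes, just to show the shapes:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Assumed sizes for the demo: 10 neurons in layer 1, inputs of length 1,000,000.
W1 = np.random.randn(10, 1_000_000) * 0.01
b1 = np.zeros(10)

x_single = np.random.rand(1_000_000)        # one training example
a_single = sigmoid(W1 @ x_single + b1)
print(a_single.shape)                       # (10,)   -> a vector

X_batch = np.random.rand(4, 1_000_000)      # a small batch of 4 examples
A_batch = sigmoid(X_batch @ W1.T + b1)
print(A_batch.shape)                        # (4, 10) -> a matrix, one row per example
```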

Thanks!
Abdelrahman


Andrew is just giving an intuition about what the layers of a CNN might be detecting. This is to help you understand the concept of how an NN works.

It’s not a specific example or implementation.


Hello @AbdElRhaman_Fakhry, thanks for the response! Some follow-up questions, since I’m still a bit confused by your statement “…you can convert such a vector back into an image”:

  1. Say an image is passed in so vector x has 1,000,000 items
  2. It is input to each neuron in layer 1. Let’s say there are 10 neurons
  3. In layer 1, an activation vector a1 is output, containing 10 numbers

How are those 10 numbers turned into an image?

Thank you!

Hi @YodaKenobi

I said that it wouldn’t be easy to turn an activation into an image. For example, there is a technique called activation mapping that gives you a good intuition about what the output of a neuron could be and what you can learn and infer from these layers.

If you want, take a look at the last cell in this notebook: Rice classification 🍚 | Kaggle
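
If you want to see the mechanics outside that notebook, here is a minimal Keras sketch of displaying the output of one layer. The tiny untrained model and the random "image" are only placeholders; with your own trained model you would pass a real image, and the early layers would typically show edge-like responses:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# A toy, untrained CNN just to show the mechanics; all sizes here are made up.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu", name="conv1"),
    tf.keras.layers.Conv2D(16, 3, activation="relu", name="conv2"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Helper model whose output is the activation of one intermediate layer.
viewer = tf.keras.Model(inputs=model.input,
                        outputs=model.get_layer("conv1").output)

img = np.random.rand(1, 64, 64, 3).astype("float32")   # stand-in for a real image
activations = viewer.predict(img)                       # shape (1, 62, 62, 8)

# Display each of the 8 filter responses as a small grayscale image.
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i, ax in enumerate(axes):
    ax.imshow(activations[0, :, :, i], cmap="gray")
    ax.axis("off")
plt.show()
```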

Thanks!
Abdelrahman