Outputs of the first CNN layers look like the cat image but the last ones don't; shouldn't it be the inverse?

In the first Colab of W1 of the conv net course, "Ungraded Lab using more sophisticated images…", we notice that the output of the first conv layer still looks like a cat, while the output of the last conv layer does not. Why is that?

Because, in theory:

The outputs of the first layers in a Convolutional Neural Network (CNN) capture simple, local features such as edges, corners, and basic textures. Because these early layers operate at (nearly) the full resolution of the input, their activation maps preserve the overall spatial layout, so to the human eye they can still look like filtered versions of the original cat image.

As the data passes through deeper layers, the network learns to combine and transform these low-level features into more complex and abstract representations, and pooling shrinks the spatial dimensions along the way. The final layers (typically followed by fully connected layers) produce high-level features that are used for making predictions or classifications, and these no longer resemble the input image.
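You can see this for yourself by extracting the intermediate activations, which is essentially what the lab's visualization cell does. Here is a minimal, self-contained sketch (the toy architecture and names like `activation_model` are mine, not the lab's): a model is built whose outputs are every layer's output, so one `predict` call returns all the intermediate feature maps, and their spatial sizes shrink layer by layer.

```python
import numpy as np
import tensorflow as tf

# Toy CNN: two conv blocks (architecture chosen for illustration only)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
])

# A model that returns the output of EVERY layer, not just the last one
activation_model = tf.keras.Model(
    inputs=model.input,
    outputs=[layer.output for layer in model.layers],
)

# One fake "image"; in the lab this would be a real cat/dog picture
acts = activation_model.predict(np.random.rand(1, 32, 32, 3))

# Spatial size shrinks as we go deeper: 30x30 -> 15x15 -> 13x13 -> 6x6
for a in acts:
    print(a.shape)
```

Plotting each channel of `acts[0]` gives images close to the input; the deeper maps are small, blotchy grids of abstract features.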


Well, the conv model has to take an image and give a classification, so the input is an image from which features, first low-level and then high-level, are learned step by step. The image itself is gradually transformed into numbers, normally by compressing its spatial dimensions, so that by the time it reaches the final layer it yields a single output.

And yes, in the earlier layers, because the image has not yet been through many convolution layers, it is still similar to the original!
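The "compression" point above can be shown without any deep learning library. This is a minimal NumPy sketch (the helper names `conv2d` and `max_pool` are mine): repeatedly applying a 3x3 edge filter followed by 2x2 max pooling shrinks a 64x64 image to 6x6 after just three blocks, so a deep feature map simply has too little spatial detail left to look like the input.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation on a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """2x2 max pooling; trims odd rows/cols so the image tiles evenly."""
    h, w = img.shape
    h, w = h - h % size, w - w % size
    return img[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A classic edge-detection kernel, like what early conv layers tend to learn
edge = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=float)

x = np.random.rand(64, 64)  # stand-in for a grayscale cat photo
for block in range(3):
    x = max_pool(conv2d(x, edge))
    print(f"after block {block + 1}: {x.shape}")
# Spatial size: 64x64 -> 31x31 -> 14x14 -> 6x6
```

After block 1 the map is still a recognizable edge sketch of the input; by block 3 only 36 numbers remain, which is why the last layer's output cannot look like a cat.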
