In this course we learnt how a neural network identifies a 2D image by comparing lines (edges) at every pixel during the initial layers. But in actual implementations I have seen that we only compare a few attributes of the image, e.g. during facial recognition we only compare the nose, eyes, eyebrows, etc. Why is that?
In a neural network, the first layer deals with edges, and the next layers put those edges together, focusing on progressively more complex parts of the picture, and so on. So in an actual implementation, the attributes of the image we compare (e.g. the nose, eyes, and eyebrows in facial recognition) are really groups of edges that the later layers have assembled. Even when we implement a small network without a CNN, what it effectively compares are these edge-based features (nose, eyes, eyebrows). See the small sketch below.
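To make the "edges first" idea concrete, here is a minimal sketch (my own illustration, not code from the course; the `conv2d` helper and the hand-crafted filter are assumptions for demonstration). It convolves a tiny grayscale image with a vertical-edge filter, which is the kind of pattern a first layer often ends up learning on its own.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the operation a convolutional layer applies."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# 6x6 image: bright left half, dark right half -> a vertical edge in the middle
image = np.hstack([np.full((6, 3), 10.0), np.zeros((6, 3))])

# Hand-crafted vertical-edge filter; a trained first layer learns similar ones.
vertical_edge = np.array([[1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0]])

edge_map = conv2d(image, vertical_edge)
print(edge_map)  # large values only where the vertical edge is
```

Deeper layers then combine many such edge maps into detectors for larger structures (an eye, a nose), which is why the features we talk about at implementation time sound like facial parts rather than individual pixels.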
I hope that answers your question.
Please feel free to ask any other questions.
Abdelrahman
Thank you! That cleared up my query.