Hello,
So, I’ve done the exercises for the Week 4 assignment, gone through the algorithm, and got the cat classifier to work successfully.
But… even though the cat classifier classifies the images correctly most of the time, I don’t understand what magic the neural network is performing in front of me.
In the assignment, we do an image-to-vector conversion, where we essentially break the picture down into 64 (image width) x 64 (image height) x 3 (RGB) values. This becomes a single column holding the pixel intensities from 0 to 255. The graph shows the pixels laid out so that all the red pixels come first, then the green, then the blue.
At this point, we’ve deconstructed the image into a single column of color numbers.
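To make that step concrete, here’s roughly what the flattening looks like for a single image in numpy (the image array here is made up for illustration):

```python
import numpy as np

# A made-up 64 x 64 RGB image with pixel values 0-255
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Flatten into one column of shape (64 * 64 * 3, 1) = (12288, 1)
column = image.reshape(-1, 1)

# Scale pixel values from [0, 255] down to [0, 1] before feeding the network
column = column / 255.0

print(column.shape)  # (12288, 1)
```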
I know that in the lecture, Andrew mentioned that neural networks look at an image and try to piece it together from small strokes up to bigger pieces like an eye or a mouth.
- How is it that, if we’ve deconstructed the image into a column of numbers, the neural network can still piece together the strokes and lines?
- How can we visualize what the neural network has learned a cat looks like? We can hypothesize that it’s learned to identify pointy ears, or whiskers, or round eyes. But is there a way we can visualize this?
Hi, Irene.
These are great questions worth discussing! I totally agree that it seems like there is some magic going on here. I’m not claiming to be able to give complete answers to your questions, but I can suggest some resources to explore further.
For starters, note that the picture they show of how the image is unrolled is actually not how we are doing it. If you dig into the details, the method of “flattening” that they give us unrolls the pixels such that the R, G and B values for a given position in the image are all adjacent in the array. Here’s a thread which digs into the details of that. If you read all the way through the thread, it later discusses how you could do the “unrolling” the other way (all the red pixels first, followed by all the green pixels and then all the blue). And the interesting thing is that the algorithm can still learn to detect the patterns in the images with either style of unrolling. But it is crucial that you are consistent in the method you use: if you mix the two types of unrolling, you get garbage and nothing works. You can actually run the experiments and prove to yourself that it works equally well either way, as long as you are consistent.
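If you want to see the difference between the two orderings concretely, here’s a small numpy sketch (the image array and variable names are mine, not the assignment’s):

```python
import numpy as np

# A made-up 64 x 64 RGB image, just to compare the two unrolling styles
img = np.arange(64 * 64 * 3).reshape(64, 64, 3)

# Style 1 (what the assignment's flatten produces): the R, G and B values
# for each pixel position end up adjacent in the flattened array
interleaved = img.reshape(-1)
print(interleaved[:6])  # R, G, B of pixel 0, then R, G, B of pixel 1

# Style 2 (what the course picture shows): all the red values first,
# then all the green, then all the blue
planar = img.transpose(2, 0, 1).reshape(-1)
print(planar[:3])  # the first three red values
```

Either ordering can be learned from, as long as every image is unrolled the same way.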
It does seem surprising and counterintuitive that the unrolling does not destroy the algorithm’s ability to learn to recognize the geometric patterns in the images. At one level, we just have to believe it from the results, but it is worth thinking about whether you could construct some experiments to figure out what is happening in the internal layers of the network. In Course 4 of DLS, which covers Convolutional Neural Networks, Prof Ng will show us some really interesting work where researchers did exactly that. The lecture is called “What are Deep ConvNets Learning?” and the video is available on YouTube. (It’s in Week 4 of DLS Course 4.) Even if you haven’t yet learned about ConvNets, you will get the idea of what he’s describing and some intuition from that lecture. Of course ConvNets are more powerful than the networks we are learning about here in Course 1, because they can deal with images in their original spatial form and work by stepping smaller “filters” across and down the images. So with ConvNets, it’s a bit more intuitive to see why they can detect the same pattern at any position in an image.
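To preview the filter idea, here’s a toy sketch in plain numpy of a single filter stepping across and down a grayscale image (just an illustration, not the Course 4 implementation):

```python
import numpy as np

def slide_filter(image, filt):
    """Step a small filter across and down a 2D image (valid cross-correlation)."""
    ih, iw = image.shape
    fh, fw = filt.shape
    out = np.zeros((ih - fh + 1, iw - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same filter weights are applied at every position, which is
            # why the same pattern can be detected anywhere in the image
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * filt)
    return out

# A classic vertical-edge filter, as a toy example
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

image = np.random.rand(64, 64)  # made-up grayscale image
response = slide_filter(image, edge_filter)
print(response.shape)  # (62, 62): one response per filter position
```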
But even without pre-knowledge of ConvNets, the ideas in the lecture about how you could instrument the neurons in the hidden layers of a network, then feed images through and get some idea of which patterns trigger the largest response from a given neuron, are interesting. You can think about how that could be applied to the simpler fully connected networks we are studying here. I have not done any searching to find out if there are any papers about that.
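Just to sketch what that instrumenting might look like for a fully connected layer (everything here is a stand-in, not the assignment’s trained parameters):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

n_x, m = 12288, 200                    # input size, number of probe images
X = np.random.rand(n_x, m)             # made-up probe images, one per column
W1 = np.random.randn(20, n_x) * 0.01   # stand-in for trained layer-1 weights
b1 = np.zeros((20, 1))

A1 = relu(W1 @ X + b1)                 # hidden-layer activations, shape (20, m)

unit = 7                               # the hidden neuron we want to inspect
top_images = np.argsort(A1[unit])[::-1][:5]
print(top_images)  # indices of the 5 probe images that excite this neuron most
```

With a trained network, you would then look at those top images (or the patches within them) to guess what the neuron has tuned itself to respond to.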
Not sure if this is a conventional or expert-approved concept, but when I think of neural nets learning, I focus on the loss function and optimizer and what they do to the parameters. Because the bottom line, at least the way I think about it, is that neural nets are NOT learning strokes and lines, nor ears and legs and tails. They are merely minimizing a mathematical loss, iteratively adjusting their parameters in order to do so.
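A stripped-down toy example of that view, with one parameter and made-up numbers:

```python
# One parameter, a squared loss, and plain gradient descent. The "network"
# never sees strokes or ears; it just follows the gradient of the loss downhill.
w = 5.0            # made-up initial parameter
target = 2.0       # the value that minimizes the loss
learning_rate = 0.1

for step in range(50):
    loss = (w - target) ** 2     # loss as a function of the parameter
    grad = 2 * (w - target)      # dLoss/dw
    w -= learning_rate * grad    # iterative parameter update

print(w)  # close to 2.0: the loss was minimized; nothing was "recognized"
```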
Now it turns out that after parameters are trained, you can examine what patches of the input signal activated a particular node. And those pieces of the input look like edges and lines at the front of the network, and increasingly like pieces of things we recognize the deeper you go, until at the end, what activates a network to output a 1 from a cat classifier is, wait for it, an image of a cat. So it looks to us like the network ‘learned’ to see a cat, and we can use it as if that were true. But I don’t think it really is. Hope this doesn’t make things worse, and I know I didn’t address the math of the matrix flattening etc.
Welcome thoughts and guidance if I missed the mark.