I assume you are talking about the lecture “What are Deep Convnets Learning?” in Week 4.
-
No, this material does not really figure in any of the exercises. The closest we get is the Art Generation (Neural Style Transfer) exercise, which uses the outputs of selected internal layers of a pretrained network.
-
The image patches come from whatever the input data set is; for this purpose it doesn't really matter whether it's training or test data. The question is which patterns in an input image trigger the largest response from some particular neuron in the hidden layers of the network. The work Prof Ng describes here involved putting instrumentation on individual neurons in the hidden layers. You then feed a bunch of images through and track the values generated by that neuron. Whenever the value produced by the current image patch ranks among the "top 9" outputs seen so far, you record (save) that image patch.
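Just to make the bookkeeping concrete, here is a toy sketch of that "keep the top 9" logic (this is my own illustration, not code from the course or the paper — the function name is invented, and I'm simulating the neuron's activations with random numbers instead of hooking into a real network's forward pass):

```python
import heapq
import random

def track_top_patches(activation_stream, k=9):
    """Keep the k image patches that produced the largest activation
    values for one neuron, using a min-heap of size k so we can cheaply
    test whether a new value makes the current 'top k'."""
    top = []  # min-heap of (activation_value, patch_id)
    for patch_id, value in activation_stream:
        if len(top) < k:
            heapq.heappush(top, (value, patch_id))
        elif value > top[0][0]:  # beats the smallest of the current top k
            heapq.heapreplace(top, (value, patch_id))
    # Return in descending order of activation.
    return sorted(top, reverse=True)

# Stand-in for real data: (patch_id, neuron_output) pairs.
random.seed(0)
stream = [(i, random.random()) for i in range(1000)]
top9 = track_top_patches(stream)
```

In a real experiment, `patch_id` would identify the region of the input image inside the neuron's receptive field, and `value` would come from the network's forward pass rather than a random generator.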
-
Everything here refers to the output of one particular neuron, but you could apply the same idea to a number of different neurons in parallel, if you built the instrumentation flexibly enough to support that.
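To show what "flexible enough" might look like, here is a small sketch that keeps a separate top-k list per neuron (again my own invented illustration, not anything from the course — the class name and interface are hypothetical):

```python
import heapq

class MultiNeuronTracker:
    """Track the top-k activating patches for several neurons at once,
    one min-heap of (value, patch_id) per neuron."""

    def __init__(self, neuron_ids, k=9):
        self.k = k
        self.tops = {nid: [] for nid in neuron_ids}

    def observe(self, patch_id, activations):
        """activations: {neuron_id: value} for the current patch."""
        for nid, value in activations.items():
            heap = self.tops[nid]
            if len(heap) < self.k:
                heapq.heappush(heap, (value, patch_id))
            elif value > heap[0][0]:
                heapq.heapreplace(heap, (value, patch_id))

    def top_patches(self, nid):
        """Top patches for one neuron, highest activation first."""
        return sorted(self.tops[nid], reverse=True)
```

Each call to `observe` would correspond to one forward pass, with the instrumentation reading off the values of all the monitored neurons at once.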
If you want to know more about this, perhaps it’s worth reading the actual research paper that Prof Ng is describing to us here. The link is given in the course materials.