All right, this may be silly, but in the lectures on edge detection Prof. Ng repeatedly says that it makes sense to refer to the result of a convolution as an image (e.g., at 6:55 in this lecture). Yet multiple examples end up with negative pixel values. How should we interpret those?
Please update your post with a link to the lecture and the timestamp.
The input to the first conv layer is frequently the pixels of an image, but the output of the conv layer is just numbers that don’t map to “colors” anymore. I think he’s just speaking loosely here: the output has a similar geometric shape, but with fewer values. You could probably get an image rendering function to display it, but you might have to renormalize the values to the range [0, 1], or to integers in 0 to 255, for the rendering to work.
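Here is a minimal sketch of that renormalization, assuming `scipy.signal.correlate2d` for the sliding-window operation (which matches the course’s “convolution”, i.e. cross-correlation without flipping the kernel). The 6x6 image and the vertical edge filter are modeled on the lecture’s example, with the image flipped dark-to-light so the output actually goes negative:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import correlate2d

# 6x6 toy image: dark left half, bright right half (the lecture's example
# flipped, so the edge response comes out negative)
image = np.array([
    [0, 0, 0, 10, 10, 10],
    [0, 0, 0, 10, 10, 10],
    [0, 0, 0, 10, 10, 10],
    [0, 0, 0, 10, 10, 10],
    [0, 0, 0, 10, 10, 10],
    [0, 0, 0, 10, 10, 10],
], dtype=float)

# Vertical edge-detection filter from the lecture
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

# "Convolution" as the course defines it (cross-correlation, no kernel flip)
output = correlate2d(image, kernel, mode="valid")
print(output)  # 4x4 result; the two middle columns are -30

# Min-max renormalize to [0, 1] so an image renderer can display it
lo, hi = output.min(), output.max()
rendered = (output - lo) / (hi - lo) if hi > lo else np.zeros_like(output)

plt.imshow(rendered, cmap="gray", vmin=0, vmax=1)
plt.show()
```

Min-max scaling is just one choice; taking the absolute value of the output is another common way to visualize edge strength, since it treats light-to-dark and dark-to-light transitions the same.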
BTW, your link just points to the home page of Course 3. I assume you really meant this Edge Detection Example lecture.
Thank you @paulinpaloalto, that makes sense. (Sorry, I was trying to link from my mobile).