I do not see how the encoder visualization layer will help us understand anything.
The weights of this layer are randomly initialized, I guess, and they won't be updated during training, so I think what we get from this layer is just a random image?
Why do you feel the weights are not useful? They are the entire purpose of training.
Yes, but I do not think these weights will ever receive an update, since they have no gradients.
They are not connected to the output.
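For example, something like this minimal sketch (assuming PyTorch, with `side` as a stand-in for the visualization layer) should show the gradients staying empty:

```python
import torch
import torch.nn as nn

# A conv layer whose output feeds the loss, and a "side" layer whose
# output is computed but never used (a stand-in for the visualization layer).
main = nn.Conv2d(3, 8, kernel_size=3, padding=1)
side = nn.Conv2d(3, 8, kernel_size=3, padding=1)

x = torch.randn(1, 3, 32, 32)
out = main(x)   # connected to the loss
_ = side(x)     # computed, but disconnected from the loss

loss = out.mean()
loss.backward()

print(main.weight.grad is None)  # False: this layer would get an update
print(side.weight.grad is None)  # True: no gradient, weights stay random
```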
It seems unlikely the model would include weights that are used but cannot be trained and stay randomly initialized. So that is probably not what is happening.
Perhaps a mentor for this course can say more.
The encoder visualization outputs the mapping of an image (or, you could say, a matrix) as it passes through the conv2d convolutions. The conv2d layer has weights and produces activations when applied to the image, so the mapping for one particular image is different from that of any other.
There is no loss function on the visualization; it is just there to provide a visualization.
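In other words, I believe it is doing something roughly like this (a minimal sketch, assuming PyTorch and matplotlib; the layer sizes and the input image are placeholders):

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Pass an image through a Conv2d and plot the resulting feature maps.
# No loss is involved; this only shows the mapping for this one image.
conv = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3, padding=1)

image = torch.randn(1, 1, 28, 28)  # placeholder for a real input image
with torch.no_grad():
    feature_maps = conv(image)     # shape: (1, 4, 28, 28)

fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for i, ax in enumerate(axes):
    ax.imshow(feature_maps[0, i].numpy(), cmap="gray")
    ax.set_title(f"filter {i}")
    ax.axis("off")
plt.show()
```

A different image would produce different feature maps, even with the same weights.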
Yes, but my question is: what is the value of this visualization? It looks like an arbitrary convolution with random weights applied.
Can we learn anything from this “visualization”?
You may be able to see some patterns in the trained weights.
It is only to give an intuition about the process.
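For instance, you could plot the kernels of the first conv layer and look for structure such as edge detectors (a sketch; `conv1` here is an untrained placeholder for the encoder's actual first Conv2d):

```python
import torch.nn as nn
import matplotlib.pyplot as plt

# Plot the 3x3 kernels of a first conv layer. In the notebook, `conv1`
# would be the trained layer rather than a freshly initialized one.
conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)

weights = conv1.weight.detach().cpu()  # shape: (8, 1, 3, 3)
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i, ax in enumerate(axes):
    ax.imshow(weights[i, 0].numpy(), cmap="gray")
    ax.set_title(f"kernel {i}")
    ax.axis("off")
plt.show()
```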