Hi friend and mentor,
I have already finished the coding homework, but to be honest, getting the code to work doesn't mean I truly understood all of it. I have some questions below; please answer yes or no first for each one. Thank you for your time.
From this post, I copied the picture below:
This output can also be seen in my local run, as shown below:
Q1. In the section "Exercise 1 - compute_content_cost", we use [-1] in a_C = content_output[-1] and a_G = generated_output[-1] because we need the last layer in the list, which is the content layer (layer 5 in this case). Yes or no?
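To check my understanding of that indexing, here is a tiny sketch with dummy tensors standing in for the real activations (the list layout is my assumption based on the assignment):

import tensorflow as tf

# My assumption: the model returns one activation per selected layer,
# with the content layer ('block5_conv4') appended as the LAST element.
dummy_outputs = [tf.random.normal((1, 4, 4, 8)) for _ in range(6)]  # stand-ins for layers 0..5

a_C = dummy_outputs[-1]                 # [-1] -> only the last entry, i.e. the content layer
print(len(dummy_outputs), a_C.shape)    # 6 (1, 4, 4, 8)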
Q2. In the section "Exercise 4 - compute_style_cost", we have:
# Set a_S to be the hidden layer activation from the layer we have selected.
# The last element of the array contains the content layer image, which must not be used.
a_S = style_image_output[:-1]
# Set a_G to be the output of the chosen hidden layers.
# The last element of the list contains the content layer image which must not be used.
a_G = generated_image_output[:-1]
This [:-1] takes all the style layers (layers 0 to 4 in this case) but NOT the last layer (layer 5 here). Yes or no?
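Again, just to confirm my reading of the slice, a minimal sketch with the same assumed layout:

import tensorflow as tf

# Same assumed layout: indices 0 to 4 are the style layers, index 5 is the content layer.
dummy_outputs = [tf.random.normal((1, 4, 4, 8)) for _ in range(6)]

a_S = dummy_outputs[:-1]   # [:-1] -> elements 0..4; the last element (content layer) is dropped
print(len(a_S))            # 5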
Q3. The actual network doing the neural style transfer only has 6 layers in this case, like the black picture from my local run above. Yes or no? In other words, we loaded the pretrained VGG (22 layers), but only 6 of those are picked and used in this case.
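Here is roughly how I picture the model being built (the layer names are my reading of the assignment, so please correct me if this sketch is off):

import tensorflow as tf

# Load the pretrained VGG-19 without its classifier head; all of its layers stay frozen.
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False
print(len(vgg.layers))   # 22 Keras layers (input + conv + pooling)

# Only these 6 layers (5 style + 1 content) are actually read out; this list is my
# assumption of what the assignment selects.
picked = ['block1_conv1', 'block2_conv1', 'block3_conv1',
          'block4_conv1', 'block5_conv1', 'block5_conv4']
outputs = [vgg.get_layer(name).output for name in picked]

# The "working" model maps an image to just those 6 activations.
nst_model = tf.keras.Model(inputs=vgg.input, outputs=outputs)
print(len(nst_model(tf.random.uniform((1, 224, 224, 3)))))   # 6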
Q4. During training, we are NOT updating the parameters of the network like before. The network's parameters never change anymore; instead, we update the pixels of the generated image directly. Yes or no?
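This sketch is my mental model of Q4 (the cost function here is just a placeholder, not the assignment's real J):

import tensorflow as tf

# Sketch of what I think happens: VGG's weights are frozen, and the image
# pixels themselves are the trainable variable that the optimizer updates.
generated_image = tf.Variable(tf.random.uniform((1, 224, 224, 3)))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

def total_cost(image):
    # placeholder for J = alpha * J_content + beta * J_style from the assignment
    return tf.reduce_mean(tf.square(image))

with tf.GradientTape() as tape:
    J = total_cost(generated_image)
grad = tape.gradient(J, generated_image)              # gradient w.r.t. the pixels only
optimizer.apply_gradients([(grad, generated_image)])  # the pixels change; VGG never does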
Q5. This is not a yes-or-no question. As in the picture above, and as mentioned in the lecture, Prof. Ng said we pick roughly one middle layer for content, right? But 'block5_conv4' is at the very end of the network, right? I thought "middle" meant somewhere around block3_conv4.
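For reference, this little sketch is how I checked where 'block5_conv4' sits in the network (pretrained weights aren't needed just to read the layer names):

import tensorflow as tf

# List VGG-19's layers in order to see where 'block5_conv4' falls.
vgg = tf.keras.applications.VGG19(include_top=False, weights=None)
for i, layer in enumerate(vgg.layers):
    print(i, layer.name)   # 'block5_conv4' shows up at the very end, just before block5_pool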
Thanks for your time again!