I assumed that whenever we used the TensorFlow implementation, the backpropagation algorithm was taken care of. But in the NumPy implementation, there was no mention of the parameters being optimized with gradient descent. How is the prediction in this lab correct?
In this optional lab we copy the trained weights and biases from the previous TensorFlow lab. So we are just building the layers and running prediction (forward propagation) without gradient descent, i.e. without changing any weights or biases.
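To illustrate, here is a minimal sketch of what the NumPy version does: only forward propagation through dense layers using fixed weights. The weight values below are hypothetical placeholders, not the lab's actual numbers; in the lab they would come from the trained Keras model (e.g. via `model.get_weights()`).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(a_in, W, b):
    # One dense layer, forward pass only: matrix multiply, add bias,
    # apply activation. No gradients, no parameter updates.
    return sigmoid(a_in @ W + b)

# Hypothetical weights/biases standing in for the values copied from
# the trained TensorFlow model in the previous lab.
W1 = np.array([[1.0, -2.0],
               [3.0,  4.0]])   # shape (2 inputs, 2 units)
b1 = np.array([0.5, -0.5])
W2 = np.array([[2.0],
               [-1.0]])        # shape (2 inputs, 1 unit)
b2 = np.array([0.1])

x = np.array([[0.5, 1.0]])     # one example with 2 features
a1 = dense(x, W1, b1)          # hidden layer activations
prediction = dense(a1, W2, b2) # final output in (0, 1)
```

Because the parameters were already optimized during training in TensorFlow, simply reusing them like this reproduces the same predictions without running gradient descent again.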
Please feel free to ask any questions.
Thanks,
Abdelrahman
