Why isn't gradient descent for logistic regression implemented in the neural network built from scratch using NumPy?

I assumed that when we used the TensorFlow implementation, the backpropagation algorithm was taken care of for us. But in the NumPy implementation, there was no mention of the parameter values being optimized with gradient descent. How are the predictions in this lab correct?

Hi @Arisha_Prasain

In this optional lab we copy the trained weights and biases from the previous TensorFlow lab, so the NumPy code only builds the layers and runs a forward pass to predict, without using gradient descent (the weights and biases are never changed).
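In other words, the NumPy version is inference-only. A minimal sketch of that idea (the weight values and layer sizes here are made up for illustration; in the lab they are copied from the TensorFlow model trained earlier):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def my_dense(a_in, W, b):
    # One dense layer: a_out = sigmoid(a_in @ W + b).
    # No gradients are computed anywhere.
    return sigmoid(a_in @ W + b)

def my_sequential(x, W1, b1, W2, b2):
    # Forward pass only: the weights are used as-is, never updated.
    a1 = my_dense(x, W1, b1)
    a2 = my_dense(a1, W2, b2)
    return a2

# Hypothetical "pre-trained" parameters standing in for the values
# copied from the TensorFlow lab (2 inputs -> 3 hidden units -> 1 output).
W1 = np.array([[ 0.5, -1.2,  0.8],
               [-0.7,  0.3,  1.5]])
b1 = np.array([0.1, -0.4, 0.2])
W2 = np.array([[ 1.1], [-0.9], [ 0.6]])
b2 = np.array([-0.2])

x = np.array([1.0, 2.0])  # one example
prediction = my_sequential(x, W1, b1, W2, b2)
```

Because the parameters were already optimized during training in TensorFlow, this forward pass alone is enough to produce correct predictions.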
Please feel free to ask any questions.