Gradient descent in neural networks

In week 1, we're using TensorFlow and NumPy to implement simple neural networks.

When using TensorFlow, I'm assuming that TensorFlow does a lot of "magic" under the hood to compute the weights and biases.
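For example (just a rough sketch with made-up data and layer sizes, not actual lab code), everything seems to be hidden behind `compile()` and `fit()`:

```python
import numpy as np
import tensorflow as tf

# Made-up toy data, just for illustration
X = np.random.rand(200, 2).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 1.0).astype(np.float32).reshape(-1, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# compile() picks the cost function and optimizer; fit() runs the
# training loop, computes gradients, and updates all weights and biases.
model.compile(loss="binary_crossentropy", optimizer="adam")
model.fit(X, y, epochs=10, verbose=0)
```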

When the layers are manually implemented using NumPy, why aren't we computing the cost function and minimizing it with gradient descent for each neuron in each layer?


Hello @fouad,

I think training a model is just not the goal of the NumPy lab. Training a multi-layer neural network from scratch (without TensorFlow) is not covered in this specialization; it is covered in the Deep Learning Specialization.
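Just to give a flavour of what that from-scratch training would involve, here is a minimal sketch (my own toy example, not lab code) of gradient descent for a single sigmoid neuron with the logistic cost:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up toy data: m = 3 examples, n = 2 features
X = np.array([[0.5, 1.2], [1.0, 0.3], [0.2, 0.8]])
y = np.array([1.0, 0.0, 1.0])

w = np.zeros(2)
b = 0.0
alpha = 0.1  # learning rate

for _ in range(1000):
    a = sigmoid(X @ w + b)          # forward pass
    dw = X.T @ (a - y) / len(y)     # dJ/dw for the logistic cost
    db = np.mean(a - y)             # dJ/db
    w -= alpha * dw                 # gradient-descent updates
    b -= alpha * db
```

For a multi-layer network you would also need backpropagation to get the gradients for the hidden layers, which is exactly what the Deep Learning Specialization walks through.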

Cheers,
Raymond


What about the CoffeeRoastingNumPy lab: is gradient descent supposed to happen there, but the code doesn't show it because we are plugging in optimal w and b values? This is more of a query to make sure my understanding is correct.

I'm assuming that for a real NN, we would also have it run gradient descent and tell us the optimal w and b values.

Hi @Sahir_Karani,

No, that lab is not meant to demo any model training; as you said, it only applies trained weights.

For a real NN, yes, we would have to do gradient descent, and we usually do it with TensorFlow (not NumPy).
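If you are curious what that looks like when TensorFlow does it, here is a sketch (again a toy example with made-up data) of an explicit training loop using `tf.GradientTape`; `model.fit()` does essentially this for us:

```python
import tensorflow as tf

# Made-up toy data
X = tf.random.uniform((100, 2))
y = tf.cast(tf.reduce_sum(X, axis=1, keepdims=True) > 1.0, tf.float32)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(100):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(X))
    # TensorFlow computes the gradients automatically...
    grads = tape.gradient(loss, model.trainable_variables)
    # ...and the optimizer applies the gradient-descent update to w and b.
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```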

Cheers,
Raymond