Neural Networks: How do we calculate the parameters W and b of the network?

Hello, I have finished Week 1 and Week 2 of the Advanced Learning Algorithms course and it is very confusing. Throughout the whole course we were learning about neural networks and we were using given parameters W and b to calculate predictions, and we learned what the cost function and loss function of a NN are, but no one explained how we actually train it. How do we implement gradient descent for a neural network? There is a final video on back propagation, but it also uses given parameters W and b, as we did throughout the course.

Now I have moved on to Week 3, and its topic is how to apply machine learning! How can I apply a NN if I don't know how it is trained??? You could have provided at least one more video with, let's say, a 3-neuron example of implementing gradient descent for training…
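To be clear about what I am asking: I do understand the usual gradient descent update rule, which for every weight and bias is just

$$w := w - \alpha \, \frac{\partial J}{\partial w}, \qquad b := b - \alpha \, \frac{\partial J}{\partial b}$$

What nobody explained is how to actually compute those partial derivatives for all the layers of a network, which is what back propagation is supposed to give us.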
Very disappointing!!!
Is there any course on Coursera where they explain this in detail, and not just by using TensorFlow? Does anyone know?

The MLS course is a beginner-level presentation.

The Deep Learning Specialization (intermediate level) definitely goes into how to write code to train a NN. But it also uses tools like sklearn and TensorFlow, which handle the training process for you.
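In the meantime, here is a minimal from-scratch sketch of the idea, written in plain NumPy (this is not the course's code; it assumes a hypothetical 2-input, 3-hidden-neuron, 1-output network with sigmoid activations and binary cross-entropy cost, trained on toy XOR data):

```python
import numpy as np

# Toy dataset: XOR, 4 examples with 2 features each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # shape (4, 2)
y = np.array([[0], [1], [1], [0]], dtype=float)              # shape (4, 1)

rng = np.random.default_rng(0)
# Layer 1: 2 inputs -> 3 hidden units.  Layer 2: 3 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 3))
b1 = np.zeros((1, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = 1.0      # learning rate
m = X.shape[0]   # number of training examples

for epoch in range(10000):
    # ---- Forward propagation ----
    Z1 = X @ W1 + b1          # (4, 3)
    A1 = sigmoid(Z1)          # (4, 3)
    Z2 = A1 @ W2 + b2         # (4, 1)
    A2 = sigmoid(Z2)          # (4, 1) predicted probabilities

    # Binary cross-entropy cost, averaged over the examples.
    cost = -np.mean(y * np.log(A2) + (1 - y) * np.log(1 - A2))

    # ---- Back propagation (chain rule, layer by layer) ----
    dZ2 = A2 - y                           # sigmoid + BCE simplifies to this
    dW2 = A1.T @ dZ2 / m                   # (3, 1)
    db2 = dZ2.mean(axis=0, keepdims=True)  # (1, 1)
    dZ1 = (dZ2 @ W2.T) * A1 * (1 - A1)     # (4, 3)
    dW1 = X.T @ dZ1 / m                    # (2, 3)
    db1 = dZ1.mean(axis=0, keepdims=True)  # (1, 3)

    # ---- Gradient descent update: w := w - alpha * dJ/dw ----
    W1 -= alpha * dW1
    b1 -= alpha * db1
    W2 -= alpha * dW2
    b2 -= alpha * db2

    if epoch % 2000 == 0:
        print(f"epoch {epoch}: cost = {cost:.4f}")

print("final predictions:", A2.ravel().round(3))
```

Each iteration is just forward propagation, back propagation to get the partial derivatives of the cost with respect to each W and b, and then the same w := w - alpha * dJ/dw update you already know. TensorFlow automates exactly these steps.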

I’ll review the MLS course lectures to see whether you have missed anything important in this regard, and will reply again later.

Thank you! I will look into that course.