Regarding Gradient Descent Function

Hi Everyone, this is regarding MLS C1 W1.
I’m a newbie to Machine Learning. So, please bear with me if my query sounds dumb.
My question is: how does the gradient descent function, when returning the w_final and b_final values, return the values that minimize the cost? We run the loop num_iters times (10,000 in the example) and there's no exit condition, so how does the function return the minimizing values? Am I missing something here?

In this simple first exercise, there's no check that we've reached the minimum cost. It's actually not easy to do that in a simple way.

So we just run gradient descent for a fixed number of iterations. The number of iterations can be optimized by experimentation.
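To make the fixed-iteration idea concrete, here is a minimal sketch in plain Python of gradient descent for 1-D linear regression. The names `w`, `b`, `alpha`, and `num_iters` follow the assignment's conventions, but the function body and the toy data are illustrative, not the course's actual code:

```python
# Minimal sketch: gradient descent for 1-D linear regression with a
# fixed number of iterations and NO convergence check.
def gradient_descent(x, y, w, b, alpha, num_iters):
    m = len(x)
    for _ in range(num_iters):      # always runs exactly num_iters steps
        dj_dw = 0.0
        dj_db = 0.0
        for i in range(m):
            err = (w * x[i] + b) - y[i]   # prediction error for example i
            dj_dw += err * x[i]
            dj_db += err
        w -= alpha * dj_dw / m      # step both parameters downhill
        b -= alpha * dj_db / m
    return w, b                     # whatever w, b are after the last step

# Toy data on the line y = 2x + 1 (made up for illustration)
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
w_final, b_final = gradient_descent(x, y, 0.0, 0.0, 0.1, 10000)
```

The returned values are simply the parameters after the last iteration; they end up near the minimum only because, with a suitable learning rate, enough iterations get you very close to it.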

There are better methods ahead in later assignments.

@TMosh Oh okay got it!! Thanks

In addition to what @TMosh said,

in practice there are methods to detect when the weights are good enough and stop there. One of these, used in TensorFlow, is a callback function, where the model stops training after reaching a pre-determined accuracy. And there are many more ways to do it.
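The same "stop when a criterion is met" idea can be sketched without TensorFlow. Below is a hedged illustration (not the course's code, and not a TensorFlow callback): gradient descent that exits early once the cost stops improving by more than a small tolerance. The tolerance `tol` and the data are made-up values for demonstration:

```python
# Sketch: gradient descent with an early-stopping criterion based on
# how much the cost improved in the last step.
def cost(x, y, w, b):
    m = len(x)
    return sum(((w * xi + b) - yi) ** 2 for xi, yi in zip(x, y)) / (2 * m)

def gradient_descent_early_stop(x, y, w, b, alpha, max_iters, tol=1e-10):
    m = len(x)
    prev_cost = cost(x, y, w, b)
    for it in range(max_iters):
        dj_dw = sum(((w * xi + b) - yi) * xi for xi, yi in zip(x, y)) / m
        dj_db = sum(((w * xi + b) - yi) for xi, yi in zip(x, y)) / m
        w -= alpha * dj_dw
        b -= alpha * dj_db
        c = cost(x, y, w, b)
        if prev_cost - c < tol:     # cost barely moved: stop early
            return w, b, it + 1
        prev_cost = c
    return w, b, max_iters

# Toy data on the line y = 2x + 1 (made up for illustration)
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
w, b, iters_used = gradient_descent_early_stop(x, y, 0.0, 0.0, 0.1, 10000)
```

Here `iters_used` comes out far below the `max_iters` budget, which is the whole point: the loop quits as soon as further iterations stop paying off, rather than always running the full count.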

BTW,

All your questions are warmly welcomed and there is no way your questions could be considered dumb, so please feel free to ask whatever comes to your mind.

thanks @Osama_Saad_Farouk !!

Oh, I see, got it. That's what I was wondering: whether there were any such methods. Anyway, thanks!
