Course 1 Week 1: Gradient Descent - choosing w and b

Looking at the gradient descent lab, where in the code are the optimal w and b selected? I see that the code loops over the number of iterations and recalculates updated values of w and b inside the gradient descent function, but I'm not sure how it selects the final w_final and b_final values.

Hello @learner_2022

Each update overwrites w and b with their latest values. When the loop finishes and the function exits, whatever values w and b hold at that point are simply what gets returned; there is no separate step that picks the "best" ones.
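
To make that concrete, here is a minimal sketch of what that loop looks like (my own illustration, not the lab's exact code; `compute_gradient` here is a stand-in for the lab's gradient function):

```python
import numpy as np

def compute_gradient(x, y, w, b):
    # Gradients of the squared-error cost with respect to w and b
    err = (w * x + b) - y
    dj_dw = np.mean(err * x)
    dj_db = np.mean(err)
    return dj_dw, dj_db

def gradient_descent(x, y, w_init, b_init, alpha, num_iters):
    w, b = w_init, b_init
    for _ in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        # Each update overwrites w and b with their latest values
        w = w - alpha * dj_dw
        b = b - alpha * dj_db
    # Whatever w and b hold after the last iteration is what the
    # caller receives as w_final and b_final
    return w, b
```

Notice there is no comparison or selection anywhere: the returned values are just the result of the last update.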

In addition to mentor @shanup's reply: the code in this assignment does not check whether the minimum cost has actually been reached. That's a more advanced technique, and there are better methods for it later in the course.

For now, you'd have to experiment to see whether increasing the learning rate or the number of iterations gives a lower final cost.
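
For example, a quick experiment could look like this (hypothetical values on toy data, reusing the `gradient_descent` sketch from above):

```python
import numpy as np

# Toy data for illustration only, not the lab's dataset
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([300.0, 500.0, 700.0, 900.0])

def compute_cost(x, y, w, b):
    # Mean squared error cost, halved as in the lectures
    return np.mean(((w * x + b) - y) ** 2) / 2

# Try a few learning rates and iteration counts and compare final costs
for alpha in (0.001, 0.01, 0.1):
    for num_iters in (1000, 10000):
        w, b = gradient_descent(x, y, 0.0, 0.0, alpha, num_iters)
        print(f"alpha={alpha}, iters={num_iters}, "
              f"final cost={compute_cost(x, y, w, b):.4f}")
```

Whichever combination ends at the lowest cost is the one you would keep.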

Thanks for clarifying! Really appreciate it :grinning:

Thanks for the quick response!