In week 1 it is stated that gradient descent can converge, but I would like a more mathematical demonstration of that. Does gradient descent actually converge as the number of iterations grows, or is it only a computational limit?

If we could demonstrate this, could we possibly find an optimal value for alpha?

There is more material on this later in this course.


Hi @brunovaz !

I don’t know your level of math, but the book Convex Optimization – Boyd and Vandenberghe covers this:

Chapter 9.3, “Gradient descent method,” has an analysis of its convergence.

And if you want to learn more about convex optimization, check out this course: Convex Optimization
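To make the convergence result concrete, here is a minimal numerical sketch (my own illustration, not from the course or the book): for f(x) = x², the gradient is f'(x) = 2x, which is Lipschitz with constant L = 2, and the standard analysis guarantees convergence for any fixed step size alpha < 2/L = 1.

```python
# Gradient descent on f(x) = x^2, with gradient f'(x) = 2x.
# Each update is x <- x - alpha * f'(x) = (1 - 2*alpha) * x,
# so the iterates shrink to 0 exactly when |1 - 2*alpha| < 1,
# i.e. when 0 < alpha < 2/L = 1.

def gradient_descent(alpha, x0=10.0, iters=100):
    x = x0
    for _ in range(iters):
        x = x - alpha * 2 * x  # x <- x - alpha * f'(x)
    return x

small_step = gradient_descent(alpha=0.1)  # 0.1 < 2/L: converges toward 0
big_step = gradient_descent(alpha=1.5)    # 1.5 > 2/L: magnitude blows up
print(abs(small_step), abs(big_step))
```

For this particular quadratic, the "optimal" step size is alpha = 1/L = 0.5, which makes the update factor (1 - 2*alpha) equal to zero and reaches the minimum in a single step. For general convex functions the theory gives guarantees rather than a single magic value, which is what the Boyd and Vandenberghe analysis works out.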


Thanks! This was very helpful. I will take a look and try to understand it.