Purpose of gradient descent? (asked by a rookie)

What is the purpose of using gradient descent in a linear regression model when we can simply plot the cost function and read off the global minimum directly from the graph?

Well, for a very simple function it is possible to find the minimum by plotting, but for a high-dimensional neural network plotting is impossible, hence you use iterative descent calculations to find an optimal minimum.
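
To make that concrete, here is a minimal sketch of such an iterative descent for a one-variable linear regression in NumPy. The toy data, learning rate, and iteration count are illustrative assumptions, not values from this thread:

```python
import numpy as np

# Toy data: y ≈ 3x + 2 plus noise (hypothetical example data)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, size=100)

w, b = 0.0, 0.0      # parameters to learn
alpha = 0.02         # learning rate (assumed value)
m = len(x)

for _ in range(5000):
    y_hat = w * x + b
    # Gradients of the mean squared error cost J(w, b) = (1/(2m)) * sum((y_hat - y)^2)
    dw = np.sum((y_hat - y) * x) / m
    db = np.sum(y_hat - y) / m
    # Step "downhill": move opposite to the gradient
    w -= alpha * dw
    b -= alpha * db

print(w, b)          # should end up close to 3 and 2
```

No plotting is needed at any point: the gradient alone tells the algorithm which way is downhill, and the same loop works unchanged in thousands of dimensions.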


Thank you for clarifying it, sir.


In addition to @gent.spah's great answer:

  • side note: in linear regression the optimum can be calculated analytically with the normal equation, which works particularly well if the number of features is not too large and the data set is not huge. Otherwise (very many features plus really big data), gradient descent can be superior due to its iterative optimization approach: no matrix inversion step (cubic complexity in the number of features) is needed, in contrast to the analytical solution with the normal equation. Both routes are sketched in the code after this list.
  • in general: in very complex optimization problems you cannot just plot the cost, on the one hand because it is usually multi-dimensional (as @gent.spah stated correctly), but also because the cost is often expensive to evaluate. Instead, gradient descent takes the next optimization step in the direction of steepest descent, i.e. along the negative gradient, where we expect the cost to decrease, and then carefully re-evaluates, over and over…
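
Here is a small sketch comparing the two routes for a multi-feature linear regression; the data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))             # 200 examples, 3 features (toy data)
true_theta = np.array([1.5, -2.0, 0.5])
y = X @ true_theta + rng.normal(0.0, 0.1, size=200)

# Analytical route via the normal equation: theta = (X^T X)^{-1} X^T y.
# Solving this linear system is roughly cubic in the number of features,
# which is why it stops scaling for very wide, very large data sets.
theta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Iterative route via batch gradient descent: no matrix inversion,
# just repeated matrix-vector products.
theta_gd = np.zeros(3)
alpha, m = 0.1, len(y)                    # learning rate is an assumed value
for _ in range(2000):
    grad = (X.T @ (X @ theta_gd - y)) / m
    theta_gd -= alpha * grad

print(theta_normal)   # both should be close to [1.5, -2.0, 0.5]
print(theta_gd)
```

For a small problem like this both give essentially the same answer; the trade-off only shows up at scale.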

These threads might be interesting for you, too:

Hope that helps!

Best regards
Christian
