Question about optimizers

So while doing the first course of the DLS, I can see the implementation (mostly for backward propagation) is different from how you do it for plain logistic regression in the ML course (which makes sense).

But that made me wonder whether I should consider optimizers for plain logistic regression and for deep learning as two similar concepts that differ slightly.

To give an example of what I mean: let's say I have made the optimizers their own classes, to be used as general components. Should I think of them as general components that can be shared between regular ML and DL, or write a separate implementation for each?
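To make the question concrete, here is a minimal sketch of what such a shared component could look like. The class name, method names, and parameter-dict convention are all hypothetical, not from either course; the point is only that the update rule itself never needs to know which model produced the gradients:

```python
# Hypothetical sketch: an optimizer as a standalone, model-agnostic component.
# It updates any dict of parameters given a dict of matching gradients, so the
# same class can serve plain logistic regression or a deep network.
import numpy as np

class GradientDescent:
    """Vanilla gradient descent: w := w - alpha * dJ/dw for every parameter."""

    def __init__(self, learning_rate=0.01):
        self.learning_rate = learning_rate

    def step(self, params, grads):
        # params and grads share the same keys, e.g. {"w": ..., "b": ...}
        # for logistic regression, or {"W1": ..., "b1": ..., "W2": ...} for a net.
        for key in params:
            params[key] = params[key] - self.learning_rate * grads[key]
        return params

# Example: one step on logistic-regression-shaped parameters.
params = {"w": np.zeros(3), "b": 0.0}
grads = {"w": np.array([0.1, -0.2, 0.3]), "b": 0.05}
opt = GradientDescent(learning_rate=0.1)
params = opt.step(params, grads)
```

Nothing in `step` depends on the model's architecture, which is one argument for treating optimizers as shared components rather than duplicating them per model type.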

Hi there,

I think it makes total sense to think of optimizers as a supporting tool that works particularly well in combination with certain models, and that should be used as a tool to solve a problem.

I believe this thread could be interesting for you, since advanced optimization methods are also touched upon:

Hope that helps. Have a great day, @FrederikH!

Best regards


Hello @FrederikH,

I think both DLS C1 and MLS use the same optimizer, namely vanilla gradient descent:

w := w - \alpha \frac{\partial{J}}{\partial{w}}
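As a quick numeric illustration of that update rule (the toy cost function below is my own example, not from either course), repeatedly applying w := w - α ∂J/∂w drives w toward the minimizer:

```python
# Minimize the toy cost J(w) = (w - 3)^2, whose gradient is dJ/dw = 2*(w - 3),
# by repeatedly applying the vanilla gradient descent update.
w = 0.0
alpha = 0.1
for _ in range(100):
    grad = 2 * (w - 3)   # dJ/dw
    w = w - alpha * grad  # w := w - alpha * dJ/dw
# w converges toward the minimizer w = 3
```

The same iteration underlies both courses; what differs in DLS is only how the gradients are computed (backpropagation through many layers instead of a single logistic unit).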