Why do we need the GD algorithm if we can optimise the parameters of the model without it?

I think most of you will not agree with this, but I managed to optimise the parameters without the GD algorithm. Of course it was not as easy or efficient, since GD gives a direction and a magnitude for approaching the minimum of the function.

Is there any other reason apart from this, or does the use of GD depend solely on this reason?

Just confirming

Congratulations on optimizing the parameters without using the Gradient Descent (GD) algorithm!
Gradient Descent is popular for its efficiency, convergence behaviour, automatic parameter updates, scalability, compatibility with regularization and parallelism, and its many variants, which make it a versatile optimization choice even though alternative methods exist.


Hello @tbhaxor ,
Could you share what your strategy was? Make sure to check whether what you did is a special case that only works in a particular situation.

I used random search. I also used some human feedback, like taking input after some iterations. But that was TOOOOOOOO slow compared to GD :smile:
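For anyone curious, a minimal sketch of random search over parameters could look something like this (the quadratic loss, step size, and iteration count here are placeholder assumptions, not the actual model from this experiment):

```python
import numpy as np

# Minimal random-search sketch: sample candidate weights around the current
# best and keep whichever candidate lowers the loss. The quadratic loss is
# only a stand-in for a real model's cost function.
def loss(w):
    return np.sum((w - 3.0) ** 2)  # minimum at w = [3, 3]

rng = np.random.default_rng(0)
best_w = rng.normal(size=2)
best_loss = loss(best_w)

for step in range(5000):
    candidate = best_w + rng.normal(scale=0.1, size=2)  # random perturbation
    candidate_loss = loss(candidate)
    if candidate_loss < best_loss:                      # keep only improvements
        best_w, best_loss = candidate, candidate_loss

print(best_w, best_loss)  # ends up near [3, 3] with loss close to 0
```

Unlike GD, nothing here tells the search which way to move, which is why it needs many more evaluations to get close to the minimum.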

I wonder, apart from efficiency (in both magnitude and direction), is there any other reason to use GD?

The gradient gives the magnitude by which the cost function changes at the given weights, and "descent" means we always move in the direction opposite to the gradient.

For example, there is also an algorithm known as Gradient Ascent.

@aryan010204 How?

Gradient descent just tells us which way to move; scaling the gradient by the learning rate gives how much. Then we "apply gradients" with w -= lr * grad_w.
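To make that concrete, here is a tiny sketch of a single "apply gradients" step. The quadratic cost and the numbers are just an illustration, and the commented-out line shows how Gradient Ascent would differ:

```python
# One "apply gradients" step for a toy cost J(w) = (w - 5)^2 (my own example).
# The gradient dJ/dw = 2 * (w - 5) gives the magnitude; descent steps against
# it, ascent (for maximising a function) would step with it.
w = 0.0
lr = 0.1

grad_w = 2 * (w - 5.0)   # gradient of the cost at the current weight
w -= lr * grad_w         # gradient descent: move opposite to the gradient
# w += lr * grad_w       # gradient ascent would move with the gradient instead

print(w)  # 1.0, one step closer to the minimum at w = 5
```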


Hello @tbhaxor ,
Gradient Descent automatically adjusts model parameters by using gradient information to move in the direction of decreasing loss, reaching the optimal values over iterations.

Can you give an example of this? How would you implement it in code?
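A minimal sketch of how that loop could look for single-feature linear regression (the data, learning rate, and epoch count are illustrative assumptions, not from this thread):

```python
import numpy as np

# Gradient descent sketch for a linear model y_hat = w * x + b with a
# mean-squared-error cost. The data is generated from y = 2x + 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

w, b = 0.0, 0.0
lr = 0.05

for epoch in range(2000):
    y_hat = w * x + b
    error = y_hat - y
    # Gradients of the MSE cost with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # "Apply gradients": move opposite to the gradient, scaled by the learning rate
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges close to w = 2, b = 1
```

Each iteration uses the gradient to decide both the direction (the sign) and the size (scaled by lr) of the update, which is exactly the automatic adjustment described above.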