I have managed to complete week 2 but I am totally confused.
Why do we need to compute the gradient? The simple version would be:

While loop →
Step A: Calculate the cost for the starting w and b values.
Step B: Calculate the cost for w + alpha and b + alpha.
Compare the outputs of Step A and Step B, then decide whether to continue processing.
Can someone please explain why gradient descent is needed? I see that we are multiplying alpha by the gradient and then deducting it from w and b, like below:
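w = w - alpha * dJ/dw
b = b - alpha * dJ/db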
It is a good question. I just finished the first course and can’t stop appreciating the idea behind gradient descent. Simple yet so powerful I feel!
@TMosh nailed it. Just to add to that: with only the learning rate alpha, you might overshoot (when alpha is too big) or take a very long time to get to the lowest point, i.e. to converge (when alpha is too small), because you would always be deducting a constant from the parameters. Multiplying by the derivative gives you the 'rate of change' (the size of step) that the algorithm needs to take when nearing the minimum.
You might have seen the size of the steps get smaller as it approaches the minimum in the videos and the optional labs. Attaching a screenshot as well.
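Here is a minimal sketch of that effect, assuming a toy one-parameter cost J(w) = w^2 (a made-up example, not the course's cost function). Even with a fixed alpha, the step alpha * dJ/dw shrinks as w approaches the minimum, because the derivative itself shrinks:

```python
# Gradient descent on a toy cost J(w) = w**2, whose derivative is dJ/dw = 2*w.
# The step is alpha * dJ/dw, so it shrinks near the minimum at w = 0
# even though alpha never changes.

def dJ_dw(w):
    return 2 * w  # derivative of J(w) = w**2

w = 5.0       # arbitrary starting point
alpha = 0.1   # fixed learning rate

for i in range(10):
    step = alpha * dJ_dw(w)
    print(f"iteration {i}: w = {w:.4f}, step size = {step:.4f}")
    w = w - step
```

Running it, the printed step size drops every iteration, which is exactly the curve flattening out near the bottom of the bowl.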
On top of @darshN’s excellent example, the derivative also takes care of the direction. In the graph shared by @darshN, we see that b increased at first and then it decreased. This is not possible with just “minus alpha”.
Similarly, with just "minus alpha", we would expect w and b to only ever move in one direction, but gradient descent is supposed to work regardless of where w and b started off, so the direction can't be fixed, and the derivative saves us there too.
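A quick sketch of the direction point, using the same toy cost J(w) = w^2 as above (an assumption for illustration): the derivative is positive to the right of the minimum and negative to the left, so subtracting alpha times the derivative moves w toward the minimum from either side:

```python
# The derivative of J(w) = w**2 is 2*w: positive to the right of the
# minimum at w = 0, negative to the left. Subtracting alpha * derivative
# therefore moves w left or right as needed, toward the minimum either way.

alpha = 0.1

for w in (4.0, -4.0):  # start on either side of the minimum
    grad = 2 * w
    w_next = w - alpha * grad
    print(f"start w = {w:+.1f}, derivative = {grad:+.1f}, next w = {w_next:+.2f}")
```

A fixed "minus alpha" would move both starting points the same way; the sign of the derivative is what corrects the direction.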
For the beginners like me, if you are wondering what @TMosh is saying, then…
It took me quite some time to understand this intuitively, and it is quite simple. Any function, mx + b or ax^2 + bx + c or anything else, is a straight or curved line, and you want to select a point on that line to calculate the cost. If you do not use the gradient, then the cost gets calculated at points farther and farther from the line (which is what happens when alpha is too big).
I am not sure how to understand that intuitively in the case of multiple variables, but I am going to keep thinking in the same direction.
You need to calculate the cost at each point on the line. For that to happen, you need to start at some point and then move up or down. I still don't know how the initial starting point is chosen, but once you have it, the way you do it is: initial starting point - (alpha * slope). The alpha * slope is there to make sure you stay on the line while walking up or down. Imagine this visually and you will get it; see the sketch below.
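Since the post above wonders about multiple variables: the same walk works when both w and b are updated, each from its own partial derivative. A minimal sketch with a tiny made-up dataset (assuming the course's model f(x) = w*x + b and the mean squared error cost; the numbers are invented for illustration):

```python
import numpy as np

# Tiny made-up dataset that follows y = 2x + 1 exactly.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

w, b = 0.0, 0.0   # a common choice of starting point: just initialize to zero
alpha = 0.1       # learning rate

for _ in range(1000):
    err = (w * x + b) - y     # prediction error for each example
    dj_dw = (err * x).mean()  # partial derivative of the cost w.r.t. w
    dj_db = err.mean()        # partial derivative of the cost w.r.t. b
    w = w - alpha * dj_dw     # each parameter walks down its own slope
    b = b - alpha * dj_db

print(f"w = {w:.3f}, b = {b:.3f}")  # approaches w = 2, b = 1
```

Each parameter gets its own "alpha * slope", so the intuition from the one-variable picture carries over directly.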
Thanks for sharing, @Ryan_A, and it is a good way to understand it! We are setting it up as an “optimization problem” and the optimization is achieved by minimizing the cost.