Per my understanding, J(w,b) is the squared error cost, so it shouldn't be possible for J(w,b) to take a negative value, since it's a sum of squared terms. Is the photo wrong?

Please correct me if I'm wrong, thanks!


That’s a mistake in the slide.

Very good observation, @Fatomk11295!

However, we need your help to tell us where it was (in which video and at what time mark) so that we can go to see what you saw.

Now, I am going to make guesses.

If the lecture was discussing a linear regression setting, so that there is only one w and one b, and it was using the squared error cost, then the cost surface shouldn't look like this; it should be a convex paraboloid (a bowl). Since the surface in the slide looks completely different from a paraboloid, my guess is that the lecture was demonstrating a general surface that gradient descent can encounter, without confining it to linear regression with squared error.
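To make this concrete, here is a quick sketch (the toy data and variable names are my own, just for illustration) that evaluates the squared-error cost J(w,b) over a grid. Every value comes out non-negative, which is why the true linear-regression cost surface is a bowl that never dips below zero:

```python
import numpy as np

# Toy data, made up for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 5.9, 8.2])

def cost(w, b):
    """Squared error cost J(w, b) = (1/2m) * sum((w*x + b - y)^2)."""
    m = len(x)
    return np.sum((w * x + b - y) ** 2) / (2 * m)

# Evaluate J over a grid of (w, b) values.
ws = np.linspace(-3.0, 3.0, 61)
bs = np.linspace(-3.0, 3.0, 61)
J = np.array([[cost(w, b) for b in bs] for w in ws])

# Every entry is a sum of squares, so the minimum is >= 0.
print(J.min() >= 0.0)
```

So if a plotted squared-error surface shows negative values on its vertical axis, the axis scale must be wrong, or the surface is not a squared-error cost.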

But anyway, you have made a good observation!

Cheers,

Raymond

Using that slide is one of Andrew’s traditions - it’s been in all his ML courses for at least 10 years.

Thanks for the deep insight and clear explanation. It seems that's the case: the image is talking about cost functions in general, which doesn't apply to the linear regression cost function specifically.

Regarding which lecture: it's Week 1, titled "Gradient descent for linear regression".

A squared-error cost function can never give a negative value, since the lowest possible value for the cost is zero.

The scale on the slide is simply wrong.

Ideally and logically, that should be true, but I can't guarantee it, since I'm still a newbie.