In this particular screenshot from the Coursera course on supervised learning: why did they take y(i) as 1 instead of 0.5? How did they consider 1 as the training set value? In fact, we only get y(i) as 1 when we take w as 1.

Can anyone explain?

1 Like

The x and y values are given. The task is to learn the best w and b values to fit that data set.
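A minimal sketch of that idea, assuming the usual squared-error cost from the course (the data points and helper name here are illustrative, not the course's exact code): the (x, y) pairs are fixed, and we evaluate how well candidate values of w and b fit them.

```python
def cost(w, b, xs, ys):
    """Squared-error cost J(w, b) = 1/(2m) * sum((w*x + b - y)^2)."""
    m = len(xs)
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1.0, 2.0, 3.0]   # given inputs x(i)
ys = [1.0, 2.0, 3.0]   # given targets y(i)

print(cost(0.5, 0.0, xs, ys))  # poor fit: J is about 0.583
print(cost(1.0, 0.0, xs, ys))  # perfect fit: J = 0
```

The targets ys never change during learning; only w and b do.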

1 Like

The y(i) values are the actual targets we already have (shown with red crosses on the plot).

So the first one (y(0)) is 1, while the predicted value is 0.5.

Hope that helped.

1 Like

In this particular example, the training set pairs (x(i), y(i)) are given to us as (1,1), (2,2), and so on. Since we have to fit a line through these points, we try different values of the parameter w to best fit the training samples. The graph on the left shows the line we get when we take w = 0.5. The graph on the right shows that the cost J(w) is minimum at w = 1, so we will eventually reach this value by applying gradient descent on w. Hope this solves your doubt!
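The gradient-descent step described above can be sketched as follows. This is a hedged illustration, not the course's code: the data (1,1), (2,2), (3,3), the starting point w = 0.5, and the learning rate are assumed, and b is fixed at 0 so the model is just f(x) = w*x.

```python
xs = [1.0, 2.0, 3.0]
ys = [1.0, 2.0, 3.0]
m = len(xs)

w = 0.5        # start from the poorly fitting line in the left plot
alpha = 0.1    # assumed learning rate
for _ in range(100):
    # For J(w) = 1/(2m) * sum((w*x - y)^2), the derivative is
    # dJ/dw = 1/m * sum((w*x - y) * x)
    grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / m
    w -= alpha * grad

print(round(w, 4))  # converges to 1.0, the minimum of J(w)
```

Each step shrinks the gap between w and 1 by a constant factor, which is why the iterate settles at the bottom of the bowl-shaped J(w) curve shown on the right.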

1 Like

y is the target, meaning the actual result. It has nothing to do with 0.5; it can only be 0 or 1. In this instance, 0.5 is the threshold of the model: if our model predicts anything >= 0.5, we set the predicted value (y hat) to 1, and anything less than that becomes 0.

1 Like

The purple line is the model whose best parameters we are seeking. The 0.5 you mentioned is a prediction output, and its difference from y(i) is what gets measured. The same holds for the other inputs: the vertical distance grows larger as the predictions for x(2) and x(3) get further from their targets.

1 Like