Measuring loss

Loss in Andrew’s course is computed by measuring the difference in vertical height between the ground-truth label and the algorithm’s prediction.
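As a minimal sketch of that idea (the function name and the 1/(2n) scaling are illustrative, not taken from the course), the vertical-distance squared loss for a line y_hat = w*x + b looks like:

```python
def vertical_squared_loss(w, b, xs, ys):
    """Mean squared vertical error for the line y_hat = w*x + b.

    For each example, the loss term is (y - y_hat)**2: the squared
    vertical gap between the label and the line's prediction.
    """
    n = len(xs)
    return sum((y - (w * x + b)) ** 2 for x, y in zip(xs, ys)) / (2 * n)

# A line that passes through every point gives zero loss:
# vertical_squared_loss(2.0, 1.0, [0, 1, 2], [1, 3, 5]) -> 0.0
```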

But wouldn’t it be faster and more accurate to measure the perpendicular distance from the ground-truth label to the algorithm’s prediction?

Computing the perpendicular distance is computationally more expensive, and the industry consensus is that it does not provide enough benefit to be worth the extra CPU cycles.

Couldn’t it be done with less computational expense using an iterative approach, as is currently done with linear regression?

How would it be simpler than subtracting y_hat from y?

I doubt there would be benefit in doing a regression {edit: was ‘recursion’} for every individual example in the data set for every training epoch.

No, not simpler. But the ground-truth label is likely to be closer to the initial prediction when measured by perpendicular distance than by vertical drop.

So it might take fewer iterations or epochs, since the prediction starts out closer to the label than under the current linear-regression estimate.

It would be interesting to run an experiment and see how expensive computing the loss via perpendicular distance is compared with vertical drop distance.
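One way to frame such an experiment (a sketch; the function names are my own, not from the course): for a fixed line y_hat = w*x + b, the perpendicular distance from a point to the line is just the vertical distance divided by sqrt(1 + w^2), so the per-example extra cost reduces to one divide, or one multiply if the factor is precomputed once per parameter update.

```python
import math

def vertical_distance(w, b, x0, y0):
    """Vertical gap between the label y0 and the line at x0."""
    return abs(y0 - (w * x0 + b))

def perpendicular_distance(w, b, x0, y0):
    """Shortest (orthogonal) distance from the point (x0, y0) to the line.

    For the line y = w*x + b, this is the vertical distance scaled by
    the constant 1 / sqrt(1 + w^2), which depends only on the slope w.
    """
    return abs(y0 - (w * x0 + b)) / math.sqrt(1.0 + w * w)

# Example: point (0, 2) against the line y = x.
# Vertical gap is 2; perpendicular distance is 2 / sqrt(2).
```

Because the scaling factor is the same for every example on a given line, the two measures rank points identically within one iteration; whatever difference they make to convergence would come from how the factor changes as w is updated.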


I don’t think I said it would be simpler.

Also, what do you mean by “…recursion…”?

Sorry, I meant to write “regression”. I have edited my previous reply.