Week 3: Precision and recall

Here the professor says "The recall metric in particular helps you detect if the learning algorithm is predicting zero all the time. Because if your learning algorithm just predicts y equals 0, then the number of true positives will be zero because it never predicts positive, and so the recall will be equal to zero divided by the number of actual positives, which is equal to zero."

Why does recall in particular help detect whether the learning algorithm is predicting 0 all the time? Isn't precision also helpful for detecting that?

Yes, I think you are correct.

This gets a little complicated; I'll see if I can work it out:

The dataset has 25 "positive" examples and 75 "negative" examples.

Say we have two models: one that always predicts True, and one that always predicts False.

For the “always True” model:

  • the Precision is TP / (TP + FP) = 25 / (25 + 75) = 0.25.
  • the Recall is TP / (TP + FN) = 25 / (25 + 0) = 1.0.

For the “always False” model:

  • the Precision is TP / (TP + FP) = 0 / (0 + 0) = undefined.
  • the Recall is TP / (TP + FN) = 0 / (0 + 25) = 0.

Perhaps it would be more correct to say that if the model always predicts False, then the Precision cannot be calculated.
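Here is a minimal Python sketch of that arithmetic, assuming the 25/75 split and the two constant models from the example above; it returns None when precision is undefined rather than dividing by zero:

```python
# Compute precision and recall for the two constant models above.
# Labels are 1 for positive and 0 for negative.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    # Precision is undefined when the model makes no positive predictions.
    precision = tp / (tp + fp) if (tp + fp) > 0 else None
    recall = tp / (tp + fn) if (tp + fn) > 0 else None
    return precision, recall

y_true = [1] * 25 + [0] * 75            # 25 positives, 75 negatives

always_true = [1] * 100                 # model that always predicts True
always_false = [0] * 100                # model that always predicts False

print(precision_recall(y_true, always_true))    # (0.25, 1.0)
print(precision_recall(y_true, always_false))   # (None, 0.0)
```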
