#Week3 - Skewed datasets - precision/recall metrics


Video on error metrics for skewed datasets - link: Error metrics for skewed datasets | Coursera

Hi,
I have a question about the precision/recall metrics, please.
In the video, an algorithm that always predicts y=0 is used as an example of a bad model. It says that the recall metric will be:


Recall will be: 0/0 = (true positives = 0) / (number of actual positives = 0). But to me, the number of actual positives can be > 0, because our “bad” algorithm can have false negatives, can't it? To me, 0/0 is the precision metric, and the recall metric is 0 / (number of actual positives).

Why is it the recall metric (“The recall metric in particular helps you detect if the learning algorithm is predicting zero all the time”) and not the precision metric that helps, please?
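
For reference, the standard definitions of the two metrics (a general statement, not a quote from the video) are:

$$
\text{precision} = \frac{\text{TP}}{\text{TP} + \text{FP}}, \qquad
\text{recall} = \frac{\text{TP}}{\text{TP} + \text{FN}}
$$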

Thank you very much,
Sao Mai


Hello @Sao-Mai,

I think we should understand it as in the right-hand side, not the left. If you redo the thinking in this way, does it clear up the rest of your questions?
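
To make the comparison concrete, here is a small numeric sketch (a toy example of my own, not from the video or any course code) showing how the two metrics behave when a classifier always predicts y = 0 on a skewed dataset:

```python
# Toy illustration: precision vs. recall for a classifier that always predicts y = 0.
# The labels below are made up purely for illustration.

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # skewed data: only 3 actual positives
y_pred = [0] * len(y_true)                 # "bad" model: predicts y = 0 every time

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

# Recall = TP / (TP + FN): the denominator is the number of actual positives (3 here),
# which is > 0, so recall is a well-defined 0.
recall = tp / (tp + fn) if (tp + fn) > 0 else float("nan")

# Precision = TP / (TP + FP): the denominator is the number of predicted positives
# (0 here), so precision is the undefined 0/0 case.
precision = tp / (tp + fp) if (tp + fp) > 0 else float("nan")

print(f"TP={tp}, FP={fp}, FN={fn}")   # TP=0, FP=0, FN=3
print(f"recall    = {recall}")        # 0.0  -> immediately exposes the always-zero model
print(f"precision = {precision}")     # nan  -> 0/0, undefined rather than informative
```

Recall comes out as exactly 0 because its denominator counts the actual positives, which are nonzero in a real dataset, whereas precision ends up as 0/0. That is why it is recall, not precision, that flags an algorithm which predicts zero all the time.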

Cheers,
Raymond

Oh, I see :sweat_smile: :smiling_face:! Thank you so much for your clear explanation :smiling_face:!
