Prioritizing Precision or Recall

Hi everyone :smile:.
I am enjoying the course a lot, and I thought I should contribute to the discussion in the notebook on weighing false positives and false negatives and how they affect results in specific scenarios.

In the cancer diagnosis case, I believe another way to interpret it is that a False Negative is more expensive than a False Positive. This is the case with some high-grade cancers: the longer they are missed (i.e. a False Negative), the greater the chances of morbidity and eventual death. In that instance, I believe the predictive model should produce as few False Negatives as possible, i.e. higher Recall, to avoid the risk of a patient dying because of a missed diagnosis.
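To make that trade-off concrete, here is a minimal sketch (the labels and probabilities are made up for illustration, and `scikit-learn` is assumed to be available) showing how lowering the decision threshold reduces False Negatives and raises Recall, at the cost of some Precision:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical ground-truth labels (1 = cancer) and model probabilities
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_prob = np.array([0.95, 0.80, 0.55, 0.35, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05])

for threshold in (0.5, 0.3):
    # Classify as positive when the predicted probability reaches the threshold
    y_pred = (y_prob >= threshold).astype(int)
    print(
        f"threshold={threshold}: "
        f"precision={precision_score(y_true, y_pred):.2f}, "
        f"recall={recall_score(y_true, y_pred):.2f}"
    )
```

In this toy example, lowering the threshold from 0.5 to 0.3 catches the positive case that was previously missed (Recall rises from 0.75 to 1.00) while Precision falls (from 0.75 to about 0.57), which is often an acceptable trade when missing a high-grade cancer is the costlier error.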

I’m glad this course is helping learners think through problems and solutions in a way that solidifies the concepts in a general manner, so the same approach can then be applied to different problems.

Thank you AI4M team!


Thank you @Veneratiovitae for your kind words about our courses and for your contributions to our Community.


“So often we must decide what is more expensive, a false positive or a false negative. If the latter is more expensive, then we should aim to improve the precision, otherwise, we must consider the recall.”

This is a mistake, I think. Is it not? If a false negative is the more expensive error, shouldn’t we aim to improve recall rather than precision? I found it in C1_W2_Lab_1_roc_curve_and_threshold, in the markdown cell just above the discussion of the ROC curve.

Hi @Dipesh_Timsina

Precision emphasizes the true positives and minimizes false positives, whereas recall aims to capture all positive instances and minimize false negatives. The choice between them depends on the specific needs of the case and the cost of each type of error.
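To restate the definitions concretely, here is a minimal sketch (the confusion-matrix counts are hypothetical) computing the two metrics from TP, FP, and FN:

```python
# Hypothetical confusion-matrix counts from a validation set
tp, fp, fn = 80, 30, 20

precision = tp / (tp + fp)  # of all predicted positives, how many were correct
recall = tp / (tp + fn)     # of all actual positives, how many were found

print(f"precision={precision:.2f}, recall={recall:.2f}")
# Precision penalizes false positives; recall penalizes false negatives.
# A model that must not miss positive cases should therefore prioritize recall.
```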

Regards
DP