Do I understand it correctly that, theoretically, both Precision and Recall can be 100% at the same time? This can be the case when we got a perfect algorithm that produces no False Positives or False Negatives. The graph in the video “Trading off precision and recall” at 5:38, however, does not seem to allow for such a scenario!?

Yes. It’s just extremely unlikely.

If you plug 100% into the F1 equation, F1 = 2PR / (P + R), for both P and R, you get an F1 score of 1.0.
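A quick sketch of that arithmetic, assuming the standard harmonic-mean definition of F1 (this helper is illustrative, not a library function):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0  # avoid division by zero when both are 0
    return 2 * precision * recall / (precision + recall)

# A hypothetically perfect model: P = R = 1.0 gives F1 = 1.0.
print(f1_score(1.0, 1.0))  # -> 1.0

# A more typical trade-off: high precision, lower recall pulls F1 down.
print(f1_score(0.9, 0.5))
```

Note that the harmonic mean punishes imbalance: F1(0.9, 0.5) is closer to the smaller of the two values than a simple average would be.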

Hello @Robert_Ascan , a graph like that represents the performance of *a* model. **That** model does not allow such a scenario, and it is not a perfect model that, as you said, can produce no False Positives or False Negatives regardless of the threshold used. The graph is a typical one but not universally true.