Approach when human-level performance not available


Setting human-level performance as a benchmark seems to work well for problems that human beings are good at. How does this approach work for areas where it perhaps doesn’t apply as well, for example, product/movie recommendation algorithms?
How do folks working in such areas go about estimating avoidable bias, variance, etc.?

Thanks for your time.

@santoshprabhuk ,

In my understanding, human-level performance can be used as a starting point for estimating avoidable bias. But as soon as ML models surpass human performance, the reference point for avoidable bias moves with the best-performing model. This holds regardless of whether we believe humans are good at the task or not. We have already seen several examples where we were wrong to believe humans could not easily be surpassed by an ML model… AlphaGo comes to mind, but there are surely hundreds of examples in medicine, security, IoT applications, etc.
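To make the idea concrete, here is a minimal sketch (my own illustration, not from the course): compute avoidable bias and variance against whatever reference error is best available, using human-level error when humans are strong at the task, and the best known model's error otherwise. The function name and the error figures are made up for the example.

```python
def diagnose(train_error, dev_error, reference_error):
    """Return (avoidable_bias, variance) relative to a reference error.

    reference_error acts as a proxy for Bayes error: use human-level
    error if humans are strong at the task, otherwise the error of
    the best-performing known model (e.g. a published recommender).
    """
    avoidable_bias = train_error - reference_error
    variance = dev_error - train_error
    return avoidable_bias, variance

# Hypothetical recommender: 8% train error, 10% dev error,
# best known model achieves 6% error.
bias, var = diagnose(train_error=0.08, dev_error=0.10, reference_error=0.06)
print(round(bias, 4), round(var, 4))  # 0.02 avoidable bias, 0.02 variance
```

If the reference later improves (a better model is published), you simply re-run the diagnosis with the new `reference_error`, which is exactly the "moving reference" point above.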

Do you agree with me?

Yes @carloshvp, I agree with your view. I understand that the reference is not ‘absolute’ and keeps moving whenever something better comes along. Thank you for sharing your views.