Human error in the training set

In computer vision tasks, if the training set was labeled by humans, there can be errors in the labels as well. In such a scenario, is it possible to surpass human-level performance?


No. I think machines can never surpass human-level performance. By human-level, I mean a group of experts, not individuals. Machines can be faster, though.


It’s an interesting question whether training can do better than the quality of the labels on the data. Intuitively, a model should not be able to perform better than the quality of its training data. So in that scenario the “real” human error of a team of experts, as Saif says, would be lower than the error in the training labels, and thus the model could not exceed human performance. The relationship would be:

human error < training set error <= model error
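To make that scenario concrete, here is a toy sketch (my own example, not from the course): synthetic two-class data stands in for a vision task, 10% of the labels are flipped at random to simulate human labeling error, and a simple logistic regression is trained on the noisy labels but scored against the true labels. One nuance this illustrates: when the label noise is purely random rather than systematic, the model can partially average it out, so its error against ground truth can land below the label error rate even though it never saw a clean label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two well-separated Gaussian blobs (a stand-in for images).
n = 2000
X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
               rng.normal(+1.5, 1.0, (n // 2, 2))])
y_true = np.array([0] * (n // 2) + [1] * (n // 2))

# Simulate human labeling error: flip 10% of the labels at random.
noise_rate = 0.10
flip = rng.random(n) < noise_rate
y_noisy = np.where(flip, 1 - y_true, y_true)

# Plain logistic regression, trained only on the *noisy* labels.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    g = p - y_noisy                          # gradient of log loss
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("label error vs ground truth:", np.mean(y_noisy != y_true))
print("model error vs ground truth:", np.mean(pred != y_true))
```

With random flips the model's error against the true labels comes out close to the Bayes error of the underlying data, not the 10% label error, because the mislabeled points are scattered on both sides of the decision boundary and mostly cancel. Systematic labeling bias (e.g. experts consistently confusing two classes) would not average out this way, which is why the inequality above is the safer intuition in practice.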

But on the general question of whether a model can ever exceed human performance, that’s a bit more subtle. On computer vision tasks, the best human performance is pretty hard to beat if you are using the “team of experts” standard. But there is at least one famous case: a Google AI model learned to determine the sex of a patient from retinal scans, which ophthalmologists had previously believed was impossible.

But Prof Ng discusses the general point in Course 3 (I forget which lecture) and makes the point that there are some tasks that humans just aren’t that good at. The particular example I remember him giving was recommender systems for showing people products they might find interesting on a website. It turns out that algorithms are actually way better at that than humans. :laughing:


Thank you for the explanation.


Thank you for the explanation.
