Hi,
If I consider problems like image recognition or NLP, a human can recognize the pattern, and we train an algorithm to recognize that same pattern. For example, in image recognition, a human can do the task too.
So, can an algorithm spot a pattern in a data set that a human cannot?
Thank you.
Yes, it is possible. It depends on how deeply the algorithm can probe the data: human perception has physical limits, and human judgement is based on prior memory. It also depends on how you train the ML network, but such a network can generalise in a different way than a human and can come up with rules more complex than a human can spot.
1 Like
I agree that the short answer to the OP's question is "Yes", and agree that it depends on the training in complex ways. By that I mean the training examples, the choice of optimization algorithm, and the objective or loss function. Given these, the algorithm will learn whatever it is that satisfies its constraints, irrespective of whether that is precisely what the designers intended.
The only exception I might take with the response is the use of the word rules here. I think it is ok if taken figuratively, but @Ganeshkumar_Naik should realize the rules being learned are numerical parameters (the weights and maybe biases), not human-readable if-then-else kinds of things.
1 Like
Yeah, by rules I mean the numerical transformations, mappings, features, etc. that the mathematical model is based on and can learn.
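To make that concrete, here is a minimal sketch (pure Python, made-up toy data) of the point being discussed: after training, the "rules" the model has learned are just a handful of floating-point weights, with no human-readable if-then-else structure anywhere.

```python
import math

# Toy dataset: each sample is ((feature1, feature2), label).
data = [((0.0, 1.0), 0), ((1.0, 1.5), 0), ((2.0, 0.5), 1), ((3.0, 1.0), 1)]

w = [0.0, 0.0]  # weights (the learnable "rules")
b = 0.0         # bias
lr = 0.5        # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
for _ in range(1000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Everything the model "knows" is in these three numbers.
print(w, b)
```

Nothing in the printed output looks like a rule a person would write down; the decision boundary is implicit in the numbers, which is exactly the point about interpretability above.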
1 Like
Yes, there is at least one famous case in which an algorithm can "see" something in an image that even highly trained humans cannot. Ophthalmologists had always believed that you can't identify the sex of a patient by examining retinal scans, but an algorithm was trained that can do it. I have not followed up on this case to see if they went back and were able to figure out how the algorithm accomplished this, i.e. to identify what it is that humans are missing when they examine the images.
So this is a concrete example in which Bayes Error < Human Error on an image analysis task.
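For readers new to the term, here is a minimal statement of that inequality, assuming the standard definition of Bayes error as the lowest error rate achievable by any classifier on the task:

```latex
% Bayes error: the minimum error achievable by ANY classifier f,
% given the true joint distribution of inputs X and labels Y.
\varepsilon^{*} = \min_{f} \, P\big(f(X) \neq Y\big)

% The retinal-scan result exhibits a task where even trained humans
% sit strictly above this floor:
\varepsilon^{*} < \varepsilon_{\text{human}}
```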
2 Likes
That’s a good example @paulinpaloalto . My former employer built machine learning algorithms that helped pharmaceutical companies mine their chemical databases to look for drugs that had already gone through the US FDA approval process for one use, but find other uses for them. This allowed them to drastically shorten the time and reduce the cost of bringing new medicines to market. The patterns were too complex for humans to uncover either on their own or with traditional drug discovery analytics. Not all of that research was published since it provided competitive advantage worth big buck$ to the customer. In this application it was not image data, but structured information about chemicals and the biological pathways they can influence.
Yes, that’s a great example of a case in which algorithms can solve a problem that’s just too complicated to solve “by hand”. There are plenty of those. And there are cases like Deep Blue and AlphaGo which can play very complex games better than we can. Another really epic complex problem solving example is the Google AlphaFold project which has apparently solved some protein folding problems that previously had no solution.
But I guess it seems more plausible that an algorithm can handle tasks more complex than we can, or maybe it's just that we've seen so many such examples by now; either way, the vision example somehow seems more surprising. What is the algorithm "seeing" that the ophthalmologists are missing?