I ran into a rather basic question: how can a neural network spot patterns that hold *within* an individual sample, i.e., relationships between the input features themselves? Does it take some synthetic features to guide it to do that?
For a very simple example, let's say we have two inputs, x1 and x2, and the output y is "1" if they are the same and "0" if they are not.
Wouldn't the NN fail miserably at recognizing this? Wouldn't training be limited to the specific value patterns that x1 and x2 happen to take, e.g., (1,1), (2,2), (3,3), (10,10), (11,11), (12,12), (1,5), (2,5), (3,5), etc.? That is, the network won't learn the simple rule that the output is "1" whenever the two values are equal, and may well fail to predict correctly if we pass it (100,100), or even an intermediate value the training set hasn't encountered, say (5,5)?
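To pin the example down, here's a minimal sketch of the setup I mean (plain Python; the function and variable names are just for illustration):

```python
# Label is 1 when the two inputs are equal, 0 otherwise.
def label(x1, x2):
    return 1 if x1 == x2 else 0

# A small training set of specific (x1, x2) points.
train = [(1, 1), (2, 2), (3, 3), (10, 10), (11, 11), (12, 12),
         (1, 5), (2, 5), (3, 5)]
dataset = [((x1, x2), label(x1, x2)) for x1, x2 in train]

# Points like (100, 100) or (5, 5) never appear in the training set,
# so a model that only memorizes the value combinations it has seen
# has no basis for predicting them correctly.
```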
So what's the solution to this problem? I suppose I could create a 'synthetic' third feature that's true if x1 == x2 and false if x1 != x2. Is that what is normally done? The problem in that case, however, is that with larger feature sets there would be a combinatorial explosion of possible equalities. Right? Also, what if we want other comparisons as well, such as x1 < x2 or x1 > x2? That's another large set of combinations that would have to be enumerated into new synthesized features.
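To illustrate the combinatorial growth I'm worried about, here's a sketch that enumerates equality and ordering indicator features for every pair of raw inputs (plain Python, hypothetical feature naming):

```python
from itertools import combinations

def pairwise_features(xs):
    """Build ==, <, and > indicator features for every pair of inputs."""
    feats = {}
    for i, j in combinations(range(len(xs)), 2):
        feats[f"x{i}==x{j}"] = int(xs[i] == xs[j])
        feats[f"x{i}<x{j}"] = int(xs[i] < xs[j])
        feats[f"x{i}>x{j}"] = int(xs[i] > xs[j])
    return feats

# With n raw inputs there are n*(n-1)/2 pairs and 3 features per pair,
# so the synthesized feature count grows quadratically:
#   n = 10  ->   45 pairs,   135 features
#   n = 100 -> 4950 pairs, 14850 features
```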
Are there other approaches to this problem?
Thanks!