Neural networks and dealing with ratios of inputs

I found the courses very helpful, but I have a conceptual question. Typically, if there are inputs A and B, and let's say we are modeling a neural network to capture some linear/polynomial function of them (say the output is A^2 + 5B + A*B), we know that, given sufficient training, the network will do a very good job of finding the relation between the inputs and the output.

What about when the output is a function of the ratio of the inputs, i.e. Output = A / B? How would the neural network capture that relationship? Are you saying it would use a Taylor expansion and find coefficients for it?

The network does not use Taylor expansions. It uses linear (affine) transformations followed by non-linear activation functions, stacked in layers, to construct a complex function. We then use backpropagation to learn coefficients for that function that allow it to approximate the patterns present in the training data. Many different patterns can potentially be learned. Of course, you need to retrain if the data changes.
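To make the "affine → non-linearity → affine, trained with backpropagation" idea concrete, here is a minimal numpy sketch (not from the course; the layer size, learning rate, and data ranges are arbitrary choices for illustration) that fits a one-hidden-layer ReLU network to the polynomial target from the question, y = A^2 + 5B + A*B:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data for the target in the question: y = A^2 + 5B + A*B
X = rng.uniform(-1, 1, size=(256, 2))
y = (X[:, 0]**2 + 5 * X[:, 1] + X[:, 0] * X[:, 1]).reshape(-1, 1)

# One hidden layer: affine -> ReLU -> affine (sizes chosen arbitrarily)
H = 32
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(X):
    z = X @ W1 + b1           # first affine map
    a = np.maximum(z, 0)      # ReLU non-linearity
    return a, z, a @ W2 + b2  # second affine map produces the output

lr = 0.05
losses = []
for step in range(500):
    a, z, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err**2)))
    # Backpropagation: chain rule back through both affine maps and the ReLU
    g2 = 2 * err / len(X)
    dW2 = a.T @ g2; db2 = g2.sum(0)
    g1 = (g2 @ W2.T) * (z > 0)
    dW1 = X.T @ g1; db1 = g1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

The training loss drops steadily: the network never "knows" the formula, it just bends its piecewise-linear surface toward whatever pattern is in the training data, which is why retraining is needed when the data changes.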

Hello @Akshay_N

Division is not trivial.

Just think about the simpler function 1/x: how would you use ReLUs to resemble a curve like 1/x, especially around x = 0?
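A quick way to see the difficulty: a ReLU network is piecewise linear, so on any interval its output stays bounded, while 1/x blows up as x approaches 0. This sketch (an arbitrary untrained net, just to illustrate the bounded-vs-unbounded contrast) compares the two on a grid near zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed (untrained) one-hidden-layer ReLU net on a scalar input.
# Whatever the weights are, the output is a piecewise-linear function.
W1 = rng.normal(size=(1, 64)); b1 = rng.normal(size=64)
W2 = 0.1 * rng.normal(size=(64, 1)); b2 = rng.normal(size=1)

def relu_net(x):
    h = np.maximum(x.reshape(-1, 1) @ W1 + b1, 0)
    return (h @ W2 + b2).ravel()

x = np.linspace(1e-3, 1.0, 10_000)
net_max = np.abs(relu_net(x)).max()  # bounded on the interval
inv_max = (1.0 / x).max()            # 1/x grows without bound as x -> 0
```

Training can push the kinks of the piecewise-linear surface toward x = 0, but near the pole it takes ever more pieces to keep up, and at x = 0 itself there is nothing sensible to fit.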

However, log can change division into subtraction, which may be a useful workaround, but you will need to think about how to use it :wink: .
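One way the hint can be used (assuming both inputs are strictly positive, since log requires that): feed the network log A and log B instead of A and B. Then log(A/B) = log A - log B, which a single affine layer can represent exactly with weights +1 and -1:

```python
import numpy as np

# Strictly positive inputs -- log is undefined otherwise
A = np.array([2.0, 10.0, 0.5])
B = np.array([4.0, 2.0, 0.25])

# In log space, division becomes subtraction, i.e. an affine
# combination of the log features with weights +1 and -1.
log_ratio = np.log(A) - np.log(B)

# Exponentiating recovers the ratio itself
ratio = np.exp(log_ratio)
```

So the "hard" division collapses into the kind of linear relationship the first reply described, which is exactly why this feature transformation is a useful workaround.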