In the house price prediction example, we did manual feature engineering: we had to look at the features width and depth and decide by hand how to combine them to construct a more complex feature, width times depth, which was the size of the lawn. So, does the neural network engineer the feature width times depth on its own?
What a NN does is learn the mapping from the training samples' input features to the corresponding output labels as well as it can, so there is no guarantee that any neuron in a trained NN will output width times depth exactly; in fact it is very unlikely. Therefore, good manually engineered features can still help the model perform better on a problem like this one.
What the NN does do is automatically form many new non-linear combinations of the input features, which can approximate useful features like the product. This is why the hidden layer and its activation function are so important, and it is what reduces the need for doing your own feature engineering.
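A minimal sketch of this idea, assuming NumPy and made-up toy data: a tiny one-hidden-layer network with a tanh activation is trained on (width, depth) pairs to predict the product width × depth. No neuron computes the product exactly; the network only approximates it through learned non-linear combinations, which the falling training loss makes visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): widths and depths in [0, 1];
# the "engineered" target is their product, the lawn size.
X = rng.uniform(0.0, 1.0, size=(256, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# One hidden layer with tanh: the network can only approximate
# width*depth via non-linear combinations of its inputs.
H = 16
W1 = rng.normal(0, 0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, size=(H, 1)); b2 = np.zeros(1)

losses = []
lr = 0.1
for step in range(2000):
    # Forward pass.
    a = np.tanh(X @ W1 + b1)
    pred = a @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))

    # Backpropagation for mean-squared-error loss.
    grad_pred = 2 * err / len(X)
    gW2 = a.T @ grad_pred; gb2 = grad_pred.sum(0)
    grad_a = grad_pred @ W2.T
    grad_z = grad_a * (1 - a ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ grad_z; gb1 = grad_z.sum(0)

    # Gradient-descent updates.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The loss drops far below what predicting a constant could achieve, showing the hidden layer has learned a non-linear combination that approximates the hand-engineered product feature, even though no single weight or neuron equals it.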