Feature Engineering in Neural Networks!

While explaining neural networks in one of the classes, Andrew sir mentions:
> A remarkable thing about the neural network is you can learn these feature detectors at the different hidden layers all by itself. In this example, no one ever told it to look for short little edges in the first layer, and eyes and noses and face parts in the second layer and then more complete face shapes at the third layer.

I have a few doubts and am curious about a few points regarding this nature of neural networks:

  1. Are there any scenarios where we would have to do manual feature engineering while building neural network based models? I’m curious to know about any examples or use cases.
  2. In the same video, a few slides show how the neurons learn features on their own for an image recognition use case. For images it is possible to identify the features learnt by the model through some visualization, but I guess it is not the same with other types of data like text. Is this what really makes a neural network a BLACK BOX? :slight_smile:

If by manual feature engineering you mean deciding what each layer is meant to focus on, the answer is no! If you mean creating training examples for the model to train on, the answer is yes.

For the second question: yes, for other types of neural networks it is not easy to visualize what has been learned. In transformers, inspecting the attention matrices may offer some insight, but generally speaking it is hard to do this well.
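To illustrate why image models are the easy case: each first-layer unit has one weight per input pixel, so its weight vector can be reshaped back into an image and looked at directly. The sketch below uses a made-up 8x8 input size and random weights standing in for a trained layer; it only shows the reshaping idea, not real learned filters.

```python
import numpy as np

# Hypothetical setup: an 8x8 grayscale input flattened to 64 features,
# feeding a first hidden layer of 4 units. The weights here are random
# stand-ins for whatever training would have produced.
rng = np.random.default_rng(1)
input_side = 8
n_hidden = 4
W1 = rng.normal(size=(input_side * input_side, n_hidden))

# Each column of W1 is one hidden unit's weight vector; reshaping it
# recovers an 8x8 "image" of what that unit responds to.
filters = [W1[:, j].reshape(input_side, input_side) for j in range(n_hidden)]
print(len(filters), filters[0].shape)
```

With matplotlib you could then call `imshow` on each filter to see edge-like patterns in a real trained network. For text there is no pixel grid to reshape back into, which is one reason those learned features are harder to inspect.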

If you are referring to creating new features through combinations of existing features (i.e. like “polynomial regression”), then no, you don’t need to do that with a neural network.

The non-linear activation function in the hidden layer automatically takes care of creating non-linear relationships between the features.
