Polynomial Feature as Hidden Unit Neural Network

In one of the videos, they tell us that W (the weight matrix) of a layer in a neural network has shape number_of_features x number_of_units. For example, if we have 2 layers (Layer 1 with 3 units, Layer 2 with 1 output unit) and 2 input features, then Layer 1 has 2 x 3 = 6 weights. That seems to mean we can't apply a polynomial calculation inside a unit.

My question is: can we apply a polynomial calculation inside a unit of a NN? Should or shouldn't we do that, and why? If you don't mind, please explain the reasoning behind your answer.
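To make the shapes concrete, here is a minimal Python sketch (all weight values are made up) of the 2-feature, 3-unit layer from the question, plus one common way to get polynomial behavior without changing the unit itself: engineer polynomial terms as extra input features.

```python
# Hypothetical tiny network matching the question: 2 input features,
# Layer 1 with 3 units. W1 has shape (n_features, n_units) -> 2 x 3 = 6 weights.
W1 = [[0.1, -0.2, 0.3],
      [0.4, 0.5, -0.6]]
b1 = [0.0, 0.0, 0.0]

def dense(x, W, b, activation):
    """One dense layer: each unit computes activation(w . x + b)."""
    return [activation(sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j])
            for j in range(len(b))]

print(dense([1.0, 2.0], W1, b1, lambda z: z))  # 3 unit outputs

# A "polynomial calculation" can be done by engineering the *inputs*:
# feed x1, x2, x1^2, x2^2, x1*x2 as 5 features instead of 2.
# Then W1 would simply have shape 5 x 3; the units themselves are unchanged.
def poly_features(x1, x2):
    return [x1, x2, x1 * x1, x2 * x2, x1 * x2]

print(len(poly_features(1.0, 2.0)))  # 5
```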

Hi @malvinpatrick
Welcome to the community!

In a neural network, we use nonlinear activation functions, which let the network model functions even more complex than polynomials and extract richer information from the data.
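As a toy illustration of why the nonlinear activation matters (made-up weights, 1-D input): without it, stacked layers collapse into a single linear map, while inserting ReLU between them produces a genuinely nonlinear function.

```python
def relu(z):
    return max(0.0, z)

# Weights for a 1-input -> 2-hidden-unit -> 1-output network (values made up).
w1, b1 = [1.0, -1.0], [0.0, 0.0]
w2, b2 = [1.0, 1.0], 0.0

def net(x, activation):
    hidden = [activation(w * x + b) for w, b in zip(w1, b1)]
    return sum(h * w for h, w in zip(hidden, w2)) + b2

# With the identity activation, this network is linear and collapses to
# net(x) = 0 for every x. With ReLU, it computes |x| instead, which no
# single linear layer can represent.
print(net(3.0, lambda z: z))   # 0.0
print(net(3.0, relu))          # 3.0
print(net(-3.0, relu))         # 3.0
```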

Best Regards,

Ohhh okay. In the neural network video, the output layer can be set to a "linear" activation function. Does that mean I can use a neural network for complex predictive analysis instead of using linear regression?

Hi @malvinpatrick

Yes, exactly! You can use neural networks (NNs) to solve highly nonlinear problems.
In week three, you will take a look at transfer learning: using data from a different task, which also illustrates this characteristic of NNs to model complex patterns (benefitting from prior knowledge incorporated during pre-training).

Also, regarding why NNs can model nonlinearity so well, these threads might be worth a look:

In conclusion: NNs, especially with advanced architectures, are very capable of learning very abstract and complex patterns in a highly scalable way, in particular when it comes to unstructured big data with lots of labels.
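For instance, here is a self-contained sketch (a hypothetical toy setup in pure Python) of exactly the configuration asked about above: a regression network with a ReLU hidden layer and a linear output unit, trained by plain gradient descent to fit the nonlinear target y = x^2, which linear regression on x alone cannot fit.

```python
import random

random.seed(0)

# Toy regression data: y = x^2 on a grid in [-2, 2] (a nonlinear target).
X = [i / 10 for i in range(-20, 21)]
Y = [x * x for x in X]

H = 8  # hidden units (arbitrary choice)
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """ReLU hidden layer, then a *linear* output unit (standard for regression)."""
    h = [max(0.0, w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

lr = 0.01
loss_before = mse()
for _ in range(2000):
    # Accumulate full-batch gradients of the mean squared error.
    g_w1 = [0.0] * H; g_b1 = [0.0] * H; g_w2 = [0.0] * H; g_b2 = 0.0
    for x, y in zip(X, Y):
        h, pred = forward(x)
        d = 2 * (pred - y) / len(X)
        for j in range(H):
            g_w2[j] += d * h[j]
            if h[j] > 0:                     # ReLU gradient is 1 only when active
                g_w1[j] += d * w2[j] * x
                g_b1[j] += d * w2[j]
        g_b2 += d
    for j in range(H):
        w1[j] -= lr * g_w1[j]; b1[j] -= lr * g_b1[j]; w2[j] -= lr * g_w2[j]
    b2 -= lr * g_b2

print(loss_before, mse())  # the loss drops as the net bends to fit x^2
```

A plain linear model on the raw input x would be stuck near the mean of Y here, since y = x^2 is symmetric; the ReLU hidden layer is what lets the network bend.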

Hope that helps!

Best regards