Week 1, "Neural network layer" of "Neural network model", Advanced Learning Algorithms


Are neurons in hidden layers normally logistic regression neurons or not? In this video Andrew gives an example of a logistic regression neuron in the hidden layer, but I am thinking that neurons in hidden layers are normally not logistic regression units. For example, in the house price example, the square footage, which equals width * length, might be the expected activation of a hidden-layer neuron; the calculation there is just a multiplication rather than logistic regression. For other activations like affordability or market awareness, the expected activation might be a number from 1 to 100, not necessarily a probability between 0 and 1. Am I correct to understand the hidden layer this way?
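
To make the question concrete, here is a minimal NumPy sketch of how a single hidden unit is computed in the course's framing (the feature values, weights, and bias below are made-up numbers, not from the lecture): every unit applies a = g(w·x + b), so with a sigmoid activation it has exactly the form of a logistic regression unit, and it can only approximate a quantity like width * length rather than compute the multiplication literally.

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) activation."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical input features for one house: [width, length, bedrooms]
x = np.array([8.0, 10.0, 3.0])

# Made-up weights and bias for one hidden unit (e.g. a "size" feature)
w = np.array([0.05, 0.04, 0.1])
b = -0.5

z = np.dot(w, x) + b   # same linear step as logistic regression
a = sigmoid(z)         # sigmoid keeps this unit's activation in (0, 1)
print(a)               # the unit learns a "size"-like feature; it never literally multiplies width * length
```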

The key issue is that the hidden layer activation must be a non-linear function. So it could be sigmoid, tanh, or ReLU.
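
As a quick illustration of that point, here is a minimal NumPy sketch of the three activations mentioned above (the z values are arbitrary examples): sigmoid squashes its input into (0, 1), tanh into (-1, 1), and ReLU is unbounded above, so hidden-layer activations are not, in general, probabilities.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # output in (0, 1)

def tanh(z):
    return np.tanh(z)                 # output in (-1, 1)

def relu(z):
    return np.maximum(0.0, z)         # output in [0, inf)

z = np.array([-2.0, 0.0, 3.5])        # arbitrary pre-activation values
print(sigmoid(z))  # [0.119..., 0.5, 0.970...]
print(tanh(z))     # [-0.964..., 0.0, 0.998...]
print(relu(z))     # [0.0, 0.0, 3.5] -- not restricted to (0, 1)
```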

Thank you Tmosh. Very helpful input.