If ReLU is used in the hidden layers, the output of each such layer will always be non-negative. How can that be correct for regression models?
If the hidden layers output only zeros and positive numbers, how can the final output layer, whose input is only non-negative, produce a correct (possibly negative) answer?
Hey @Aishwarya_Mundley,
A quick response: you can have negative weights in the output layer, so the output can still be negative.
The incoming activation values from the last hidden layer are non-negative, but the weights in the output layer can be negative.
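To make this concrete, here is a minimal NumPy sketch (the input and weight values below are made up purely for illustration): the ReLU hidden layer produces only non-negative activations, yet the linear output layer still yields a negative prediction because its weights can be negative.

```python
import numpy as np

# Toy forward pass: one ReLU hidden layer, then a linear output layer.
x = np.array([1.0, -2.0])                  # hypothetical input

W1 = np.array([[0.5, -1.0], [1.5, 0.3]])   # hidden-layer weights (made up)
b1 = np.array([0.1, -0.2])
a1 = np.maximum(0.0, W1 @ x + b1)          # ReLU -> every entry of a1 is >= 0

W2 = np.array([[-2.0, 0.4]])               # output weights can be negative
b2 = np.array([-0.5])
y_hat = W2 @ a1 + b2                       # linear output, no activation

print(a1)      # [2.6  0.7]  -> non-negative activations
print(y_hat)   # [-5.42]     -> negative prediction despite a1 >= 0
```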
I have to go now. If you have any follow-up questions, I am sure other mentors who have time can answer them.
Cheers,
Raymond
As Raymond explained, the last hidden layer's output (which is zero or positive) feeds into the output layer, and the output layer can then produce negative numbers as well, because it uses a different activation function (typically linear for regression) and its weights can be negative.
Best,
Saif.
@Aishwarya_Mundley, that is also why we should generally avoid using ReLU in the output layer when the target can be negative, as in many regression problems.
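For example, here is a minimal Keras sketch of such a regression model (the layer size and input dimension are arbitrary choices for illustration): ReLU is fine in the hidden layer, while the output layer is left with its default linear activation so predictions can take any real value.

```python
import tensorflow as tf

# Regression model: ReLU in the hidden layer is fine; the output layer
# keeps the default linear activation so predictions can be negative.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1)  # linear output; avoid activation='relu' here
])
model.compile(optimizer='adam', loss='mse')
```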