Why do we need an activation function? | ReLU activation

For positive values, ReLU is identical to the linear (identity) function.
So, in cases where all of our input features are positive, how can a neural network still produce non-linear decision boundaries?

If you normalize the features (for example, by subtracting the mean), you will end up with both positive and negative values.
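
For example, here is a minimal sketch (with made-up numbers, assuming simple z-score standardization) of how strictly positive features end up with mixed signs after normalization:

```python
import numpy as np

# Made-up data: every feature value is positive before normalization.
X = np.array([[1.0, 50.0],
              [2.0, 60.0],
              [3.0, 70.0]])

# Z-score standardization: subtract the mean, divide by the standard deviation.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std)  # values above the column mean are positive, values below it are negative
```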


@ankitprashnani

A neural network can provide non-linear decision boundaries by combining the outputs of the neurons in every hidden layer.
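
For example, here is a toy sketch (the weights and biases are hand-picked for illustration, not taken from any course code) of how combining two ReLU units gives a piecewise-linear, i.e. non-linear, output even when every input is positive:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def tiny_network(x):
    # Hidden layer: negative biases push the pre-activations below zero
    # for small x, so each ReLU unit switches on at a different point.
    h1 = relu(1.0 * x - 1.0)   # kink at x = 1
    h2 = relu(1.0 * x - 2.0)   # kink at x = 2
    # Output layer: combining the units (with a negative weight) bends the line.
    return 1.0 * h1 - 2.0 * h2

xs = np.linspace(0.0, 3.0, 7)
print(np.column_stack([xs, tiny_network(xs)]))  # slope changes at x = 1 and x = 2
```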


Some of your weights are going to be negative, though it depends on how you initialize them. Even if all of your initial weights and features are positive, after a few rounds of gradient descent some weights can still become negative. Once some weights are negative, the pre-activation values can drop below zero, which triggers ReLU's non-linearity.
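
Here is a quick sketch of that situation with made-up numbers:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Both input features are positive, but one learned weight has become negative,
# so the pre-activation z drops below zero and ReLU clips it to 0.
x = np.array([0.8, 1.5])   # positive features
w = np.array([0.4, -0.9])  # one weight turned negative during training
b = 0.1

z = np.dot(w, x) + b       # 0.32 - 1.35 + 0.1 = -0.93
print(z, relu(z))          # -0.93 -> 0.0: the ReLU is no longer acting linearly
```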

Raymond


Hi, @ankitprashnani!

Check this post about activation functions.
