Isn’t ReLU also a linear activation function? Then why should we use it if all it will do is pass the input values through linearly?
Welcome to the course!
ReLU is not a linear activation function. It is piecewise linear, but a piecewise-linear function is still non-linear: the kink at zero is exactly what breaks linearity.
That non-linearity is what lets a neural network learn non-linear decision boundaries. The purpose of a non-linear activation is to allow your network to learn non-linear/complex functions, as you will see further in the course. Without one, a stack of linear layers collapses into a single linear transformation. So no, you wouldn’t use ReLU if all you wanted was a linear map of the inputs; you use it precisely because it isn’t linear.
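If it helps to see this concretely, here is a small NumPy sketch (not part of the course code, just an illustration) showing that ReLU violates the additivity property of linear functions, and that stacking layers without any activation collapses into one linear map:

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x), applied element-wise
    return np.maximum(0, x)

# A linear function f must satisfy f(a + b) == f(a) + f(b). ReLU does not:
a, b = np.array([1.0]), np.array([-2.0])
print(relu(a + b))        # [0.]  (ReLU of -1)
print(relu(a) + relu(b))  # [1.]  -> not equal, so ReLU is not linear

# Stacking purely linear layers collapses to a single linear map:
W1 = np.random.randn(3, 4)
W2 = np.random.randn(4, 2)
x = np.random.randn(3)
no_activation = (x @ W1) @ W2  # identical to x @ (W1 @ W2): still linear
with_relu = relu(x @ W1) @ W2  # the kink at 0 breaks linearity
```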
I recommend checking out this post:
Why is ReLU non-linear?. ReLU doesn’t look very non-linear, but… | by Maxim Lopin | Medium
Also, in the future, please search for your question first to make sure it hasn’t already been asked, to avoid unnecessary duplication.