Best output activation function for limited range cases

Hello,
In the course it was explained that if the output is non-negative we could use ReLU instead of a linear activation to improve performance. Following up on that, what about an output that is known to be bounded in a range, e.g. 0-255, or even with an offset: 23.0-741.36? Is there a way to choose an optimal output layer activation function?
Thanks

Hi @Riccardo_Micci ,

ReLU is a non-linear function that passes positive inputs through unchanged and outputs zero for any negative input. The formula is f(x) = max(x, 0). It is simple and efficient.
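As a minimal sketch of that formula in NumPy (the function name here is just illustrative):

```python
import numpy as np

def relu(x):
    # f(x) = max(x, 0): negatives become 0, positives pass through unchanged
    return np.maximum(x, 0)

print(relu(np.array([-2.0, 0.0, 3.5])))  # [0.  0.  3.5]
```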

Hi Kic,
Thanks for the quick reply. This doesn't address my question, though. I appreciate how the non-linearity of ReLU can help with non-negative problems. My question is whether there is a way to exploit the knowledge of the output range even further.

You can normalize the features first.

Hi Riccardo,

Thanks for the question. You mentioned that ReLU is good for non-negative outputs, which is what both of your examples are, so ReLU works for both. Having or not having an offset shouldn't be a problem, because your last layer has a trainable bias to take care of that.

On the other hand, activation functions like tanh and sigmoid won't work directly, because their output ranges are fixed ((-1, 1) for tanh and (0, 1) for sigmoid).
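As a hedged illustration of that setup, here is a minimal Keras-style sketch (the input size, hidden layer width, and MSE loss are assumptions for the example, not from the course):

```python
import tensorflow as tf

# Regression model for a non-negative target (e.g. 0-255 or 23.0-741.36).
# The ReLU output unit keeps predictions non-negative, and its trainable
# bias can absorb any constant offset in the target range.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="relu"),  # non-negative output
])
model.compile(optimizer="adam", loss="mse")
```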

Hi,

The ReLU function, because of its linear behavior for positive inputs, fits the bill for the various ranges that you mentioned.

The best part of ReLU is that it has an open-ended linear region, which means it can cover whatever range is dictated by the output… which is not to say that it will remain entirely within the bounds. Adding a check on the output of the activation function, followed by a cap, will ensure it stays within the bounds at all times.
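One way to apply that cap, as a sketch (the tensor values and bounds below are just the example ranges from the question):

```python
import tensorflow as tf

# Hypothetical raw outputs from a ReLU output layer (all >= 0).
raw_outputs = tf.constant([[12.7], [301.4], [980.2]])

# Cap to the 0-255 range from the question.
bounded = tf.clip_by_value(raw_outputs, 0.0, 255.0)

# Cap to the offset range 23.0-741.36 from the question.
bounded_offset = tf.clip_by_value(raw_outputs, 23.0, 741.36)

print(bounded.numpy(), bounded_offset.numpy())
```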

Hi,
Thanks. Do you mean something like this:
[screenshot: ReLU activation settings with an "Out max" parameter]

Yes. Now you can set “Out max” as per your requirement to bound the output.
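If the screenshot refers to a Keras ReLU layer, the corresponding setting in code would be its max_value argument; a sketch assuming the 0-255 range from the question:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
    # "Out max": ReLU capped at 255 keeps the output inside 0-255
    tf.keras.layers.ReLU(max_value=255.0),
])
```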