In a layer, is it necessary to use the same activation functions?


All units in a layer will use the same activation.

Hello @Yiming_Sha,

TensorFlow only lets you set one activation function per layer. If you really want to experiment with more than one, an alternative is to run two layers in parallel, each with a different activation function, and concatenate their outputs into one combined layer.
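
For example, here is a minimal sketch of that idea with the Keras functional API (the layer sizes and variable names are just illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(16,))

# Two parallel Dense layers, each with its own activation
relu_branch = layers.Dense(8, activation="relu")(inputs)
sigmoid_branch = layers.Dense(8, activation="sigmoid")(inputs)

# Concatenating them behaves like one 16-unit layer with "mixed" activations
mixed = layers.Concatenate()([relu_branch, sigmoid_branch])

outputs = layers.Dense(1, activation="sigmoid")(mixed)
model = tf.keras.Model(inputs, outputs)
```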

Cheers,
Raymond


Thanks! For what purpose would we use different activation functions?

If you want to experiment with it, you can. There is no rule about when to use which activation, other than justifying your choice with experimental results. This is not about what other people tell you to do or not to do, but about what you want to try and how you measure the outcome.

Raymond

You will learn this as you progress through the course.

Hi @Yiming_Sha,

As far as I know, in most cases there is no need to use different activation functions within a layer. For example, sigmoid and ReLU often give similar results in hidden layers, but ReLU is computationally cheaper, so why not use it for all the hidden units?
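
As a rough sketch of that usual setup (the layer sizes are arbitrary), every hidden layer uses ReLU and only the output layer uses sigmoid:

```python
import tensorflow as tf

# Each Dense layer takes a single `activation` argument that applies to all of its units
model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation="relu"),    # hidden layer 1: all units use ReLU
    tf.keras.layers.Dense(15, activation="relu"),    # hidden layer 2: all units use ReLU
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer for binary classification
])
```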

Cheers,
Amir

ReLU outputs zero for every negative input, so in that region a unit contributes nothing and receives no gradient; this can make training inefficient and leads to the “dead node” problem.

So you need a lot more ReLU units to do the same job that a sigmoid unit could do.
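
To make that concrete, here is a small NumPy sketch (illustrative only) comparing the gradients of the two activations for negative inputs:

```python
import numpy as np

z = np.array([-4.0, -1.0, 0.5, 3.0])

# Derivative of ReLU: exactly 0 for z < 0, so a unit stuck there gets no gradient ("dead node")
relu_grad = (z > 0).astype(float)

# Derivative of sigmoid: small for large |z|, but never exactly zero
sig = 1.0 / (1.0 + np.exp(-z))
sigmoid_grad = sig * (1.0 - sig)

print(relu_grad)     # [0. 0. 1. 1.]
print(sigmoid_grad)  # approx [0.0177 0.1966 0.2350 0.0452]
```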