How is the bias neuron linear?

I read somewhere that the bias neuron is linear. How can it be linear if we apply a non-linear activation function to the bias neuron as well,
like \sigma(wx + b)?

Need clarification on this.


At each layer of a feed-forward neural network, there are two steps:

  1. The linear (well, really "affine") transformation W \cdot X + b.
  2. The non-linear activation function, which is applied to the output of step 1.

So b is part of the linear step, but then the non-linear activation is applied. The complete function computed by each layer is therefore non-linear: it is the composition of a linear function and a non-linear function.
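To make the two steps concrete, here is a minimal NumPy sketch of one layer's forward pass. The function and variable names are my own, not from the course, and I use sigmoid as the activation just as an example:

```python
import numpy as np

def layer_forward(W, X, b):
    # Step 1: the linear (affine) transformation -- the bias b lives here.
    Z = W @ X + b
    # Step 2: the non-linear activation (sigmoid in this sketch),
    # applied to the output of step 1, including the bias contribution.
    A = 1.0 / (1.0 + np.exp(-Z))
    return A

# Tiny example: 2 inputs -> 3 units, one sample (column vector).
X = np.array([[0.5], [-1.0]])
W = np.zeros((3, 2))               # zero weights, so output depends only on b
b = np.array([[0.0], [1.0], [-1.0]])
A = layer_forward(W, X, b)
```

With W set to zero, the layer output is just sigmoid(b), which shows directly that the bias enters in the linear step and then passes through the non-linearity like everything else.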

If that doesn’t answer your question, please provide a reference to the original statement you are asking about. Where did you see the claim that “the bias is linear”?


Actually, it was from a quiz in my class, but I’m still quite confused about this.


For the second question, I explained above why the bias is part of the “linear” step in the two-step layer process.

I don’t understand the first question either. The bias value gets added in the linear step, and then that output value is fed through whatever the activation function is for the layer in question. To know what that function is, you’d need more context. I’ll go take a look at the C1 W1 quiz again and let you know if I can come up with a better explanation than that.


I just took the DLS C1 W1 quiz the three times that are allowed within 8 hours (or whatever the time limit is), and I did not see either of the questions you listed. Are you sure you’re not referring to some other course?

Update: I checked the DLS C1 W2 quiz as well and did not see anything even vaguely resembling your questions, but I only tried that one once.


I don’t recall “bias neuron” as a concept used in the DLS courses.