Adjusting weights and biases in ReLU lab

In the ReLU lab in week 2, we are required to find values of w1, b1, w2, b2 such that the output fits the target plot. Do we adjust the values by intuition, or is there a specific mathematical procedure for doing so?
If there is a mathematical procedure, can anyone explain it?

No mathematical procedure is required to set the parameter values in this lab. Just notice where each unit's segment starts in the target plot and adjust the values accordingly.
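
For concreteness, here is a minimal sketch of that reasoning (not the lab's exact network; the weights and target shape are made up for illustration). A unit relu(w*x + b) with w > 0 outputs zero until x = -b/w and then rises with slope w, so each unit's parameters can be read off the plot:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Hypothetical target: slope +1 starting at x = 0, flattening out at x = 1.
# Each unit relu(w*x + b) switches on at x = -b/w and adds slope w after that.
w1, b1 = 1.0, 0.0     # segment starts at x = -b1/w1 = 0 with slope +1
w2, b2 = 1.0, -1.0    # segment starts at x = -b2/w2 = 1 with slope +1

x = np.linspace(0, 2, 5)
y = relu(w1 * x + b1) - relu(w2 * x + b2)  # subtracting cancels the slope past x = 1
print(np.round(y, 2))  # [0.  0.5 1.  1.  1. ] -- rises to 1, then stays flat
```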

What is the use of the non-linear behaviour of the ReLU activation? I did not get its main motive.

The non-linear behavior of ReLU in the hidden layers helps neural networks learn complex decision boundaries that are not linearly separable or are otherwise difficult to fit.

If you examine its formula, ReLU(z) = max(0, z), it chops off negative values while preserving positive values. This is what introduces the non-linearity.
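
A quick way to see why that matters: without a non-linearity, stacking layers gains nothing, because a composition of linear maps is still a single linear map. A small sketch with hand-picked, purely illustrative weights:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Two stacked layers with illustrative values (not from the lab).
W1, b1 = np.array([[1.0], [-1.0], [2.0]]), np.array([0.0, 0.5, -1.0])
W2, b2 = np.array([[1.0, 1.0, 1.0]]), np.array([0.0])

def two_layer(x, activation):
    return (W2 @ activation(W1 @ x + b1) + b2)[0]

xs = [np.array([-1.0]), np.array([0.0]), np.array([1.0])]
# With the identity "activation" the layers collapse into one linear map,
# so equally spaced inputs give equally spaced outputs:
print([two_layer(x, lambda z: z) for x in xs])  # [-2.5, -0.5, 1.5]
# With ReLU, negative pre-activations are clipped to zero, so the
# composed function bends and is no longer linear in x:
print([two_layer(x, relu) for x in xs])         # [1.5, 0.5, 2.0]
```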


The non-linear behaviour of ReLU lets the network piece together the outputs of several neurons, because it controls when each neuron is active and when it is inactive.
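
As a rough illustration (hypothetical units, not the lab's values), you can check which neurons are active at each input and see how different units shape different parts of the curve:

```python
import numpy as np

# Three units relu(w*x + b); each one switches on at x = -b/w.
ws = np.array([1.0, 1.0, 1.0])
bs = np.array([0.0, -1.0, -2.0])   # units activate at x = 0, 1, 2

for x in [0.5, 1.5, 2.5]:
    active = (ws * x + bs) > 0     # which neurons contribute at this x
    print(x, active)
# 0.5 [ True False False]  -> only unit 1 shapes the curve here
# 1.5 [ True  True False]  -> unit 2 joins in, changing the slope
# 2.5 [ True  True  True]  -> all three units are active
```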
