Confused on `C2_W2_Relu`

In the C2_W2_Relu lab I'm confused about what is shown in the individual neuron graphs on the right, and how that matches up to the piece-wise linear function in the graph on the left:

  1. Is Unit 1 handling x values from [1, …] and Unit 2 handling x values from [2, …]? In the graph on the left, it looks like the second segment corresponds to inputs from [1, 2] and the third segment corresponds to inputs from [2, 3]; but in the individual unit graph for Unit 1, the ReLU output is positive on [1, 3].
  2. I also didn’t quite understand this sentence:

"The slope of the unit, 𝑤[1]2, must be set so that the sum of unit 1 and 2 have the desired slope. The bias is again adjusted to keep the output negative until x has reached 2. "

Yes!

That’s because Unit 1 doesn’t stop contributing at x = 2 — its ReLU output stays positive for all x > 1. What changes at x = 2 is that Unit 2 switches on as well, and from that point the slopes of the two units add together. Unit 2’s weight is therefore chosen so that (slope of Unit 1) + (slope of Unit 2) equals the desired slope of the third segment. So the segment boundaries in the left-hand graph mark where each new unit activates, not where earlier units stop.

The parameter w determines the ‘steepness’ or ‘slope’ of the line, while the parameter b ‘shifts’ the line up or down along the vertical axis. The bias is adjusted so that the unit’s pre-activation stays negative — and its ReLU output therefore stays zero — until x reaches the point where the unit should start contributing. You can try adjusting these parameters and observe the effects for yourself.
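To make the slope-summing concrete, here is a small sketch in NumPy. The weights and biases below are illustrative values I chose for the example, not the lab’s exact parameters: the target is slope 2 on [1, 2] and slope 4 on [2, 3], so Unit 2 must add slope 2 on top of Unit 1’s slope 2, and its bias delays its activation until x = 2.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Illustrative parameters (not the lab's exact values).
# Target: slope 2 on [1, 2], slope 4 on [2, 3].
w1, b1 = 2.0, -2.0   # Unit 1: pre-activation 2x - 2, positive for x > 1
# Unit 2 must ADD slope 2 so that 2 + 2 = 4 after x = 2.
# Its bias keeps the pre-activation negative (so ReLU outputs 0) until x = 2.
w2, b2 = 2.0, -4.0   # Unit 2: pre-activation 2x - 4, positive for x > 2

def network(x):
    # The output sums both units; on [1, 2] only Unit 1 is active,
    # on [2, 3] both are active and their slopes add.
    return relu(w1 * x + b1) + relu(w2 * x + b2)

print(network(1.0))  # Unit 1 just turning on
print(network(2.0))  # Unit 1 alone has contributed 2x - 2 = 2
print(network(3.0))  # (2*3 - 2) + (2*3 - 4) = 6
```

Note that Unit 1 stays active on all of [2, 3] too — the combined slope of 4 there comes from both units together, which is exactly what the quoted sentence means by “the sum of unit 1 and 2 have the desired slope.”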
