Some clarity on the ReLU activation function turning off an input?

In the optional lab (ReLU activation) the following is mentioned

> Unit 1 is responsible for the 2nd segment. Here the ReLU kept this unit quiet until after x is 1. Since the first unit is not contributing, the slope for unit 1, 𝑤[1]1, is just the slope of the target line. The bias must be adjusted to keep the output negative until x has reached 1. Note how the contribution of Unit 1 extends to the 3rd segment as well.
>
> Unit 2 is responsible for the 3rd segment. The ReLU again zeros the output until x reaches the right value. The slope of the unit, 𝑤[1]2, must be set so that the sum of unit 1 and 2 have the desired slope. The bias is again adjusted to keep the output negative until x has reached 2.
>
> The “off” or disable feature of the ReLU activation enables models to stitch together linear segments to model complex non-linear functions.
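To make the "stitching" idea concrete, here is a minimal sketch (with made-up weights and biases, not the lab's actual values) of a sum of ReLU units where each unit stays at zero until x passes its "knee", then adds its own slope on top of the units already active:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Hypothetical weights/biases for three hidden units, one per segment.
# Each bias delays the unit's activation until x = 0, 1, 2 respectively.
w = np.array([1.0, 1.0, 1.0])    # slope each unit contributes once active
b = np.array([0.0, -1.0, -2.0])  # negative bias keeps w*x + b < 0 until the knee

def network(x):
    # Sum of ReLU units: each unit adds its slope only after its knee.
    return sum(relu(wi * x + bi) for wi, bi in zip(w, b))

print(network(0.5))  # 0.5  -> only unit 0 is active
print(network(1.5))  # 2.0  -> units 0 and 1 active (1.5 + 0.5)
print(network(2.5))  # 4.5  -> all three active (2.5 + 1.5 + 0.5)
```

Before x reaches a unit's knee, its ReLU output is exactly zero, so it has no effect on the curve; afterwards it contributes a straight line, which is why the combined function is piecewise linear.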

  1. What are `a2` and `target` here?
  2. How does one unit's output affect the next unit, and how is that shown in the segments?
  3. Is the slope of segment 3 a combined value, i.e. greater than or equal to the sum of the previous two slopes?
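Regarding question 3, a small numerical check (again with made-up weights, not the lab's, and including hypothetical output-layer weights c) suggests the slope of segment 3 is the *sum* of every active unit's contribution, and since contributions can be negative, that sum is not necessarily greater than the earlier slopes:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Made-up parameters. Hidden units turn on at x = 0, 1, 2; the output-layer
# weights c scale each unit's contribution and may be negative.
w = np.array([1.0, 1.0, 1.0])    # hidden-layer weights
b = np.array([0.0, -1.0, -2.0])  # biases place each unit's knee
c = np.array([1.0, 0.5, -0.75])  # output-layer weights (note the negative one)

def network(x):
    return sum(ci * relu(wi * x + bi) for wi, bi, ci in zip(w, b, c))

# For x > 2 all three units are active, so the slope is (c * w).sum().
slope_seg3 = (network(3.0) - network(2.5)) / 0.5
print(slope_seg3, (c * w).sum())  # 0.75 0.75 -> smaller than segment 2's 1.5
```

So the segment-3 slope combines all active units, but a negative contribution can make it *smaller* than the previous segment's slope.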

You might find this thread useful.

This example makes a lot more sense. The combination of w and b for each unit helps fit the curve to the training set; the bias is negative, but ReLU ensures the output is never negative even though the y-intercept would be.

The lab should definitely be rewritten, since it ends up confusing more than teaching.