Can someone explain why, in the DL Week 4 assignment, we need to replicate L − 1 copies of the previous activation step?

This is Assignment 1 of Week 4 in the first course.

> For even more convenience when implementing the L-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) L − 1 times.

I think the instructions are just worded in a confusing way. It's really not that complicated: in an L-layer network we have L − 1 "hidden" layers, which in our example use the ReLU activation function, followed by the output layer (layer L) that uses sigmoid as its activation, since we are doing binary classification here. We only need to implement the general function linear_activation_forward once, and then we invoke it L times in an L-layer network: L − 1 times with ReLU for the hidden layers and once more with sigmoid for the output layer. That is what happens in the L_model_forward function that we build here in the Week 4 Assignment 1 (see the sketch below). Then we put everything together into a complete network in the next assignment in Week 4.
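
Here is a minimal sketch of that structure (not the graded solution), assuming the notebook's conventions: a parameters dict with keys "W1", "b1", ..., "WL", "bL", inputs X of shape (n_x, m), and a linear_activation_forward helper that returns (A, cache). The cache is simplified here; the actual assignment stores more for backprop.

```python
import numpy as np

def linear_activation_forward(A_prev, W, b, activation):
    """LINEAR -> ACTIVATION for one layer. Returns the activation and a cache
    (kept minimal here; the assignment caches more pieces for backprop)."""
    Z = W @ A_prev + b
    if activation == "relu":
        A = np.maximum(0, Z)
    else:  # "sigmoid"
        A = 1 / (1 + np.exp(-Z))
    return A, (A_prev, W, b, Z)

def L_model_forward(X, parameters):
    """Forward pass: (L-1) x [LINEAR -> RELU] -> LINEAR -> SIGMOID."""
    caches = []
    A = X
    L = len(parameters) // 2  # parameters holds W1..WL and b1..bL

    # Hidden layers 1 .. L-1: the same helper, invoked with ReLU each time
    for l in range(1, L):
        A, cache = linear_activation_forward(A, parameters["W" + str(l)],
                                             parameters["b" + str(l)], "relu")
        caches.append(cache)

    # Output layer L: the same helper again, but with sigmoid
    AL, cache = linear_activation_forward(A, parameters["W" + str(L)],
                                          parameters["b" + str(L)], "sigmoid")
    caches.append(cache)
    return AL, caches

# Quick smoke test with a tiny 2-layer network (shapes are purely illustrative)
rng = np.random.default_rng(0)
params = {"W1": rng.standard_normal((4, 3)), "b1": np.zeros((4, 1)),
          "W2": rng.standard_normal((1, 4)), "b2": np.zeros((1, 1))}
AL, caches = L_model_forward(rng.standard_normal((3, 5)), params)
print(AL.shape)  # (1, 5): one sigmoid output per example
```

So "replicating" the previous step just means the for loop calls the same helper L − 1 times with ReLU, and the one extra call with sigmoid handles layer L; there is no new code to write per layer.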