L_model_forward: why floor division by 2?

I would like to understand why floor division by 2 is used to find the number of layers in the neural network:

```python
def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- activation value from the output (last) layer
    caches -- list of caches containing:
                every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
    """

    caches = []
    A = X
    L = len(parameters) // 2                  # number of layers in the neural network
```

The answer lies in understanding the structure of the parameters dictionary and what the call len(parameters) returns.
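
To make that concrete: len() on a Python dict counts its key/value pairs, so the question becomes how many entries initialize_parameters_deep() puts into parameters. A minimal illustration (the keys here are just placeholders):

```python
# len() on a dict returns the number of key/value pairs it holds
sample = {"W1": 0, "b1": 0, "W2": 0, "b2": 0}  # illustrative keys only
print(len(sample))  # 4
```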

PS: in general, you will get more targeted help if you create the topic within the Course/Week forum structure.

As ai_curious points out, this was created in a generic subforum. I moved this thread to DLS Course 1 for you.

The point is that there are 2 entries in the parameters dictionary for every layer, right? One for W^{[l]} and one for b^{[l]}. So a network with L layers gives a dictionary of 2L entries, and len(parameters) // 2 recovers L.
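
Here is a minimal sketch of what initialize_parameters_deep() produces, assuming a 2-layer network with made-up layer sizes (3 -> 4 -> 1; the sizes are illustrative, not from the assignment):

```python
import numpy as np

# Hypothetical parameters dict for a 2-layer network with layer sizes 3 -> 4 -> 1
parameters = {
    "W1": np.random.randn(4, 3) * 0.01,  # weights of layer 1
    "b1": np.zeros((4, 1)),              # biases of layer 1
    "W2": np.random.randn(1, 4) * 0.01,  # weights of layer 2
    "b2": np.zeros((1, 1)),              # biases of layer 2
}

print(len(parameters))       # 4: two entries (W and b) per layer
print(len(parameters) // 2)  # 2: the number of layers L
```

As for why it is // rather than /: in Python 3, plain division always returns a float, while floor division keeps L an int, which matters when L is later used as a loop bound (e.g. in a range() call).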