In the function initialize_parameters_deep we set L = len(layer_dims).
As I understand it, layer_dims also contains the dimension of the input layer.

Next, in the function L_model_forward, we see it takes parameters as input, and there we calculate L = len(parameters) // 2.

I am asking from a uniformity and coding-consistency perspective: wouldn't it be nicer if we always used layer_dims to calculate how many layers we have?
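To see why the two counts differ by one, here is a minimal sketch of the initialization step (assuming the standard "W1"/"b1" key naming from the assignment; the exact initialization scale is a placeholder):

```python
import numpy as np

def initialize_parameters_deep(layer_dims):
    # layer_dims includes the input layer, so the network actually has
    # len(layer_dims) - 1 layers of weights.
    np.random.seed(1)
    parameters = {}
    L = len(layer_dims)  # number of entries, counting the input layer
    for l in range(1, L):
        parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

layer_dims = [5, 4, 3]  # input layer + 2 weight layers
parameters = initialize_parameters_deep(layer_dims)

# Each layer contributes exactly one W and one b, hence the // 2:
print(len(parameters) // 2)  # 2
print(len(layer_dims) - 1)   # 2
```

So len(parameters) // 2 and len(layer_dims) - 1 agree; the two functions just count the same quantity from different inputs.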

I agree; the authors should refine the course and remove many inconsistencies. For the time being, we just need to keep guessing what they actually assume.

‘layer_dims’ and ‘parameters’ are inputs to two different functions, and a function can only work with what it is given. In the next lab, the neural network application, you will see how these two functions, initialize_parameters_deep() and L_model_forward(), are used, and why the layer count is extracted differently in each.
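A sketch of the point above: at the call site of L_model_forward, only the parameters dictionary is in scope, so the layer count has to be recovered from the W/b pairs. (The function body here is a simplified placeholder using ReLU everywhere, not the assignment's actual code, which uses a sigmoid output layer.)

```python
import numpy as np

def L_model_forward(X, parameters):
    # Only `parameters` is available here, not layer_dims, so the layer
    # count must be derived from it: one W and one b per layer -> // 2.
    L = len(parameters) // 2
    A = X
    for l in range(1, L + 1):
        W = parameters["W" + str(l)]
        b = parameters["b" + str(l)]
        A = np.maximum(0, W @ A + b)  # placeholder ReLU for every layer
    return A

# Build a toy parameter dict directly; note layer_dims never appears here.
params = {
    "W1": np.ones((4, 5)) * 0.01, "b1": np.zeros((4, 1)),
    "W2": np.ones((3, 4)) * 0.01, "b2": np.zeros((3, 1)),
}
out = L_model_forward(np.ones((5, 2)), params)
print(out.shape)  # (3, 2)
```

This is why deriving L from parameters inside the forward pass is not an inconsistency but a consequence of what each function receives.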