I understood that the previously defined propagate function only returns grads and cost. I believe I shouldn’t touch this function, since it was already approved in Exercise 5.
So how do I apply gradient descent in a single line, as suggested in the exercise (“params, grads, costs = …”)? What am I not understanding?

I managed to assign the output of the optimize function to the variables params, grads, and costs, but I was unsure about two things:

whether the data parameters of the function should come from the training set or the test set. I think it’s the training set, right?

for the other parameters, I was unsure whether they should follow the defaults of the original function (num_iterations=100, learning_rate=0.009) or the arguments of the model() function we are working on (num_iterations=2000, learning_rate=0.5).

I tried both iteration counts with both learning rates, but I always get the error: “AssertionError: Wrong values for d[‘w’].”

I was rewriting the initialize_with_zeros logic again (“w, b = np.zeros((X_train.shape[0],1)), 0”), but thanks for the tip! Now I parameterize the function with X_train.shape[0].
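For reference, a minimal sketch of what a parameterized initializer could look like (the function name matches the exercise, but the body is my own reconstruction, not the official solution):

```python
import numpy as np

def initialize_with_zeros(dim):
    """Return a zero weight vector of shape (dim, 1) and a zero bias."""
    w = np.zeros((dim, 1))
    b = 0.0
    return w, b

# Inside model(), dim comes from the training data:
# w, b = initialize_with_zeros(X_train.shape[0])
```

Passing the dimension as an argument keeps the function reusable for any data set, instead of hard-coding X_train inside it.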

My call to the optimize function now passes “(w, b, X_train, Y_train)”.

I believe you are quoting another part of the exercise:
" # Predict test/train set examples (≈ 2 lines of code)
# Y_prediction_test = …
# Y_prediction_train = …"

My difficulty is in this section:
" #(≈ 1 line of code)
# Gradient descent
# params, grads, costs = …"
where, as we are seeing, I need the optimize function, which asks for a data set (X) and a label set (Y), right?
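To make the question concrete, here is a self-contained sketch of that single line. The propagate/optimize bodies below are my own reconstruction of the exercise’s functions (not the official solution), and the toy X_train/Y_train arrays are made up. The two key points: optimize receives the training data, and the num_iterations/learning_rate passed into model(), not optimize’s defaults:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    # Forward pass: activations and cost; backward pass: gradients.
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    A_safe = np.clip(A, 1e-12, 1 - 1e-12)  # guard against log(0); the course version omits this
    cost = -np.mean(Y * np.log(A_safe) + (1 - Y) * np.log(1 - A_safe))
    grads = {"dw": np.dot(X, (A - Y).T) / m, "db": np.sum(A - Y) / m}
    return grads, cost

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)
        w = w - learning_rate * grads["dw"]  # the gradient-descent update
        b = b - learning_rate * grads["db"]
        if i % 100 == 0:
            costs.append(cost)
    return {"w": w, "b": b}, grads, costs

# Toy training data (made up): 2 features, 3 examples.
X_train = np.array([[0.0, 1.0, 2.0], [1.0, 2.0, 3.0]])
Y_train = np.array([[0, 0, 1]])

w, b = np.zeros((X_train.shape[0], 1)), 0.0

# The single line from model(): training data plus model()'s hyperparameters.
params, grads, costs = optimize(w, b, X_train, Y_train,
                                num_iterations=2000, learning_rate=0.5)
w, b = params["w"], params["b"]  # retrieve the trained parameters
```

The "one line" is just the optimize call; everything else is already defined in earlier exercises.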

Exercise 7 creates the predict() function. Your code there should compute the activation A and then threshold it to set the Y_prediction values.

The code there that uses np.zeros(…) only creates the initial Y_prediction array with the appropriate size. That isn’t how you compute the predictions.
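The shape of that function could look like the sketch below (my own reconstruction, not the official solution; the sigmoid helper and the 0.5 threshold are assumed from the assignment):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def predict(w, b, X):
    m = X.shape[1]
    Y_prediction = np.zeros((1, m))         # just a placeholder of the right size
    A = sigmoid(np.dot(w.T, X) + b)         # activations, shape (1, m)
    Y_prediction = (A > 0.5).astype(float)  # the actual predictions: threshold A at 0.5
    return Y_prediction
```

In model(), the same function is then called once per data set: Y_prediction_test = predict(w, b, X_test) and Y_prediction_train = predict(w, b, X_train).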

In Exercise 8 (the model() function), you should call the predict() function.