Help! An error comes from the source code that the course provided

In Assignment 2 of Week 2, I have completed many parts, but in one part there is an error in the code that was given to me. Please tell me what I can do about it.

What does the error say?

in optimize(w, b, X, Y, num_iterations, learning_rate, print_cost)
     40
     41     # Retrieve derivatives from grads
---> 42     dw = grads["dw"]
     43     db = grads["db"]
     44

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

grads, which is one of the return values of the propagate function, needs to be a dictionary! And the keys of the dictionary should be "dw" and "db".
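
For reference, here is a minimal sketch of that return contract. It is not the graded solution: propagate_sketch and the placeholder zeros are made up, and only the dictionary shape matters.

import numpy as np

def propagate_sketch(w, b, X, Y):
    # ... your forward and backward passes go here ...
    dw = np.zeros_like(w)             # placeholder: gradient w.r.t. w, same shape as w
    db = 0.0                          # placeholder: gradient w.r.t. b, a scalar
    cost = 0.0                        # placeholder: scalar cost
    grads = {"dw": dw, "db": db}      # a dictionary, not a NumPy array
    return grads, cost                # grads first, then the cost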

Also you need to use deepcopy for w and b if you are not doing so already!
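
The reason is that NumPy arrays are passed by reference, so an in-place update inside optimize would also change the caller's original w. A quick illustration (the array values are made up):

import copy
import numpy as np

w_caller = np.array([[1.0], [2.0]])

w = w_caller                       # no copy: both names point at one array
w -= 0.1                           # in-place update, as inside optimize
print(w_caller[0, 0])              # 0.9 -- the caller's array changed too

w_caller = np.array([[1.0], [2.0]])
w = copy.deepcopy(w_caller)        # independent copy
w -= 0.1
print(w_caller[0, 0])              # 1.0 -- the caller's array is untouched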

It is still not working.

Please share your full error.

IndexError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 params, grads, costs = optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)
      2
      3 print ("w = " + str(params["w"]))
      4 print ("b = " + str(params["b"]))
      5 print ("dw = " + str(grads["dw"]))

<ipython-input> in optimize(w, b, X, Y, num_iterations, learning_rate, print_cost)
     40
     41     # Retrieve derivatives from grads
---> 42     dw = grads["dw"]
     43     db = grads["db"]
     44

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
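
That exact IndexError is what NumPy raises when you index a plain array with a string, which suggests your propagate is returning a NumPy array rather than a dictionary. You can reproduce the message in isolation (the array values are made up):

import numpy as np

grads = np.array([[0.1], [0.2]])   # wrong type: an ndarray instead of a dict
try:
    dw = grads["dw"]               # string index into a plain NumPy array
except IndexError as err:
    print(err)                     # only integers, slices (`:`), ellipsis (`...`), ...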

I think you have changed some of the pre-written code. The original code is:

# GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    """
    This function optimizes w and b by running a gradient descent algorithm
    
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps
    
    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
    
    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """
    
    w = copy.deepcopy(w)
    b = copy.deepcopy(b)
    
    costs = []
    
    for i in range(num_iterations):
        # (≈ 1 lines of code)
        # Cost and gradient calculation 
        grads, cost = ...
        
        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]
        
        # update rule (≈ 2 lines of code)
        w = ...
        b = ...
        
        # Record the costs
        if i % 100 == 0:
            costs.append(cost)
        
            # Print the cost every 100 training iterations
            if print_cost:
                print ("Cost after iteration %i: %f" %(i, cost))
    
    params = {"w": w,
              "b": b}
    
    grads = {"dw": dw,
             "db": db}
    
    return params, grads, costs
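
If it helps to see the control flow end to end, below is a self-contained sketch of the same loop around a stand-in propagate. Everything here is made up for illustration (propagate_stub, optimize_sketch, the quadratic cost, and the numbers); the update is the standard gradient descent step w := w - learning_rate * dw.

import copy
import numpy as np

def propagate_stub(w, b, X, Y):
    # Stand-in for the assignment's propagate(): a made-up quadratic
    # cost with its minimum at w = 3, b = 1, so the sketch runs on its own.
    dw = 2 * (w - 3.0)                           # d(cost)/dw
    db = 2 * (b - 1.0)                           # d(cost)/db
    cost = float(np.sum((w - 3.0) ** 2) + (b - 1.0) ** 2)
    return {"dw": dw, "db": db}, cost            # a dict first, then the cost

def optimize_sketch(w, b, X, Y, num_iterations=100, learning_rate=0.009):
    w = copy.deepcopy(w)                         # keep the caller's arrays intact
    b = copy.deepcopy(b)
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate_stub(w, b, X, Y)
        dw = grads["dw"]                         # works because grads is a dict
        db = grads["db"]
        w = w - learning_rate * dw               # gradient descent step on w
        b = b - learning_rate * db               # gradient descent step on b
        if i % 100 == 0:
            costs.append(cost)
    params = {"w": w, "b": b}
    grads = {"dw": dw, "db": db}
    return params, grads, costs

params, grads, costs = optimize_sketch(np.zeros((2, 1)), 0.0, None, None,
                                       num_iterations=2000, learning_rate=0.1)
print(params["w"].ravel(), params["b"])          # approaches [3. 3.] and 1.0

The point is only the plumbing: propagate returns (grads_dict, cost), the loop reads grads["dw"] and grads["db"], and deepcopy keeps the caller's parameters intact.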