Problem: Deep Neural Network, course 1, week 4

Hello,

I am currently working on Deep Neural Network, exercise 5.
I’m really frustrated because, due to a function that was provided with the exercise (so not written by me), I can’t get the 70/100 needed to pass.

Specifically, I get this error message:
TypeError                                 Traceback (most recent call last)
in
----> 1 parameters, costs = L_layer_model(train_x, train_y, layers_dims, num_iterations = 1, print_cost = False)
      2
      3 print("Cost after first iteration: " + str(costs[0]))
      4
      5 L_layer_model_test(L_layer_model)

in L_layer_model(X, Y, layers_dims, learning_rate, num_iterations, print_cost)
     60     # parameters = ...
     61     # YOUR CODE STARTS HERE
---> 62     parameters=update_parameters(parameters, grads, learning_rate)
     63
     64     # YOUR CODE ENDS HERE

~/work/release/W4A2/dnn_app_utils_v3.py in update_parameters(parameters, grads, learning_rate)
    378     # Update rule for each parameter. Use a for loop.
    379     for l in range(L):
--> 380         parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
    381         parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
    382

TypeError: tuple indices must be integers or slices, not str

However, I did not write the update_parameters function; it is provided by the exercise.

I’m stuck and I don’t see how to fix it.

Here is my lab session code: cmxovzfu.

Thanks in advance for your help.

The mentors cannot see your assignment. It is a general principle of debugging that just because the error is thrown in code you didn’t write, that does not mean the bug isn’t yours: a perfectly correct subroutine can still throw errors if you pass it bad arguments. What this error means is that your code higher up the call stack is passing faulty values down to update_parameters. I suggest you examine your L_layer_model code with that thought in mind.

To go one level deeper in the analysis: it looks like either the parameters variable or the grads variable that you passed down to update_parameters is a Python tuple, when it should be a dictionary. One common way that happens is assigning the multiple return values of a function to a single variable: Python packs them all into one tuple. I don’t see an obvious way to fall into that trap in this particular case, though.
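Here is a minimal sketch of that trap, with made-up stub names, purely to show the mechanism:

def backward_stub():
    # hypothetical stand-in for any function that returns three values
    return "dA_prev_value", "dW_value", "db_value"

grads = backward_stub()   # all three return values get packed into a single tuple
print(type(grads))        # <class 'tuple'>
grads["dW1"]              # TypeError: tuple indices must be integers or slices, not str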

I put the following two print statements right before the call to update_parameters in my L_layer_model function:

print(f"type(parameters) {type(parameters)}")
print(f"type(grads) {type(grads)}")

Here’s what I get when I run the test cell with those prints in place:

type(parameters) <class 'dict'>
type(grads) <class 'dict'>
Cost after iteration 0: 0.7717493284237686
Cost after first iteration: 0.7717493284237686

What do you see when you try that?
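If either of those prints shows <class 'tuple'>, work backwards to the line that assigned that variable. Just as an illustration of another way to create an accidental tuple, a stray trailing comma on an assignment wraps the value in one (the stub name here is hypothetical):

def initialize_parameters_stub(layer_dims):
    # hypothetical stand-in for the notebook's initializer, which returns a dict
    return {"W1": 0, "b1": 0}

parameters = initialize_parameters_stub([12288, 20, 7, 5, 1]),   # note the stray trailing comma
print(type(parameters))   # <class 'tuple'> -- the comma turned this into a one-element tuple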