DLS Course 1 Week 4 exercise 9 L_model_backward

I don’t understand where the expected dA0 comes from.

The three code blocks I implemented were as follows.

First, initialize dAL

{moderator edit - solution code removed}

Then compute the gradients for the output layer L.

{moderator edit - solution code removed}

Then iterate through the previous layers.

{moderator edit - solution code removed}

I end up with the keys of grads being

dict_keys(['dA2', 'dW2', 'db2', 'dA1', 'dW1', 'db1'])

But the test is looking for

print("dA0 = " + str(grads['dA0']))
print("dA1 = " + str(grads['dA1']))
print("dW1 = " + str(grads['dW1']))
print("dW2 = " + str(grads['dW2']))
print("db1 = " + str(grads['db1']))
print("db2 = " + str(grads['db2']))

Note that the numeric indices of the dAx values are off by one from those of the dWx and dbx values.
I’m figuring that I’m doing something really stupid, but I can’t seem to see it.

Any help or hints?

In any given layer, when you call linear_activation_backward, the dA value it returns is the gradient for the *previous* layer, while the dW and db values are the gradients for the *current* layer. You have that assignment wrong on both the output layer and the hidden layers.

That’s why they call it dA_prev_temp, right?
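To make the convention concrete, here is a minimal, hypothetical sketch (not the graded solution — it uses a dummy `linear_activation_backward` that just returns arrays of the right shapes) showing how the returned `dA_prev_temp` gets stored under index `l-1` while `dW_temp` and `db_temp` get stored under index `l`:

```python
import numpy as np

# Dummy stand-in for the assignment's linear_activation_backward:
# it ignores the activation derivative and only produces gradients
# of the correct shapes, to illustrate the indexing convention.
def linear_activation_backward(dA, cache, activation):
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dZ = dA                                  # placeholder: pretend g'(Z) == 1
    dW = dZ @ A_prev.T / m                   # gradient for the CURRENT layer
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ                       # gradient for the PREVIOUS layer
    return dA_prev, dW, db

# Toy two-layer network: caches[l-1] holds (A_{l-1}, W_l, b_l) for layer l.
rng = np.random.default_rng(0)
n0, n1, n2, m = 3, 4, 1, 5
A0 = rng.standard_normal((n0, m)); W1 = rng.standard_normal((n1, n0)); b1 = np.zeros((n1, 1))
A1 = rng.standard_normal((n1, m)); W2 = rng.standard_normal((n2, n1)); b2 = np.zeros((n2, 1))
caches = [(A0, W1, b1), (A1, W2, b2)]

L = 2
grads = {}
dAL = rng.standard_normal((n2, m))

# Output layer L: dA_prev is for layer L-1, dW and db are for layer L.
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(dAL, caches[L - 1], "sigmoid")
grads["dA" + str(L - 1)] = dA_prev_temp     # dA1, NOT dA2
grads["dW" + str(L)] = dW_temp              # dW2
grads["db" + str(L)] = db_temp              # db2

# Hidden layers l = L-1, ..., 1: same convention, so the last pass stores dA0.
for l in reversed(range(1, L)):
    dA_prev_temp, dW_temp, db_temp = linear_activation_backward(
        grads["dA" + str(l)], caches[l - 1], "relu")
    grads["dA" + str(l - 1)] = dA_prev_temp  # dA0 when l == 1
    grads["dW" + str(l)] = dW_temp           # dW1
    grads["db" + str(l)] = db_temp           # db1

print(sorted(grads.keys()))
```

With this indexing, the keys come out as dA0, dA1, dW1, dW2, db1, db2, matching what the test expects.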

Thanks so much. I knew it had to be a boneheaded mistake, but just couldn’t see it.