W 4 Building deep NN step by step

I am having this error, can anyone please help me?
It says dA_prev is not defined

Hello Abhishek,

Welcome to the community.

Take a look at the implementation of linear_activation_backward for ‘relu’. See how you are pulling out the strings for dA. You are doing an extra layer at this stage.

Or compare the variables on the left hand side of the assignment operator in your line 41 (dA_prev_temp, dW_temp, db_temp) with the variables used on the right hand side in the following lines 42, 43, and 44 (dA_prev, dW_temp, db_temp). One of those things is not like the others. And it turns out that’s the one mentioned in the exception message.

I have checked, but it still shows the same error

Still can’t find a solution

In order for dA_prev to be assigned to something, it has to exist. Where was it created?

Yes, Abhishek.

ai_curious is right. dA_prev needs to be checked. I tried to point at this in my previous reply, which was very close to the answer. Check the strings that you are calling in the function and you will get it right :slight_smile:

Probably I am missing something, but I don’t understand either of your answers in this thread, nor do I believe they address the OP’s problem. As far as I can tell, nothing about strings or function calls is relevant.

None of dA_prev_temp, dW_temp, db_temp or dA_prev are strings; they are what the Python language reference calls identifiers, or names. People commonly refer to them as variables. They are not strings, or string literals.

Line 41 in the OP error trace is an assignment statement, through which the values returned by the linear_activation_backward() function are bound to a target list comprised of the three _temp identifiers. Line 42 of the OP error trace is also an assignment statement, in which an attempt is made to bind the value of dA_prev, on the right hand side, to the grads dictionary entry on the left hand side. However, no identifier dA_prev exists in this context or scope, because the return value of the linear backward function was assigned to the identifier dA_prev_temp, not dA_prev. Since the name dA_prev does not exist when it is evaluated on the right hand side of the line 42 assignment statement, the Python built-in NameError exception is raised. Cheers.
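To make this concrete, here is a minimal, hypothetical sketch (the names mirror the OP’s identifiers, but the function and values are stand-ins, not the course solution) that reproduces the same NameError:

```python
def linear_activation_backward():
    # stand-in for the course function: returns three gradient values
    return 0.1, 0.2, 0.3

# "line 41" pattern: the first returned value is bound to dA_prev_temp
dA_prev_temp, dW_temp, db_temp = linear_activation_backward()

grads = {}
try:
    # "line 42" pattern: dA_prev was never created, only dA_prev_temp was
    grads["dA1"] = dA_prev
except NameError as err:
    print(err)  # name 'dA_prev' is not defined

# the fix: use the identifier that the earlier assignment actually created
grads["dA1"] = dA_prev_temp
```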

https://docs.python.org/3/library/exceptions.html?highlight=nameerror#NameError

https://docs.python.org/3/reference/simple_stmts.html?highlight=assignment#expression-statements

https://docs.python.org/3/reference/lexical_analysis.html#identifiers

Hello ai_curious,

Thanks for mentioning that :slight_smile:

However, I was pointing at the highlighted one below:


But anyway, thank you! Highly appreciated!

Here is another tidbit from the Python doc that might help with future debugging…

6.16. Evaluation order

(6. Expressions — Python 3.10.8 documentation)

Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.

my emphasis added

This means that if the error happens in an assignment statement, as this one did (line 42), start debugging on the right-hand side of the = sign. There may be errors elsewhere, such as an index not found on the grads dictionary on the left-hand side, but they won’t be detected or reported until the right-hand-side exception is fixed. Hope this helps.
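A quick, hypothetical demonstration of that evaluation-order rule (the helper names here are invented for illustration):

```python
order = []

def make_key():
    # evaluated as part of the assignment target (left-hand side)
    order.append("LHS")
    return "dA0"

def make_value():
    # evaluated as the assigned expression (right-hand side)
    order.append("RHS")
    return 42

grads = {}
grads[make_key()] = make_value()

# Python evaluated the right-hand side before the subscript target
print(order)  # ['RHS', 'LHS']
```

So a NameError raised while evaluating the right-hand side fires before anything on the left-hand side is even touched.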

It already exists in the earlier blocks

{moderator edit - solution code removed}

str(l + 2) is for the activation function in backward propagation

Those are local variables in the scope of various other functions, so that has nothing to do with what happens in the local scope of L_model_backward. If you are not familiar with how python defines the concept of the “scope” of a variable, it would be a good idea to spend some time with relevant python tutorials. Try googling “python variable scope” to find some places to start.
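For instance, a name that is local to one function simply does not exist inside another; this is a hypothetical sketch (the function names are made up), not the assignment code:

```python
def backward_step():
    dA_prev = [0.5]    # local to backward_step only
    return dA_prev

def model_backward():
    backward_step()    # the call runs, but its locals vanish on return
    return dA_prev     # NameError: the name only existed inside backward_step

try:
    model_backward()
except NameError as err:
    print(err)  # name 'dA_prev' is not defined
```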

But even without understanding the idea of scope, just look at the code in L_model_backward that you wrote: in the previous line, you assign the first return value of linear_activation_backward to the variable name dA_prev_temp. But then you reference dA_prev in the following line. That is the bug: it should be dA_prev_temp that you use on the RHS of that assignment statement there. This has been explained several times in the conversation up to this point.

But note that the LHS of that assignment statement is also wrong: the dA value is for the previous layer relative to the dW and db values. They actually wrote the correct logic for you in the comments supplied in the template code …

Sorry, it was a silly mistake, I didn’t see the instances. But now I am getting this error

{moderator edit - solution code removed}

So it looks like the contents of the grads dictionary that you return from L_model_backward are not correct. This is always the first rule of debugging, right? Start from the error message. What does it mean? Try printing the keys of your dictionary like this:

print(grads.keys())

Ok, now that you see what is there, you need to check your logic in L_model_backward to understand why the contents are incorrect. The gradient dA0 is not actually useful for anything, but it should be the output of the back prop step for layer 1. You need to do that because you need dW1 and db1, but you also get dA0 as a side effect. Most likely this means your “loop” logic is wrong and it is stopping too soon. But you’ll have a better idea once you see the keys that are printed by the statement above.
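As a hypothetical illustration of what that inspection might reveal (the values are stand-ins, not real gradients, and this is deliberately a buggy loop, not the course solution): a loop that stops one layer too soon leaves the layer-1 entries out of the dictionary.

```python
num_layers = 2
grads = {}

# buggy loop: range(num_layers - 1, 0, -1) never runs the l = 0 iteration,
# so the layer-1 gradients dA0, dW1, db1 are never stored
for l in range(num_layers - 1, 0, -1):
    grads["dA" + str(l)] = 0.0
    grads["dW" + str(l + 1)] = 0.0
    grads["db" + str(l + 1)] = 0.0

print(sorted(grads.keys()))  # ['dA1', 'dW2', 'db2'] -- no dW1 or db1

# a later lookup such as grads["dW1"] would then raise KeyError: 'dW1'
```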

Debugging is part of the job of being a programmer, right? If you don’t already have the skills, you need to develop them. It’s not supposed to be our job to do your thinking for you, but what I am trying to do here is help by showing you the methods you need to use to make progress.

BTW just wanted to check that you saw and understood this part of my previous comment. If you did not fix that bug, that would exactly cause the error you’re seeing.