Course 1, Week 4, Assignment 1, Exercise 5

Let me first paste the error:

[screenshot of the error traceback]
It’s telling me that the W argument in my linear_activation_forward function (written earlier in the assignment) is not defined. This (as well as b) should come from the initialize_parameters_deep function written earlier. But shouldn’t it be passed in by the pre-installed test code shown in the image?

Hello Nicklaus Millican,

Did you miss running any of the previous cells of the notebook? Please check.

Please take a more careful look at the logic in L_model_forward. The variables W and b need to be local variables in the scope of L_model_forward, so how is that going to happen? You have to write the code to extract the appropriate values from the parameters dictionary, right? And it matters what the current layer is, so your logic needs to take that into account. The point is that you have a loop that goes through all the layers, starting with the input, and passes the results forward.

When you call a function, you are passing it specific parameter values. Of course the function has to be defined earlier, but that only defines what the function will do with the values you pass to it. Defining a function is completely different from calling a function, right? So the question is: what do you pass when you call it?
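
To make that concrete, here is a minimal sketch (not the full assignment solution), assuming the “W1”, “b1”, … key convention that initialize_parameters_deep uses:

```python
# Sketch only: extract the current layer's parameters inside the loop.
# Assumes keys "W1", "b1", ... as created by initialize_parameters_deep.
for l in range(1, L):               # hidden layers 1 .. L-1
    W = parameters["W" + str(l)]    # weight matrix for layer l
    b = parameters["b" + str(l)]    # bias vector for layer l
    # ... pass W and b as arguments when calling linear_activation_forward
```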


All previous cells ran

Yes, that makes sense. As I understand the loop, A and cache are computed iteratively for each layer; to compute layers > 1, the parameters W and b from the previous layer need to be accessed. I assume they come from the most recently appended cache in the list caches, yes?

The parameters W and b are unique to each layer, right? You don’t change them during forward propagation: you only use the existing values that were passed to this function as an argument. They are passed in using a Python dictionary, the variable called “parameters”. You simply extract the appropriate ones for the current layer you are handling. The dictionary has strings as the lookup “keys”. You saw how to access the dictionary in the “initialize_parameters_deep” function that you built earlier.

During forward propagation, you don’t need to use the cache values: you only create the caches. They will actually be used during back propagation; that is what they are for, and you’ll see it later. For now, just pay attention to what is contained in them so that you’ll understand how to use them later.
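
As a sketch of that append-only pattern, assuming the assignment’s helper signature linear_activation_forward(A_prev, W, b, activation) returning (A, cache):

```python
caches = []
A = X                          # layer 0 activations are just the inputs
L = len(parameters) // 2       # one (W, b) pair per layer
for l in range(1, L):          # hidden layers use relu
    A_prev = A
    A, cache = linear_activation_forward(A_prev,
                                         parameters["W" + str(l)],
                                         parameters["b" + str(l)],
                                         activation="relu")
    caches.append(cache)       # caches are only written during forward prop;
                               # they are read later, in back propagation
```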

Ah, I see. I thought this was the iteration step, but that will come later.

I think I fixed this, but a new error surfaced. The AL output is wrong in both value and dimensions.

[screenshot of the failing test output]

The warning says I may be using a global variable inside the function, but I don’t think I am. I’m confused about why the AL output is 3x4.

That’s just a suggestion of the type of error to look for. I would say it’s more likely an error in how you handle the output layer. That is outside the “for” loop, right?

One thing that is a good idea in a case like this is to start by working out the “dimensional analysis”, so that you know what should be happening at each layer. Please have a look at this thread and I’ll bet that will shed some light …
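
As an illustration of that kind of dimensional analysis (the shapes below are made up for the example, not taken from the grader’s test case):

```python
import numpy as np

# Hypothetical 2-layer network: 5 input features, 4 examples,
# 3 hidden units, 1 output unit.
X  = np.random.randn(5, 4)                  # (n_x, m) = (5, 4)
W1 = np.random.randn(3, 5); b1 = np.zeros((3, 1))
W2 = np.random.randn(1, 3); b2 = np.zeros((1, 1))

A1 = np.maximum(0, W1 @ X + b1)             # relu hidden layer -> (3, 4)
AL = 1 / (1 + np.exp(-(W2 @ A1 + b2)))      # sigmoid output    -> (1, 4)
print(A1.shape, AL.shape)                   # (3, 4) (1, 4)
```

If AL comes out as (3, 4) instead of (1, 4) in a setup like this, the final sigmoid layer was probably never applied and the last hidden activation was returned instead.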

Hi Paul,
I appreciate your patience. I went through the thread you linked to (your walk-through of dimensional analysis was very helpful). I ultimately solved this by recognizing that the call that computes AL, cache differs from the one that computes A, cache: it takes A rather than A_prev as its input, and its index is capital L rather than lower-case “l”.
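
For anyone who hits the same bug: the fix described above amounts to something like the following (same assumed helper signature as before), placed after the for loop:

```python
# Output layer: note it takes the loop's final activation A (not A_prev)
# and uses index L (capital), with sigmoid instead of relu.
AL, cache = linear_activation_forward(A,
                                      parameters["W" + str(L)],
                                      parameters["b" + str(L)],
                                      activation="sigmoid")
caches.append(cache)
```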

Thank you!

Congrats! It’s great that you were able to find the issues based on the dimensional analysis. Onward! :nerd_face: