Week 4, assignment 2, two_layer_model


I used the given function, but it doesn't work and I don't know what I'm missing in the code.

This is the function signature:
linear_activation_forward(A_prev, W, b, activation)

You are calling it like this:
linear_activation_forward(W, b, A_prev, activation)
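Once the arguments match the signature, the two forward calls in two_layer_model look roughly like this (just a sketch; names like X, A1, W1 follow the usual notebook convention and may differ from your code):

A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")
A2, cache2 = linear_activation_forward(A1, W2, b2, activation="sigmoid")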


And once you fix the order of arguments that Balaji has pointed out, also note that you are referencing the global variable train_x instead of the formal parameters of the function. It is always a mistake to reference global variables. That happens to be the variable that is passed in one of the test cases, but not all of them, right?
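In other words, every computation inside two_layer_model should use its formal parameters. A sketch of the difference (the parameter names are the usual notebook ones, so treat them as assumptions):

# Wrong: reads the global notebook variable, so it only happens to work
# for the test case that actually passes train_x
A1, cache1 = linear_activation_forward(train_x, W1, b1, activation="relu")

# Right: uses X, the formal parameter of two_layer_model(X, Y, layers_dims, ...)
A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")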


Thanks, I followed your advice, but now the parameters' shapes don't match.



Can you help me again?

What are the dimensions of the inputs? What should the W1, b1 and W2 dimensions be? Why are they different? Note that the only place you change the W or b values is in the update_parameters step, but the inputs to that (the gradients) are the result of back propagation. So either your arguments to back propagation or update parameters must be incorrect.
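For reference, with layer sizes (n_x, n_h, n_y) the convention from the lectures is W1: (n_h, n_x), b1: (n_h, 1), W2: (n_y, n_h), b2: (n_y, 1). A quick sanity check you could drop in right after initialization (just a sketch, assuming the weights have been unpacked from the parameters dictionary):

n_x, n_h, n_y = layers_dims   # (12288, 7, 1) for the real data, different for the other tests
W1, b1 = parameters["W1"], parameters["b1"]
W2, b2 = parameters["W2"], parameters["b2"]
assert W1.shape == (n_h, n_x) and b1.shape == (n_h, 1)
assert W2.shape == (n_y, n_h) and b2.shape == (n_y, 1)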

Note that the subroutines themselves are correct, so it’s how you are calling them.

I added print statements at the beginning of two_layer_model and then at the end to print the shape of W1. Here’s what I get when I run that test cell:

After initialize parameters W1.shape (7, 12288)
Cost after iteration 1: 0.6926114346158595
Before return W1.shape (7, 12288)
Cost after first iteration: 0.693049735659989
After initialize parameters W1.shape (4, 10)
Cost after iteration 1: 0.6915746967050506
Before return W1.shape (4, 10)
After initialize parameters W1.shape (4, 10)
Cost after iteration 1: 0.6915746967050506
Before return W1.shape (4, 10)
After initialize parameters W1.shape (4, 10)
Cost after iteration 1: 0.6915746967050506
Before return W1.shape (4, 10)
After initialize parameters W1.shape (4, 10)
Cost after iteration 2: 0.6524135179683452
Before return W1.shape (4, 10)
 All tests passed.

So a couple of important points to note there:

  1. The shapes at the beginning and the end are the same.
  2. Different test cases have different shapes, so it’s important not to make any hard-coded assumptions.
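For reference, the print statements I mentioned were roughly these (a sketch; the exact placement inside the function is only illustrative):

# right after the parameters are initialized
print("After initialize parameters W1.shape", parameters["W1"].shape)

# ... the training loop runs here ...

# just before the return statement
print("Before return W1.shape", parameters["W1"].shape)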

Sorry, but I have checked the variables many times and they seem correct. I also don't understand how the shapes become (4, 10) if we have n_x=12288 and n_h=7.

As I pointed out in my earlier response, there are multiple test cases and not all of them use the "real" image data as the inputs. We are trying to write "general" code here, which will work with whatever input dimensions it is given. It is a mistake to "hard-code" the dimensions.
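Concretely, the layer sizes should come from the layers_dims argument, never from literal numbers. A sketch (argument and function names follow the usual notebook convention, so treat them as assumptions):

n_x, n_h, n_y = layers_dims   # whatever sizes the current test case passes in
parameters = initialize_parameters(n_x, n_h, n_y)
# not: parameters = initialize_parameters(12288, 7, 1), which breaks the other test cases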

I suggest you click “File → Open” and then open the file public_tests.py and examine the test code. You can see the name of the function in your earlier exception trace: two_layer_model_test.


(Moderator edit: Solution code removed)

This is my code. I've spent the last three days checking and rewriting it, but I can't find my error.
I've also examined public_tests.py, where I can see the parameters' shapes, but I don't understand why the parameters' shapes change if the neural network's nodes are the same.


Can you explain my error to me? I can't see it, and I want to continue with the 2nd course ASAP.

Hi, @Simone_Aquilino. I have removed your solution code from your post. You are not permitted to post your solution code according to the Coursera Honor Code, which you signed off on and therefore have agreed to abide by.

FWIW, some advice. “ASAP” is part of the problem here. Being anxious to burn through the course material will distract you from your task at hand. That task is understanding. Advancing to Course 2 without that will only amplify your frustrations. Grab a cool drink, take some deep breaths, and carefully consider all of the explanations offered by @paulinpaloalto.

This can be fun! :thinking: :nerd_face: :smiley:

Hi @Simone_Aquilino, well, I agree with what @kenb sir has mentioned. Still, I would like you to check the activation argument you are passing to linear_activation_backward. You have the two activations in the reverse order. Thanks!

If @Rashmi’s conjecture is correct, then hopefully it’s a simple typo due to the “ASAP” procedure. In backward propagation, you are reversing the order of computations done for forward propagation. In other words, you are starting with the final layer and proceeding backward to the first layer. You need the output of the network to initialize your gradient descent calculations.

So what is the final layer activation? The sigmoid function, right? Forward/backward propagation is the insight that made deep learning possible. So it's worthy of some deep (Hah!) reflection. Enjoy! :nerd_face:
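In code form, the backward calls in a two layer net look roughly like this (a sketch; names such as dA2, cache1, cache2 follow the usual notebook convention and may differ from yours):

# output layer first: its forward activation was sigmoid
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation="sigmoid")

# then the hidden layer: its forward activation was relu
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation="relu")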