Week 4 assignment: deep neural network application

{moderator edit - solution code removed}

Would you please tell me what the error in this code is?
Thank you.

TypeError Traceback (most recent call last)
in
----> 1 parameters, costs = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2, print_cost=False)
2
3 print("Cost after first iteration: " + str(costs[0]))
4
5 two_layer_model_test(two_layer_model)

TypeError: iteration over a 0-d array

How can I correct this, sir?

@Doreen: The error message is telling you that the costs value returned by your code is a 0-d array, which cannot be iterated over. The logic you show looks correct to me, but it’s a bit hard to tell about the indentation because of the way you formatted it; indentation is part of the syntax of Python. One possibility I can think of is that you “outdented” the if statements that print and append the cost value, so that they are no longer part of the “for” loop over the iterations.
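To illustrate that structure, here is a minimal toy sketch (not the assignment solution; `toy_model` and its dummy cost formula are invented for this example) of the pattern the test expects: the cost is appended inside the for loop, and a plain Python list is returned so that `costs[0]` works:

```python
def toy_model(num_iterations=3, print_cost=False):
    costs = []                        # plain Python list of recorded costs
    for i in range(num_iterations):
        cost = 1.0 / (i + 1)          # stand-in for the real cost computation
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
        if i % 100 == 0:
            costs.append(cost)        # must stay INSIDE the loop body
    return costs                      # iterable, so costs[0] is valid

costs = toy_model(num_iterations=2)
```

If those two `if` blocks are outdented past the loop, at most one cost is recorded after the loop finishes, and returning a bare scalar (or a 0-d NumPy array wrapping it) produces exactly the “iteration over a 0-d array” error.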

@Doreen: Actually I tried the experiment of making the mistake I described and I get a slightly different error message than the one you got. Can you please “copy/paste” the full output of running the failing test so that we can examine it in more detail?

Yes sir,

After I removed the “return costs” statement, it gave this follow-up error.

This is with respect to the 2-layer model.

It is a mistake not to return the costs value. That is what that error is telling you. Note that the test cell here is not modifiable, so you have to return the costs.

Even after returning costs, an assertion error is raised.

Hello @ASHISH_VASHIST, the message is clear. Please check your equations carefully. You are modifying the global variables inside your equations.
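As a hedged illustration of that pitfall (the names `bad_update` and `good_update` are invented here, not the assignment’s API): NumPy arrays are passed by reference, so an in-place update (`-=`) inside your function mutates the caller’s parameters dictionary, and the grader’s reference values no longer match:

```python
import numpy as np

def bad_update(parameters, grads, lr):
    parameters["W1"] -= lr * grads["dW1"]   # in-place: mutates the caller's array
    return parameters

def good_update(parameters, grads, lr):
    # Work on fresh arrays so the caller's (global) values stay untouched.
    params = {k: v.copy() for k, v in parameters.items()}
    params["W1"] = params["W1"] - lr * grads["dW1"]
    return params

p = {"W1": np.ones((2, 2))}
g = {"dW1": np.ones((2, 2))}
updated = good_update(p, g, 0.1)
# p["W1"] is still all ones; with bad_update it would now be all 0.9.
```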

Thanks!

Yes, as Rashmi says, there must be something wrong with your logic. The global variables thing is just one possible mistake. But you need to check all your logic carefully. Also make sure you are not “hard-coding” any of the parameters like learning rate when you call update_parameters from your model function.
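A minimal sketch of the hard-coding point (these function names are simplified stand-ins, not the course’s exact signatures): the caller’s learning rate must be threaded through to the update step, never replaced by a literal:

```python
def update_parameters(w, grad, learning_rate):
    # Simplified single-weight gradient-descent step.
    return w - learning_rate * grad

def model(w, grad, learning_rate=0.0075):
    # Correct: forward whatever value the caller passed in.
    return update_parameters(w, grad, learning_rate)
    # Wrong: update_parameters(w, grad, 0.0075) would silently ignore
    # the caller's choice, so the test's expected values never match.

result = model(1.0, 0.5, learning_rate=0.1)   # uses 0.1, not the default
```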

Also note that your screen shot does not really show us the full output of the test. There should be more specific information about which test failed if you look earlier on that trace.


Hello Everyone!

I have a dimension mismatch going on in the L-layer part but could not debug it. I also wonder why all my previous test cases passed, yet now I get an error when calling one of those earlier functions. I added a couple of print statements to check the shapes of W, b and A.
I would appreciate it if anyone can help.
Please see the error below.
ValueError Traceback (most recent call last)
in
1 t_X, t_parameters = L_model_forward_test_case_2hidden()
----> 2 t_AL, t_caches = L_model_forward(t_X, t_parameters)
3
4 print("AL = " + str(t_AL))
5

in L_model_forward(X, parameters)
41 parameters["W"+str(L)],
42 parameters["b"+str(L)],
----> 43 "sigmoid")
44 caches.append(cache)
45

in linear_activation_forward(A_prev, W, b, activation)
20 #(≈ 2 lines of code)
21 # YOUR CODE STARTS HERE
----> 22 Z, linear_cache = linear_forward(A_prev, W, b)
23 A, activation_cache = sigmoid(Z)
24 # YOUR CODE ENDS HERE

in linear_forward(A, W, b)
20 print("A=", A.shape)
21 print("b=", b.shape)
----> 22 Z = np.dot(W, A) + b
23
24 # YOUR CODE ENDS HERE

<__array_function__ internals> in dot(*args, **kwargs)

ValueError: shapes (1,3) and (4,4) not aligned: 3 (dim 1) != 4 (dim 0)

It looks like your logic for handling the output layer after you fall out of the “for” loop over the hidden layers is wrong. You are using an incorrect A value as the input to that step.

Here’s a thread which shows how to do the “dimensional analysis” for this particular test case, so that you can recognize what should be expected at each layer.
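Here is a sketch of that dimensional analysis using the shapes implied by the traceback above (the numeric values are random placeholders; only the shapes matter). Mistakenly feeding A1 instead of A2 into the output layer reproduces the exact “(1,3) and (4,4) not aligned” error:

```python
import numpy as np

np.random.seed(1)
X = np.random.randn(5, 4)                         # 4 examples, 5 input features
W1, b1 = np.random.randn(4, 5), np.zeros((4, 1))
W2, b2 = np.random.randn(3, 4), np.zeros((3, 1))
W3, b3 = np.random.randn(1, 3), np.zeros((1, 1))

A1 = np.maximum(0, W1 @ X + b1)    # relu hidden layer 1 -> shape (4, 4)
A2 = np.maximum(0, W2 @ A1 + b2)   # relu hidden layer 2 -> shape (3, 4)
Z3 = W3 @ A2 + b3                  # output layer: (1,3) @ (3,4) -> shape (1, 4)
# Using A1 here instead, i.e. W3 @ A1, attempts (1,3) @ (4,4) and raises
# ValueError: shapes (1,3) and (4,4) not aligned.
```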


Thank you for your help.
Yes, it was an issue with the for-loop indentation: my Lth-layer code was inside the for loop, so the variables were being overwritten on each iteration.

Thanks Paul

Thanks infomasion