Not all tests passed in deep learning NN function

Hi

I am running the code for "Building_your_Deep_Neural_Network_Step_by_Step" in the "Neural Networks and Deep Learning" course, and I keep getting the error that not all the tests are passed, as copied below:

With sigmoid: dA_prev = [[ 0.01755433 -0.14094251]
[ 0.01508293 -0.12109981]
[-0.00915014 0.07346581]]
With sigmoid: dW = [[ 0.12698275 -0.04119062 -0.08712197]]
With sigmoid: db = [[0.05831463]]
With relu: dA_prev = [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]]
With relu: dW = [[ 0.44513824 0.37371418 -0.10478989]]
With relu: db = [[-0.20837892]]
Error: Wrong output with sigmoid activation for variable 0.
Error: Wrong output with sigmoid activation for variable 1.
Error: Wrong output with sigmoid activation for variable 2.
5 Tests passed
1 Tests failed

AssertionError Traceback (most recent call last)
in
     11 print("With relu: db = " + str(t_db))
     12 
---> 13 linear_activation_backward_test(linear_activation_backward)

~/work/release/W4A1/public_tests.py in linear_activation_backward_test(target)
    467     ]
    468 
--> 469     multiple_test(test_cases, target)
    470 
    471 

~/work/release/W4A1/test_utils.py in multiple_test(test_cases, target)
    140         print('\033[92m', success," Tests passed")
    141         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142         raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
143

AssertionError: Not all tests were passed for linear_activation_backward. Check your equations and avoid using global variables inside the function.

Has anyone had the same experience and knows how to fix it? The outputs look the same as expected, though.

Thanks
Faeze

Comparing your results to my results and the expected results, notice that the results for the “relu” case match, but it is the values in the sigmoid case that do not match. What is different between the implementations in the two cases?
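As a hint on where to look, here is a rough sketch of the usual shape of linear_activation_backward in this assignment. This is only a sketch and assumes the helper functions the notebook provides (sigmoid_backward, relu_backward, linear_backward); the key point is that each branch must use its own activation's backward helper.

def linear_activation_backward(dA, cache, activation):
    # The cache stored during forward propagation splits into the linear part
    # and the activation part.
    linear_cache, activation_cache = cache

    if activation == "relu":
        # dZ must come from the relu derivative in this branch...
        dZ = relu_backward(dA, activation_cache)
    elif activation == "sigmoid":
        # ...and from the sigmoid derivative in this branch. Mixing the two up,
        # or reusing a value computed outside the function, gives exactly this
        # symptom: relu results correct, sigmoid results wrong.
        dZ = sigmoid_backward(dA, activation_cache)

    # The linear part is identical for both activations.
    dA_prev, dW, db = linear_backward(dZ, linear_cache)
    return dA_prev, dW, db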

Here are the screenshots of the exercises where I got the error. It looks like the outputs are the same as expected, just with a slight difference. I have no idea about the reason. Can you please help me fix it?

Thanks
Faeze

I added a few print statements to my L_model_forward code and here’s what I see:

Inner loop l = 1, A.shape (4, 4)
Inner loop l = 2, A.shape (3, 4)
l = 3
A3 = [[0.03921668 0.70498921 0.19734387 0.04728177]]
A3.shape = (1, 4)
AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]
Inner loop l = 1, A.shape (4, 4)
Inner loop l = 2, A.shape (3, 4)
l = 3
A3 = [[0.03921668 0.70498921 0.19734387 0.04728177]]
A3.shape = (1, 4)
Inner loop l = 1, A.shape (4, 4)
Inner loop l = 2, A.shape (3, 4)
l = 3
A3 = [[0.03921668 0.70498921 0.19734387 0.04728177]]
A3.shape = (1, 4)
Inner loop l = 1, A.shape (4, 4)
Inner loop l = 2, A.shape (3, 4)
l = 3
A3 = [[0.03921668 0.70498921 0.19734387 0.04728177]]
A3.shape = (1, 4)
 All tests passed.

Your AL value looks different than I would expect: it appears to be a list of two arrays. The first element of the list is the correct answer for AL, but then the question is where that second array came from. Note that AL is just the first return value from the call to linear_activation_forward for the output layer. It should be a plain numpy array, not a list of arrays. There are two return values from that function, right? So you have to assign them to separate variables.
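To make that concrete, here is a sketch of the output-layer call in L_model_forward, assuming the standard layout of this notebook (parameters stored as 'W1', 'b1', ..., and linear_activation_forward returning the pair (A, cache)):

# After the hidden-layer loop, A holds the activation of layer L-1.
# linear_activation_forward returns two values; unpack them separately so AL
# ends up as a plain numpy array rather than a list containing the cache too.
AL, cache = linear_activation_forward(A,
                                      parameters['W' + str(L)],
                                      parameters['b' + str(L)],
                                      activation="sigmoid")
caches.append(cache)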

For the linear_activation_backward case, notice that you have the same syndrome that you had in linear_activation_forward: your values are correct for the “relu” case and incorrect for the “sigmoid” case. I assume you were able to fix the issue with linear_activation_forward. Does the nature of the error you made there shed any light on the “backward” case?

There is no point in talking about L_model_backward until you fix whatever the problem is with linear_activation_backward, right? The top level routine will call the lower level routine, so there’s no way the top level routine can pass the tests if the lower level routine is not correct.
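To see the dependency, here is a sketch of the usual first step of L_model_backward in this assignment (again only a sketch, assuming the notebook's standard grads/caches layout): the output layer's gradients come straight from a call to linear_activation_backward with the "sigmoid" activation, so a bug there contaminates everything downstream.

import numpy as np

# Derivative of the cross-entropy cost with respect to AL.
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

# Output layer: any error in the sigmoid branch of linear_activation_backward
# shows up directly in these three gradients and then propagates backwards
# through every earlier layer.
current_cache = caches[L - 1]
grads["dA" + str(L - 1)], grads["dW" + str(L)], grads["db" + str(L)] = \
    linear_activation_backward(dAL, current_cache, activation="sigmoid")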


Hi

Thank you for checking the errors and for your response. I agree with you about what is happening with "linear_activation_backward". However, I am confused about how and why the errors arise, and I have no clue how to fix them. Can you please check the code for these two exercises?

Thanks
Faeze

Sure, I can help in that way, but I can’t actually see your code directly. I will send you a DM (private message) about how to proceed with that.