I have implemented the required code for “two_layer_model” in Exercise 1. When the cell below the function is called, I get the following error:
Error: Wrong output for variable W1.
Error: Wrong output for variable b1.
Error: Wrong output for variable W2.
Error: Wrong output for variable b2.
3 Tests passed
1 Tests failed
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-48-f9ec5304d38d> in <module>
3 print("Cost after first iteration: " + str(costs[0]))
4
----> 5 two_layer_model_test(two_layer_model)
~/work/release/W4A2/public_tests.py in two_layer_model_test(target)
75 ]
76
---> 77 multiple_test(test_cases, target)
78
79
~/work/release/W4A2/test_utils.py in multiple_test(test_cases, target)
140 print('\033[92m', success," Tests passed")
141 print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
143
AssertionError: Not all tests were passed for two_layer_model. Check your equations and avoid using global variables inside the function.
However, the costs per iteration are exactly what they are supposed to be:
Cost after iteration 1: 0.6926114346158595
Cost after first iteration: 0.693049735659989
Cost after iteration 1: 0.6915746967050506
Cost after iteration 1: 0.6915746967050506
Cost after iteration 1: 0.6915746967050506
Cost after iteration 2: 0.6899956541494738
Furthermore, when this model is run on the training data and used to predict on the images, I get exactly the same accuracy and cost as in the expected output.
Cost after iteration 0: 0.693049735659989
Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912677
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605747
Cost after iteration 500: 0.5158304772764729
Cost after iteration 600: 0.4754901313943325
Cost after iteration 700: 0.43391631512257495
Cost after iteration 800: 0.4007977536203886
Cost after iteration 900: 0.3580705011323798
Cost after iteration 1000: 0.3394281538366413
Cost after iteration 1100: 0.30527536361962654
Cost after iteration 1200: 0.2749137728213015
Cost after iteration 1300: 0.2468176821061484
Cost after iteration 1400: 0.19850735037466102
Cost after iteration 1500: 0.17448318112556638
Cost after iteration 1600: 0.1708076297809692
Cost after iteration 1700: 0.11306524562164715
Cost after iteration 1800: 0.09629426845937156
Cost after iteration 1900: 0.0834261795972687
Cost after iteration 2000: 0.07439078704319085
Cost after iteration 2100: 0.06630748132267933
Cost after iteration 2200: 0.05919329501038172
Cost after iteration 2300: 0.053361403485605606
Cost after iteration 2400: 0.04855478562877019
Cost after iteration 2499: 0.04421498215868956
If the model is working fine on the training and test data and giving the same results as listed under “Expected Output”, why am I getting the error when two_layer_model_test(two_layer_model) is executed?
Hi @ahsanjc and welcome to the Specialization. I suspect that your function might be returning an unexpected datatype for the cost. Note the AssertionError at the beginning of the traceback. In particular, look at the first element in the list of costs, i.e. costs[0] (which is being converted to a Python string for reporting). The expected datatype for costs[0] is np.ndarray.
You can check this by first highlighting the test cell that produces the error and then selecting Insert Cell Above from the Insert menu. Copy the execution statement of the function into the new cell and run it there. Then add a line to the cell to check the datatype: type(costs[0]). It should return numpy.ndarray.
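A minimal sketch of that check (the stand-in function below is hypothetical and only illustrates the expected shape and datatype of the return values; in the notebook you would call your actual two_layer_model):

```python
import numpy as np

# Hypothetical stand-in for two_layer_model: returns a parameters dict and a
# list of per-iteration costs, each kept as a 0-d NumPy array (the datatype
# the test expects), rather than converted to a plain Python float.
def two_layer_model_stub(X, Y, layers_dims, num_iterations=1):
    parameters = {"W1": np.zeros((7, 12288))}
    costs = [np.squeeze(np.array(0.693049735659989))]  # stays an np.ndarray
    return parameters, costs

parameters, costs = two_layer_model_stub(None, None, (12288, 7, 1))
print(type(costs[0]))  # <class 'numpy.ndarray'>
```

If this prints a plain float instead, the cost is being converted somewhere (e.g. via float(cost)) before it is appended to the list.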
Sometimes the advice of the grader can be a bit vague, but in this case its guidance is mostly sound. That’s because the exercise largely consists of applying functions you are already familiar with from the previous assignment inside the new two_layer_model function. The arguments to those functions must not reference global variables, but only values derived from the arguments of two_layer_model itself.
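As a generic illustration of the global-variable pitfall (the function names and shapes here are hypothetical, not the assignment’s actual code):

```python
import numpy as np

train_x = np.ones((3, 5))  # notebook-level (global) data

# Buggy pattern: ignores its own argument and reads the global train_x, so
# hidden test cases that pass different data still operate on train_x.
def forward_buggy(X, W):
    return W @ train_x

# Correct pattern: uses only its arguments.
def forward_ok(X, W):
    return W @ X

W = np.ones((2, 3))
test_X = np.zeros((3, 5))  # a test harness supplies different data
print(np.allclose(forward_buggy(test_X, W), forward_ok(test_X, W)))  # False
```

The buggy version happens to produce the right numbers in the notebook (where the global equals the argument), which is exactly why it passes your manual runs but fails the hidden tests.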
Hi, I am getting the same error, and my type(costs[0]) is also numpy.ndarray. The grader still gave me:
Error: Datatype mismatch.
Error: Wrong shape
Error: Wrong output
Error: Wrong output
I have run and validated the assignment “Programming Assignment: Deep Neural Network - Application — Week 4” many times, but during grading the output is 0/100.
Please help!
[ValidateApp | INFO] Validating ‘/home/jovyan/work/submitted/courseraLearner/W4A2/Deep Neural Network - Application.ipynb’
[ValidateApp | INFO] Executing notebook with kernel: python3
[ValidateApp | ERROR] Timeout waiting for execute reply (30s).
[ValidateApp | ERROR] Interrupting kernel
[ValidateApp | ERROR] Timeout waiting for execute reply (30s).
[ValidateApp | ERROR] Interrupting kernel
Tests failed on 2 cell(s)! These tests could be hidden. Please check your submission.
==========================================================================================
The following cell failed:
parameters, costs = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y)…
print("Cost after first iteration: " + str(costs[0]))
two_layer_model_test(two_layer_model)
The error was:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
----> 1 parameters, costs = two_layer_model(train_x, train_y, layers_dims = (n_x, n…
      2
      3 print("Cost after first iteration: " + str(costs[0]))
      4
      5 two_layer_model_test(two_layer_model)
ValueError: too many values to unpack (expected 2)
==========================================================================================
The following cell failed:
parameters, costs = L_layer_model(train_x, train_y, layers_dims, num_iterations = 1…
print("Cost after first iteration: " + str(costs[0]))
L_layer_model_test(L_layer_model)
The error was:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
----> 1 parameters, costs = L_layer_model(train_x, train_y, layers_dims, num_iterat…
      2
      3 print("Cost after first iteration: " + str(costs[0]))
      4
      5 L_layer_model_test(L_layer_model)
ValueError: too many values to unpack (expected 2)
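That ValueError means the function returns a different number of values than the two the calling cell unpacks. A minimal reproduction with a hypothetical function returning three values (as older versions of the assignment code did):

```python
# The grading cell unpacks exactly two return values:
#   parameters, costs = two_layer_model(...)
# A function that returns three values breaks that unpacking.
def model_old_style():
    return {"W1": None}, [0.69], "extra"  # three values instead of two

try:
    parameters, costs = model_old_style()
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```

The fix is to make the function’s return statement match the current notebook’s template: return exactly parameters and costs.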
It looks like you have probably copied old solutions from the Internet. If so, that is (a) cheating and (b) doesn’t work, because the course assignments were significantly rewritten in April 2021.
I think you should do as Nobu suggests and start with a clean notebook and then implement your own solutions based on the instructions in the notebook. There is a topic on the DLS FAQ Thread about how to get a clean copy of any of the notebooks.
I also get an error in Week 4, Assignment 2, Exercise 1 (Exercise 2 in this notebook passed the grader test). I am attaching the part of my notebook with the code and error messages. In my case, type(costs[0]) is numpy.ndarray. Could you please help me figure out where the problem is?
@Aygul_Ishemgulova note that you have used initialize_parameters_deep in the two_layer_model case. That may not be incorrect from a logical standpoint, but it will give you a different answer, because a different “init” routine is used for the “deep” case. Please try following the instructions and using initialize_parameters instead and see if that fixes the problem.
To call linear_activation_forward(), you need to specify an activation function. Remember that its interface expects a string, but you passed a variable, relu. Please make it a string, “relu”.
Inside linear_activation_forward(), no branch matches that value, so the activation is never applied and A is never assigned.
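The distinction looks like this (a sketch with a simplified stand-in for linear_activation_forward; the real function comes from the previous assignment and returns A plus a cache):

```python
import numpy as np

# Hypothetical, simplified stand-in: dispatches on the activation *string*.
def linear_activation_forward_stub(A_prev, W, b, activation):
    Z = W @ A_prev + b
    if activation == "relu":
        A = np.maximum(0, Z)
    elif activation == "sigmoid":
        A = 1 / (1 + np.exp(-Z))
    else:
        raise ValueError("unknown activation: " + repr(activation))
    return A, (A_prev, W, b, Z)

A_prev = np.array([[1.0], [-2.0]])
W, b = np.eye(2), np.zeros((2, 1))

# Correct: pass the string "relu".
A, cache = linear_activation_forward_stub(A_prev, W, b, "relu")
print(A.ravel())  # [1. 0.] — relu clips the negative entry to 0

# Wrong: passing the bare name relu raises NameError ('relu' is not defined)
# before the function is even called.
```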
Assalamo Alaikum, hope you are doing well. In Exercise 1 of Assignment 2, Week 4, are we supposed to use linear_activation_forward for A1, cache1 and A2, cache2, and linear_activation_backward for dA1, dW2, db2 and dA0, dW1, db1? Kindly help me in this regard.
The order of the activation functions for backward propagation should be the reverse of the order used for forward propagation. Yours are in the same order.
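In other words, if forward propagation runs relu for layer 1 and then sigmoid for layer 2, backward propagation must start with sigmoid (layer 2) and finish with relu (layer 1). A small ordering sketch:

```python
# Activations applied layer by layer during forward propagation.
forward_order = ["relu", "sigmoid"]      # layer 1 hidden, layer 2 output

# Backward propagation walks the layers in reverse, so the activation
# strings passed to linear_activation_backward must be reversed too.
backward_order = list(reversed(forward_order))
print(backward_order)  # ['sigmoid', 'relu']
```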