DLS Course 1, Week 3, Exercise 8 - assertion error

When I run the exercise 8 cell, I get the same output as the stated expected output, but for some reason it raises an assertion error saying "Wrong values for W1".


AssertionError                            Traceback (most recent call last)
in
      7 print("b2 = " + str(parameters["b2"]))
      8
----> 9 nn_model_test(nn_model)

~/work/release/W3A1/public_tests.py in nn_model_test(target)
    273     assert output["b2"].shape == expected_output["b2"].shape, f"Wrong shape for b2."
    274
--> 275     assert np.allclose(output["W1"], expected_output["W1"]), "Wrong values for W1"
    276     assert np.allclose(output["b1"], expected_output["b1"]), "Wrong values for b1"
    277     assert np.allclose(output["W2"], expected_output["W2"]), "Wrong values for W2"

AssertionError: Wrong values for W1

When I asked a friend who is also doing this course, I found we had the same code, but then she reran her code (which had previously not raised an error) and for some reason it started raising an error for her too.

I'm rather confused, and wondering if something on the system changed to produce this error by mistake?

Hi, @Victoria. Posting your work (your code) violates our honor code. As you see, I have removed it. What you can do is post the “traceback” (the sequence of error messages) that the test cell produces. Just insert a snapshot of that into your post. It may not appear so, but it contains useful information about what is going on higher up the stack in your previous functions.

Assuming that they have passed all their tests, my guess is that you have "hard-coded" some of the arguments, either in nn_model(arguments go here...) itself or in one of the functions it calls. That means that you are referencing "global" variables in your nn_model() function instead of the parameters that are passed into the function, as given in the notebook. This also applies to the prior functions you built that are called inside nn_model().
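
To make that concrete, here is a hypothetical sketch of what hard-coding looks like. The names (X_assess, layer_sizes_*) are made up for illustration and are not the notebook's actual code:

import numpy as np

# A global left over from an earlier test cell
X_assess = np.random.randn(2, 3)

def layer_sizes_wrong(X, Y):
    # WRONG: ignores the X that was passed in and reads the global X_assess,
    # so the answer is only correct for that one particular array
    return X_assess.shape[0]

def layer_sizes_right(X, Y):
    # RIGHT: the size is derived from the argument, whatever the caller passes
    return X.shape[0]

X_new = np.random.randn(5, 3)
print(layer_sizes_wrong(X_new, None))   # 2  -- silently wrong
print(layer_sizes_right(X_new, None))   # 5  -- correct

Your own tests can still pass with code like the "wrong" version, because the test happens to pass in the same array that the global holds; the hidden test with different inputs is what fails.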

It would be great if you could post the traceback. Thanks!

Apologies; I did not realise posting a part of my code also violated the honor code. I have now added the traceback and will also go check if I hard-coded any arguments. Thanks for your response.

Thanks! :smiley: Let’s see what happens after your code-review for hard-coding. Have at it!

I just checked for hard-coding but I cannot seem to spot any (though I might just not be recognising it), and since the error showed up for my friend after she re-ran her code, when it hadn't appeared before, I am wondering if it is a bug of some sort. Is that unlikely?

Perhaps your testing functions are of a different vintage. For example, I am working with testCases_v2.py. Check with: File → Open … . You will see the testCases_v?.py file in the list. If that checks out with your study partner’s version, you may want to just try restarting the kernel: Kernel → Restart & Run All. :thinking:

@kenb I’m facing the same error too, and I did check for any hard-coded values and did not find any. The expected output and my current outputs match, but I get the same error: "Wrong values for W1".

I tried restarting and re-running the kernel, but it still says "Wrong values for W1". The W1 values from the output of my code:

W1 = [[-0.65848169  1.21866811]
 [-0.76204273  1.39377573]
 [ 0.5792005  -1.10397703]
 [ 0.76773391 -1.41477129]]

The expected W1 output:

W1 = [[-0.65848169  1.21866811]
 [-0.76204273  1.39377573]
 [ 0.5792005  -1.10397703]
 [ 0.76773391 -1.41477129]]

Sections of the testCases file that mention W1 values (I wasn’t sure which is relevant):
def forward_propagation_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(2, 3)
    b1 = np.random.randn(4, 1)
    b2 = np.array([[-1.3]])

    parameters = {'W1': np.array([[-0.00416758, -0.00056267],
                                  [-0.02136196,  0.01640271],
                                  [-0.01793436, -0.00841747],
                                  [ 0.00502881, -0.01245288]]),

def backward_propagation_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(2, 3)
    Y_assess = (np.random.randn(1, 3) > 0)
    parameters = {'W1': np.array([[-0.00416758, -0.00056267],
                                  [-0.02136196,  0.01640271],
                                  [-0.01793436, -0.00841747],
                                  [ 0.00502881, -0.01245288]]),

def update_parameters_test_case():
    parameters = {'W1': np.array([[-0.00615039,  0.0169021 ],
                                  [-0.02311792,  0.03137121],
                                  [-0.0169217 , -0.01752545],
                                  [ 0.00935436, -0.05018221]]),

def predict_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(2, 3)
    parameters = {'W1': np.array([[-0.00615039,  0.0169021 ],
                                  [-0.02311792,  0.03137121],
                                  [-0.0169217 , -0.01752545],
                                  [ 0.00935436, -0.05018221]]),

I also tried removing the deepcopy function for the W parameters but it did not help.

Hi, Victoria.

But your failure is on nn_model_test, so why would those other functions be relevant here? Here’s what the “expected” values look like in my public_tests.py file for nn_model:

def nn_model_test(target):
    np.random.seed(1)
    X = np.random.randn(2, 3)
    Y = (np.random.randn(1, 3) > 0)
    n_h = 4
    expected_output = {'W1': np.array([[ 0.56305445, -1.03925886],
                                   [ 0.7345426 , -1.36286875],
                                   [-0.72533346,  1.33753027],
                                   [ 0.74757629, -1.38274074]]), 
                       'b1': np.array([[-0.22240654],
                                   [-0.34662093],
                                   [ 0.33663708],
                                   [-0.35296113]]), 
                       'W2': np.array([[ 1.82196893,  3.09657075, -2.98193564,  3.19946508]]), 
                       'b2': np.array([[0.21344644]])}

Note that the course staff seems to have just published a new version of that file about 20 hours ago. Maybe the update did not happen for you? It might be worth deleting all the “dot py” files in that exercise and then doing “Help → Lab Help → Get Latest Version” to make sure everything is consistent. Note that procedure only replaces missing files, which is why we need to delete them first. If you leave your notebook in place, then it will not get replaced.
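
If you prefer to do the deletion from a notebook cell rather than the file browser, something along these lines works (a sketch only, assuming the folder layout shown in your traceback; adjust the path if your assignment lives elsewhere):

from pathlib import Path

# Remove the old helper .py files so "Get Latest Version" will replace them
folder = Path.home() / "work" / "release" / "W3A1"
for py_file in folder.glob("*.py"):
    print("removing", py_file.name)
    py_file.unlink()
# Then: Help -> Lab Help -> Get Latest Version to restore the current versions.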

@Victoria_Chan It looks like either you didn’t get the updated notebook or maybe you just renamed your previous version back to the “standard” name. That doesn’t work. They made some subtle changes that you miss when you do that. The most important one is that they commented out the setting of the random seed in the initialize_parameters function and moved that logic to the test block for that function. As a result of that change, you get different answers when you run the nn_model_test cell. But they forgot to change the “expected values” shown in the text. So the mere fact that your W1 values agree with the ones shown in the text proves that you have the old code.
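
To illustrate why that matters, here is a simplified sketch (not the actual notebook code; the seed values and the single returned matrix are just for illustration):

import numpy as np

def init_old(n_x, n_h):
    np.random.seed(2)                      # old notebook: seed reset inside the function
    return np.random.randn(n_h, n_x) * 0.01

def init_new(n_x, n_h):
    return np.random.randn(n_h, n_x) * 0.01   # new notebook: the caller sets the seed

np.random.seed(1)              # what nn_model_test does at the top
X = np.random.randn(2, 3)      # earlier draws consume part of the random sequence

print(init_old(2, 4))   # same numbers no matter what ran before the call
print(init_new(2, 4))   # depends on seed(1) and the draws above, so the trained
                        # W1 no longer matches the expected values in the old text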

You’ll need to do the “Get a fresh copy” procedure documented on the FAQ Thread for the notebook. Then carefully “copy/paste” just your solution code in the “START HERE/END HERE” blocks. Don’t copy over the whole function text or you’ll be duplicating your mistake.

In the meantime, I will file a bug about the fact that the “expected values” are wrong, which probably means we’ll all have to do this “copy/paste” exercise again sometime in the next 24 hours. Sigh …

I have the same problem, even after updating my notebook. BTW, I noticed there is no indication of what the learning rate should be. Would this affect the answer? I believe so. So what should the correct/expected learning rate be?

I also noticed in my output that the cost becomes NaN from the second printout onward.

The following is my output:

Cost after iteration 0: 0.693198
Cost after iteration 1000: nan
Cost after iteration 2000: nan
Cost after iteration 3000: nan
Cost after iteration 4000: nan
Cost after iteration 5000: nan
Cost after iteration 6000: nan
Cost after iteration 7000: nan
Cost after iteration 8000: nan
Cost after iteration 9000: nan
W1 = [[ -16151.48761099 40094.87691492]
[-360708.80502007 896694.73776285]
[ 17194.29194027 -42664.68066471]
[ 209432.69985165 -519775.95648378]]
b1 = [[ -9454.14941358]
[-211131.57910882]
[ 10053.6841467 ]
[ 122511.81043522]]
W2 = [[ -9.46131057 -211.85626202 10.08675527 122.71208997]]
b2 = [[1998.68544]]

My bad, I had a variable naming mistake. After correction, it passed the test. Thanks.

It’s great that you were able to find the mistake under your own power. As I’m sure you also figured out, the learning rate just ends up using the default value that is declared in the function definition of update_parameters. Onward! :nerd_face:
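
For anyone else wondering, here is a simplified sketch of what “uses the default value” means. The 1.2 is the default I recall from the notebook, so treat it as illustrative and check your own update_parameters definition; the single-number parameters dict is also a simplification:

import copy

def update_parameters(parameters, grads, learning_rate=1.2):
    parameters = copy.deepcopy(parameters)   # avoid mutating the caller's dict
    parameters["W1"] = parameters["W1"] - learning_rate * grads["dW1"]
    return parameters

params = {"W1": 0.5}
grads = {"dW1": 0.1}
print(update_parameters(params, grads))         # default learning rate is used
print(update_parameters(params, grads, 0.01))   # explicit override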

Hi Paul, sorry for bothering you. I have the exact same output from my code, with a lot of nan and the exact same weights. Could you please tell me where your error was?