C2_W1 errors in function w1_unittest.test_f_of_omega(f_of_omega)

Hi - I’m getting an error that others have posted about; I figured I’d post as well to elevate the issue and help it get the prompt attention it deserves.

When attempting to run the unit-test check on f_of_omega, I’m getting what I think is a bogus error: the two matrices are identical, so no error should be raised. Screen cap below (with the line of code being tested cropped out).

Similarly, when submitting the (incomplete) document for partial grading, the first two exercises both come back with zero points, which is incorrect. I suspect an update was made to the grading codebase that introduced a bug.

Please advise whether the grading .py file needs to be deleted and reloaded into the environment, etc. I can also delete the notebook and start over, as it’s just a few lines of code. Thank you!

I know I’m replying to my own post, but I’ve been trying to debug w1_unittest.py myself, since that’s where the exercise checking is done. In reviewing the code, I’m wondering whether small floating-point differences might be causing the failure: we set the earlier variables to float32 in the first exercise of this assignment.

If so, this error might be resolved by passing a tolerance to np.allclose so that floating-point differences don’t cause failures. For example (note the atol addition):

assert np.allclose(result, test_case["expected"], atol=1e-5)
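For what it’s worth, here is a minimal sketch (hypothetical values, not the actual assignment data) of why an exact comparison can fail when float32 results are checked against float64 expectations:

```python
import numpy as np

# The same sum accumulated in float32 vs. float64 picks up tiny rounding noise.
result = np.array([0.1, 0.2, 0.3], dtype=np.float32).sum()
expected = np.array([0.1, 0.2, 0.3], dtype=np.float64).sum()

print(result == expected)                        # False: exact match fails
print(np.allclose(result, expected, atol=1e-5))  # True: the tolerance absorbs it
```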

I’m going to move the files into my own environment to test this and will report back.

1 Like

I was correct. Two errors in the w1_unittest.py code.

First error: the variables for the default_check test cases are not explicitly defined as float32, which causes a mismatch error.

The correct code should include a dtype=np.float32 argument at the end, as illustrated below, since the assignment asks us users to define the data as float32:

# Variables for the default_check test cases.
prices_A = np.array([
    104., 108., 101., 104., 102., 105., 114., 102., 105., 101., 109., 103., 93., 98., 92., 97., 96.,
    94., 97., 93., 99., 93., 98., 94., 93., 92., 96., 98., 98., 93., 97., 102., 103., 100., 100., 104.,
    100., 103., 104., 101., 102., 100., 102., 108., 107., 107., 103., 109., 108., 108.,
], dtype=np.float32)
prices_B = np.array([
    76., 76., 84., 79., 81., 84., 90., 93., 93., 99., 98., 96., 94., 104., 101., 102., 104., 106., 105.,
    103., 106., 104., 113., 115., 114., 124., 119., 115., 112., 111., 106., 107., 108., 108., 102., 104.,
    101., 101., 100., 103., 106., 100., 97., 98., 90., 92., 92., 99., 94., 91.
], dtype=np.float32)
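A quick check (hypothetical values, not the full price arrays) of why the fixture dtype matters:

```python
import numpy as np

# Without an explicit dtype, NumPy stores these literals as float64:
prices_default = np.array([104., 108., 101.])
print(prices_default.dtype)   # float64

# With the fix, the fixture matches the float32 data students are told to use:
prices_fixed = np.array([104., 108., 101.], dtype=np.float32)
print(prices_fixed.dtype)     # float32

# A value computed from float32 inputs need not match the float64 version
# bit-for-bit, which is why the fixture dtype (and a tolerance) matters.
omega = 0.7
f32 = np.mean(omega * prices_fixed)
f64 = np.mean(omega * prices_default)
print(np.allclose(f32, f64, atol=1e-5))   # True: equal within tolerance
```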

Second, to ensure that small floating-point differences do not cause an error, a tolerance should be added so that the test looks like this:

assert np.allclose(result, test_case["expected"], atol=1e-5)

Doing so will resolve the errors and produce the intended passing result when a student (i.e., me) provides the correct input. See the screen cap below for the updated result when executing this file in Google Colab with the changes made to w1_unittest.py.

Can you please update the codebase to reflect this so we can proceed with this assignment? Thank you.

1 Like

Mentor Tom has already filed a bug about this, but we have no control over when the course staff will react. I did not have any trouble passing the tests in the notebook and the grader without making any of the changes to the unit tests that you show, so that suggests there is more than one logically correct way to write this code.

If you want full grades before the course staff responds here (which could take weeks), you might want to consider other mathematically equivalent ways to write the code that have better rounding behavior. If it’s not clear how to do that, we can have a private conversation and look at your actual code.

In general it doesn’t work to submit to the grader when you have not completed all the sections of the given assignment. Frequently the template code is not syntactically valid as given, until you supply the solution. If the grader gets any kind of syntax error or exception anywhere in the notebook, then it can’t execute any of the tests and no scores will be given. 0 for all sections, even if some of them are correct.

But note that I have not really tried using the “grade up to here” mechanism. It is only implemented in some of the courses here, so I just avoid it as a matter of principle. :grinning:

I used import inspect and then print statements to get the w1_unittest code. There appear to be multiple errors, and possibly more than just the f_of_omega code. I’m not sure the f_of_omega failure is a tolerance issue, but w1_unittest.test_dLdOmega_of_omega_array(dLdOmega_of_omega_array) does appear to be one. See below:

Test case “default_check”. Wrong output of dLdOmega_of_omega_array for omega_array =
[0. 0.001 0.002 … 0.998 0.9990001 1. ]
Test for index i = 400.
Test case “extra_check”. Wrong output of dLdOmega_of_omega_array for omega_array =
[0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.90000004 1. ]
Test for index i = 5.
6 Tests passed
2 Tests failed
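For reference, the inspect trick described above can look like this (sketch only; demonstrated on a standard-library function, since w1_unittest.py is only available inside the assignment environment):

```python
import inspect

# The same technique used in the thread: import the module and print a
# function's source. In the assignment environment you would do:
#   import w1_unittest
#   print(inspect.getsource(w1_unittest.test_f_of_omega))
src = inspect.getsource(inspect.getsource)
print(src.splitlines()[0])   # first line is the function's def statement
```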

1 Like

That’s one way to see the test code. You can also click “File → Open” and then open the file directly.

“I did not have any trouble passing the tests in the notebook and the grader without making any of the changes to the unit tests that you show.” The software flagged my work as failing and will not let me go forward.

Something about the recent change to fix a different issue in this unit test (via a different ticket - adding a tolerance band so that equivalent mathematical implementations will be accepted) has caused unintended failures in other tests for this function.

It’s being investigated.

Thanks to those who have dug into the issue.

1 Like

@paulinpaloalto Thanks Paul.

In re: 1, I followed the instructions for partial grading given in the linked Coursera help article. Are you saying in your response that this isn’t a valid approach to assessing the first two exercises? I’m admittedly a little confused, if so.

In re: 2, hmmmm. I’ll go back and double-check on that front, of course. But I did write a function to compare the matrices and observed no differences at the machine level. Not sure what you might be seeing. If I can’t figure that out, I’ll message you privately. Thanks.
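A matrix-comparison helper along those lines (a hypothetical sketch, not the exact function from the post) might look like:

```python
import numpy as np

def report_differences(a, b, atol=1e-5):
    """Print any element-wise differences between two arrays above atol."""
    a, b = np.asarray(a, dtype=np.float64), np.asarray(b, dtype=np.float64)
    diff = np.abs(a - b)
    mismatched = np.flatnonzero(diff > atol)
    if mismatched.size == 0:
        print(f"No differences above atol={atol}; max |diff| = {diff.max():.3g}")
    else:
        for i in mismatched:
            print(f"index {i}: {a.flat[i]} vs {b.flat[i]} (|diff| = {diff.flat[i]:.3g})")
    return mismatched

# Identical inputs report no differences at this tolerance.
report_differences([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```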

1 Like

I’m not sure whether the partial grading method works for this course.
I never recommend using it.

1 Like

There was no new bug injected into the grader - the grader has the quirk of stopping dead whenever it gets a runtime error, and throwing misleading error messages for the entire assignment.

I call it a mis-feature. It’s part of Coursera’s grader design.

1 Like

Sorry! I need to apologize on a couple of issues:

  1. I am not running the latest version of this exercise with the recent changes that Tom describes, so most of what I said above is probably not relevant.
  2. I also didn’t examine your first post carefully enough and was comparing two different sets of output.

On the “Grade up to here” issue, that has never worked for me, but maybe it does here in these courses. It is not implemented in all the DLAI courses, but they seem to be saying it should work here. Your evidence suggests otherwise, though. More investigation required.

Sorry. I will try to be more careful in the future to make sure my comments are relevant. I spend most of my time on DLS rather than M4ML.

1 Like

The recent change to the unit test for f_of_omega is highlighted here (adding the abs_tol value). This was the only change to the entire w1_unittest.py file: adding the part after the comma.


1 Like

Thanks @TMosh – I wasn’t able to fully resolve the error until I explicitly defined each of the test cases prices_A and prices_B as float32 using the dtype argument shown above. Once I did that, the error resolved immediately. Hope this helps.

(Just to be clear, I’m referencing the w1_unittest.py file here. Once that file’s test values are explicitly set to the same float32 dtype the assignment instructs users to use for their own values, the error resolved for me.)


Just to verify what you’re running:
If you open the w1_unittest.py file, do you see the same code in the image I posted above in the test_f_of_omega() function?

@TMosh not exactly, as I’m running the w1_unittest.py file right now that I edited to quash some bugs (noted above). I can obviously just rename it and reload the environment to bring a new w1_unittest.py file in, if you’d like.

(Side note: I’m pretty sure, but not entirely, that np.allclose doesn’t accept abs_tol as a parameter as shown in your image above. That parameter is specific to math.isclose, I believe. The equivalent in np.allclose is atol, which is what I used.)
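To double-check that parameter naming, a quick sketch (hypothetical values):

```python
import math
import numpy as np

a, b = 1.0, 1.0 + 1e-7

# math.isclose takes rel_tol / abs_tol ...
print(math.isclose(a, b, abs_tol=1e-5))    # True

# ... while np.allclose takes rtol / atol; abs_tol is not a valid keyword there.
print(np.allclose(a, b, atol=1e-5))        # True
try:
    np.allclose(a, b, abs_tol=1e-5)
except TypeError:
    print("np.allclose raised TypeError for abs_tol")
```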

Just LMK; I can bring in an updated .py file for the environment if you’d like.

Yes, I noticed the use of abs_tol, and posted a question to the DLAI staff about it on the support ticket.

@TMosh If it’s okay, I can message you privately and include the .py file I used. I was able to get through the entire assignment and pass all tests after making the two edits I noted on this thread. Just LMK if you want that file.

NOW the challenge is the grader is failing and saying 0/20 everywhere. Neat. I don’t think I have ready access to that code to debug it.

For those who find this thread later - the issue with the grader was some extra cells and debugging output that had been added to the notebook.


It appears that another update to the w1_unittest.py has been published.

1 Like