Week 3 Grade stuck at 80/100

Hi

I completed the TensorFlow programming tasks, and every test message I got was “All tests passed”. But even with all tests passing, my grade is still 80/100.

Is anyone else having this problem?

3 Likes

Having the same problem here!

2 Likes

I have this in Exercise 5 (forward prop). Maybe it’s because the result does not match the expected output?

1 Like

Hey @MoonRat and @tiagobalmeida, please don’t just rely on the “All tests passed” messages. Look at the values you get and compare them with the expected values.

1 Like

Having the same problem. It says tests failed on 1 cell, but I couldn’t find which one.

6 Likes

Yeah, same problem with my submission too :frowning:

4 Likes

Well, note that there were some typos in this notebook that just got fixed in the last 24 hours. Some of the “Expected Values” shown in the notebook were not correct. Please make sure you have the latest version.

But @Mubsi’s point is also an important one: you can never assume that just passing the test cases in the notebook is enough to satisfy the grader. There are many kinds of mistakes that a single test can’t catch, e.g. referencing global variables or hard-coding the answers.
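To make that concrete, here is a hypothetical sketch (the function names and sizes are mine, not the assignment’s) of how hard-coding can pass the public test but fail a hidden one:

import numpy as np

def initialize_parameters_good(n_x, n_h):
    # Uses the formal parameters, so it works for any layer sizes.
    np.random.seed(1)
    W1 = np.random.randn(n_h, n_x) * 0.01
    return W1

def initialize_parameters_bad(n_x, n_h):
    # Hard-codes the sizes from the public test case (say 25 and 12288).
    # It prints the same values as the good version for that test,
    # but a hidden test with different sizes will fail.
    np.random.seed(1)
    W1 = np.random.randn(25, 12288) * 0.01
    return W1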

5 Likes

Having the same issue as above, but I can completely understand the sentiment that passing tests != passing the grader.

However, the grader message is extraordinarily vague:
‘Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.’

This makes it pretty difficult to debug. I will note that I (like most learners, probably) am definitely not using global variables. There are, however, exercises where we are explicitly told to hard-code values.

2 Likes

I totally agree that just saying “Test failed on 1 cell” and not even telling you which one failed is not, how shall we say, “helpful”. We have complained vigorously about this, but apparently this is a limitation of the Coursera grader platform.

2 Likes

Hey @jeffreywang, could you DM me your notebook? I shall take a look. Thanks.

1 Like

Hi there, I’m having the same problem, also stuck at 80/100 despite all tests passing.

My suspicion is that the commented-out line in Exercise 3 is the issue (since that test isn’t being run), but uncommenting it and passing in the parameters didn’t seem to work either.

Has this issue been resolved for others who have had it?

David

Hello all, I seem to be having a similar issue. In Exercise 3, I get this feedback in the notebook:

There is a mistake in the backward propagation! difference = 1.0

which is inconsistent with the next notebook panel:

Congrats, the difference is smaller than the 10e−7 threshold. So you can have high confidence that you’ve correctly computed the gradient in backward_propagation() .

Also, here is the same autograder error as above:

Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.

Thanks

That probably means your calculation of the difference is incorrect; a correct implementation would never produce a difference of exactly 1.0. Please compare your implementation of the difference with the math formulas in the text. Note in particular that the operation in the denominator is a sum, not a difference.
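For reference, the relative difference used in that exercise has this general form (my rendering of the standard formula; please double-check against the notebook’s own text):

\mathrm{difference} = \frac{\lVert \mathrm{grad} - \mathrm{gradapprox} \rVert_2}{\lVert \mathrm{grad} \rVert_2 + \lVert \mathrm{gradapprox} \rVert_2}

Note the plus sign in the denominator.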

Thanks for the feedback. I had flipped the definitions of theta_minus and theta_plus, so the difference of 1.0 makes a bit of sense. Everything works as expected now. :sweat_smile:
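For anyone else hitting this, here is a minimal 1-D sketch of the perturbation step, assuming the usual definitions in the exercise (the names J, theta, grad, epsilon are illustrative; the assignment’s version is vectorized):

import numpy as np

def gradient_check_sketch(J, theta, grad, epsilon=1e-7):
    theta_plus = theta + epsilon           # perturb upward
    theta_minus = theta - epsilon          # perturb downward
    gradapprox = (J(theta_plus) - J(theta_minus)) / (2 * epsilon)
    # Same relative-difference formula as above; the denominator is a sum.
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator

For example, with J(theta) = theta**2 at theta = 3 and grad = 6, this returns roughly 0. Swapping theta_plus and theta_minus flips the sign of gradapprox, which drives the difference to about 1.0, exactly the symptom reported above.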

Hi. I am also stuck at 80/100.

As far as I can tell, there are no obvious issues with the notebook
(all tests passed, all numerical values match the expected values).

Edit: After renewing the workspace and submitting step by step, I noticed that 80% was reached before the cost function (at that point it was explicitly mentioned in the grader output). Once I had completed the cost function (and obtained the expected result), the grader stayed at 80% and mentioned a hidden cell.

The full grader message was:
“[ValidateApp | INFO] Validating ‘/home/jovyan/work/submitted/courseraLearner/W3A1/Tensorflow_introduction.ipynb’
[ValidateApp | INFO] Executing notebook with kernel: python3
2021-09-30 13:55:12.347222: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library ‘libcudart.so.10.1’; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2021-09-30 13:55:12.347262: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-09-30 13:55:13.561545: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library ‘libcuda.so.1’; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-09-30 13:55:13.561585: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2021-09-30 13:55:13.561612: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ip-10-2-110-210.ec2.internal): /proc/driver/nvidia/version does not exist
2021-09-30 13:55:13.561843: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-30 13:55:13.588434: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2999995000 Hz
2021-09-30 13:55:13.590723: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55fddeeb6af0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-09-30 13:55:13.590756: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
[ValidateApp | ERROR] Timeout waiting for execute reply (30s).
[ValidateApp | ERROR] Interrupting kernel
Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.”

1 Like

As an update: after removing the superfluous “reshape” in the cost function, based on advice from a mentor, the grade went up to 100/100.
The numerical output remained the same.
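For illustration, here is a hedged sketch of the kind of thing that can cause this (not the assignment’s exact function; the real one may transpose or reduce differently): an extra tf.reshape with hard-coded dimensions can print the same cost for the public test case but break a hidden test that uses a different shape.

import tensorflow as tf

def compute_cost_sketch(logits, labels):
    # Let the loss work with whatever shapes it is given; no reshape needed.
    return tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(labels, logits, from_logits=True))

def compute_cost_with_reshape(logits, labels):
    # Superfluous reshape with hard-coded dimensions (numbers are illustrative).
    logits = tf.reshape(logits, (2, 6))
    labels = tf.reshape(labels, (2, 6))
    return tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(labels, logits, from_logits=True))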

5 Likes

Thank you for the update, @Irina_Malkin_Ondik.

Good luck with the next course :slight_smile:

1 Like

After rewriting my cost function, the score went from 80 to 100.

1 Like

The reply I got was like this. Does it indicate that the grading platform has stricter requirements on the execution time of the code?

1 Like

Sorry, but all those earlier messages are normal, even though they sound scary. The only one to worry about is the last line:

Tests failed on 1 cell(s): These tests could be hidden.

That means there is a bug in one of your functions. Unfortunately the grader is not able to tell you which one, for some reason that remains a mystery.

The best I can suggest is to do “Kernel → Restart & Clear Output”, then “Cell → Run All”, and then scan through the entire notebook to make sure all the tests pass.

If they all pass, the next level of analysis is to look for ways in which you might have hard-coded something (e.g. dimensions) or referenced global variables from within the body of your functions. The classic error of that sort is to reference the global variable that happens to be passed in as an argument instead of using the “formal parameter” declared in the function definition.
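A hypothetical example of that classic error (the names X_global, apply_scale, and factor are mine, purely for illustration):

import numpy as np

X_global = np.arange(6).reshape(2, 3)   # global test data defined in the notebook

def apply_scale(X, factor):
    # BUG: references the global X_global instead of the formal parameter X.
    # The public test happens to pass X_global, so the output looks correct,
    # but a hidden test that passes a different array gets the wrong answer.
    return X_global * factor

def apply_scale_fixed(X, factor):
    # Correct: uses only the formal parameter.
    return X * factor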

2 Likes