How do I submit the previous version of the Week 3 programming exercise?

The programming exercise was modified by Coursera on 7/23/2021, and I had already finished the previous version. How do I submit it? I want to submit the previous one before the deadline.
The notebook is Tensorflow_introduction_2021_07_23_23_03_44, but it doesn’t have a Submit button.

Let me also say that I did most of the new programming exercise as well, BUT there are errors or bugs in the new version that Coursera introduced, so it can’t be submitted.
I wish you had refrained from doing this, as the deadline is on the 26th.

Don’t worry about the deadlines: they are fake. There is no penalty for missing them.

The solution to the other problem is that the grader only knows how to grade the notebook with the “official” name. So you have to rename the new version out of the way and then rename your previous version to have the name that is opened by the “Work in Browser” link. Then you should see a “Submit Assignment” button in the notebook that you previously completed.

1 Like
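For anyone who wants to do the renaming from code rather than through the Jupyter file browser, here is a minimal sketch. It assumes the grader-expected filename is Tensorflow_introduction.ipynb (the grader log later in this thread shows that path) and that your saved copy has the timestamped name mentioned above; run it from a terminal or a scratch notebook, not from the notebook being renamed.

import os

# Assumed filenames -- adjust them to whatever you actually see under File > Open.
official = "Tensorflow_introduction.ipynb"  # the name the "Work in Browser" link opens
old_copy = "Tensorflow_introduction_2021_07_23_23_03_44.ipynb"  # your completed previous version

# Move the new notebook out of the way, then give your old work the official name.
os.rename(official, "Tensorflow_introduction_new.ipynb")
os.rename(old_copy, official)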

Thanks for this, Paul. I submitted the one I worked on yesterday and got a 60 as a grade. I don’t see anything wrong with my code at all; it passed all the tests. Any idea how this works?
Best regards.
GP

The grader has different tests than those in the notebook. There is no guarantee that the notebook tests catch all possible bugs. So if the grader rejects your code, there is most likely a problem that you haven’t found yet.

It’s hard to find bugs when there is no information about them.

What is the error or grader output that you get? Does it say anything besides the 60/100 score? Usually there should be more information under the “Show grader output” link.

Yes, I was trying to tease out the errors, and most seem to be related to the Nvidia CUDA driver.
I may be wrong, but in my limited experience with CUDA, the program would run in native mode without CUDA unless the libraries and environment are explicitly set to run on CUDA engines.

Here is the output, and thanks:
[ValidateApp | INFO] Validating ‘/home/jovyan/work/submitted/courseraLearner/W3A1/Tensorflow_introduction.ipynb’
[ValidateApp | INFO] Executing notebook with kernel: python3
2021-07-24 04:32:47.806948: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library ‘libcudart.so.10.1’; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2021-07-24 04:32:47.806990: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-07-24 04:32:48.942081: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library ‘libcuda.so.1’; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-07-24 04:32:48.942123: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2021-07-24 04:32:48.942150: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (somehost): /proc/driver/nvidia/version does not exist
2021-07-24 04:32:48.942442: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-24 04:32:48.949762: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2999995000 Hz
2021-07-24 04:32:48.951435: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f5b4c9a1620 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-24 04:32:48.951467: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
[ValidateApp | ERROR] Timeout waiting for execute reply (30s).
[ValidateApp | ERROR] Interrupting kernel
Tests failed on 2 cell(s)! These tests could be hidden. Please check your submission.

All that stuff about CUDA is irrelevant. It is somehow “normal”, but for whatever reason they cannot suppress those messages. The grader environment is provided by Coursera, and the course staff has to work within its constraints. The only useful information is that you failed two tests, but they apparently can’t even tell you which ones. Sigh. Sorry, I will contact you by DM in half an hour or so to see how we can proceed.

Hey @paulinpaloalto and @Gian, the assignment was revised in several places, hence the autograder was also updated accordingly. Simply renaming the old file will not work, as the new autograder expects a solution that matches the newer version.

@Gian, it seems like an inconvenience, but the old assignment had some subtle but important bugs in it. @paulinpaloalto knows, as he helped us find them. It is highly recommended, @Gian, that you work on and submit the newer version of the assignment.

As for the errors in the newer version, @Gian, comment below what those errors are, and tag me in every related post you make. I’ll take a look then.

Thanks,
Mubsi

@Mubsi: the “Expected Values” shown in the Forward Propagation section are wrong. I claim they should be:

tf.Tensor(
[[-0.13430887  0.14086473]
 [ 0.21588647 -0.02582335]
 [ 0.7059658   0.6484556 ]
 [-1.1260961  -0.9329492 ]
 [-0.20181894 -0.3382722 ]
 [ 0.9558965   0.94167566]], shape=(6, 2), dtype=float32)

@Mubsi: The expected value shown in the “Compute Cost” section is incorrect. I get a 32-bit value. Not sure if that is correct, but here’s what I get:

tf.Tensor(0.4051435, shape=(), dtype=float32)

Hey @paulinpaloalto, it would be helpful if you reported them as issues.

Hey @Gian and @paulinpaloalto , I have updated the assignment’s expected output. Please check now.

Thanks @Mubsi, but nope. I still get an error in the cost function. I checked one_hot and that’s working. I checked the categorical cross-entropy, tf.reduce_mean(tf.keras.metrics.categorical_crossentropy(...)), and the logits and labels are both shape (6, 2), which seems correct for the internal test.
The error I get is that the value and type of the output are wrong.
tf.Tensor(0.147125, shape=(), dtype=float32)


AssertionError                            Traceback (most recent call last)
in
     17 print("\033[92mAll test passed")
     18
---> 19 compute_cost_test(compute_cost, new_y_train)

in compute_cost_test(target, Y)
     13 print(result)
     14 assert(type(result) == EagerTensor), "Use the TensorFlow API"
---> 15 assert (np.abs(result - (0.25361037 + 0.5566767) / 2.0) < 1e-7), "Test does not match. Did you get the mean of your cost functions?"
     16
     17 print("\033[92mAll test passed")

AssertionError: Test does not match. Did you get the mean of your cost functions?

Expected output

tf.Tensor(0.8419182681095858, shape=(), dtype=float64)

Tried everything and no go.
Best and Thanks
GP

There are 6 classes in the test case for compute_cost. The instructions are pretty clear that the format of labels and logits needs to be (number of examples, number of classes). So that would be 2 x 6 in this instance, right?

4 Likes
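To make that shape point concrete, here is a tiny sketch (random values, only the shapes matter): forward propagation in this assignment produces tensors laid out as (classes, examples), i.e. (6, 2) here, and tf.transpose flips them to the (examples, classes) layout, i.e. (2, 6), that the cross-entropy call expects.

import tensorflow as tf

logits = tf.random.normal((6, 2))   # hypothetical output laid out as (classes, examples)
print(logits.shape)                 # (6, 2)
print(tf.transpose(logits).shape)   # (2, 6) -> (examples, classes)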

Sorry, yes. I did use tf.transpose for both logits and labels, correcting their shape to (2, 6).

Thanks! It works now. After I transposed the logits and labels, I still had a bug in the cost that was left over from an experiment with the old code:
from_logits=False is wrong; it should be True.
Thanks @Mubsi and @paulinpaloalto !

5 Likes
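Pulling the pieces from this thread together, here is a minimal sketch of the cost computation being discussed. It is not the official solution, just an illustration of the two fixes above: transpose the logits and labels to (examples, classes), and pass from_logits=True because forward propagation ends with a linear output rather than a softmax. The function and variable names are made up.

import tensorflow as tf

def compute_cost_sketch(logits, labels):
    # logits and labels arrive as (classes, examples); transpose to (examples, classes).
    per_example = tf.keras.metrics.categorical_crossentropy(
        tf.transpose(labels),
        tf.transpose(logits),
        from_logits=True)  # raw linear outputs, no softmax applied in forward prop
    return tf.reduce_mean(per_example)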

@Gian, thanks for your effort and for this from_logits=True tip. I personally feel PyTorch would be handier than TF.

1 Like

Thank you for this, but how could we have guessed from_logits=True? It is set to False by default. Thanks.

Yes, it is a problem: I don’t think Prof Ng explains this anywhere, nor do the assignment instructions specifically address this point. But notice how we defined the forward propagation function earlier: in the last (output) layer, we only do the linear activation and do not include the actual non-linear activation function, which should be softmax in this case, right? That’s the clue: we are passing just logits to the loss function. Prof Ng will always use this method from this point forward, anytime we are using TensorFlow. The reason is that it is both more efficient and more numerically stable to combine the calculation of the output activation and the loss within the loss function. As one example, it’s easier to handle the case of “saturation” of the outputs, where a sigmoid or softmax output rounds to exactly 0 or 1. If you don’t catch that case, the loss becomes NaN.
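Here is a small sketch of that saturation issue (the logits are made up, just to show the effect): computing softmax first and then the log by hand can hit log(0) when a probability underflows, while passing the raw logits with from_logits=True lets TensorFlow compute a stable log-softmax internally.

import tensorflow as tf

labels = tf.constant([[0., 0., 1.]])        # the true class is the third one
logits = tf.constant([[100., 50., -100.]])  # the network is very confidently wrong

# "Naive" two-step version: softmax by hand, then cross-entropy by hand.
probs = tf.nn.softmax(logits)               # the third probability underflows to exactly 0.0
naive = -tf.reduce_sum(labels * tf.math.log(probs), axis=-1)

# Fused version: pass the raw logits and let the loss compute log-softmax internally.
fused = tf.keras.losses.categorical_crossentropy(labels, logits, from_logits=True)

print(naive.numpy())   # [inf] -- log(0) blows up
print(fused.numpy())   # [200.] -- finite, computed stably from the logits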