Autograder for the assignment failed with a 0 grade twice for this course

The autograder for this assignment has failed twice with a grade of 0. I was able to run through all the tests successfully once before submission. However, when I try to run the tests again now, the kernel restarts consistently while executing forward_propagation_test.

Pasted output from the autograder below:

Filename: nbgrader-part
Score: 0 of 100
Grader output
[ValidateApp | INFO] Validating '/home/jovyan/work/submitted/courseraLearner/W3A1/Tensorflow_introduction.ipynb'
[ValidateApp | INFO] Executing notebook with kernel: python3
2024-03-20 01:30:42.608772: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2024-03-20 01:30:42.608910: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2024-03-20 01:30:44.373534: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2024-03-20 01:30:44.373567: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2024-03-20 01:30:44.373591: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ip-10-2-29-145.ec2.internal): /proc/driver/nvidia/version does not exist
2024-03-20 01:30:44.374372: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-20 01:30:44.404554: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2999995000 Hz
2024-03-20 01:30:44.406675: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559b2ffff860 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2024-03-20 01:30:44.406704: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
[ValidateApp | ERROR] Kernel died while waiting for execute reply.
[ValidateApp | ERROR] Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 478, in _poll_for_reply
    msg = self.kc.shell_channel.get_msg(timeout=timeout)
  File "/opt/conda/lib/python3.7/site-packages/jupyter_client/blocking/channels.py", line 57, in get_msg
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/nbgrader/preprocessors/execute.py", line 41, in preprocess
    output = super(Execute, self).preprocess(nb, resources)
  File "/opt/conda/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 405, in preprocess
    nb, resources = super(ExecutePreprocessor, self).preprocess(nb, resources)
  File "/opt/conda/lib/python3.7/site-packages/nbconvert/preprocessors/base.py", line 69, in preprocess
    nb.cells[index], resources = self.preprocess_cell(cell, resources, index)
  File "/opt/conda/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 438, in preprocess_cell
    reply, outputs = self.run_cell(cell, cell_index, store_history)
  File "/opt/conda/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 578, in run_cell
    exec_reply = self._poll_for_reply(parent_msg_id, cell, timeout)
  File "/opt/conda/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 483, in _poll_for_reply
    self._check_alive()
  File "/opt/conda/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 510, in _check_alive
    raise DeadKernelError("Kernel died")
nbconvert.preprocessors.execute.DeadKernelError: Kernel died

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/nbgrader/apps/validateapp.py", line 72, in start
    validator.validate_and_print(filename)
  File "/opt/conda/lib/python3.7/site-packages/nbgrader/validator.py", line 340, in validate_and_print
    results = self.validate(filename)
  File "/opt/conda/lib/python3.7/site-packages/nbgrader/validator.py", line 311, in validate
    nb = self._preprocess(nb)
  File "/opt/conda/lib/python3.7/site-packages/nbgrader/validator.py", line 290, in _preprocess
    nb, resources = pp.preprocess(nb, resources)
  File "/opt/conda/lib/python3.7/site-packages/nbgrader/preprocessors/execute.py", line 44, in preprocess
    raise UnresponsiveKernelError()
nbgrader.preprocessors.execute.UnresponsiveKernelError

[ValidateApp | ERROR] nbgrader encountered a fatal error while trying to validate 'submitted/courseraLearner/W3A1/Tensorflow_introduction.ipynb'

Hi @Ashkj ,

It looks like the kernel has timed out. There is no system problem being reported that I know of. Could you do a clean run:
Kernel → Restart & Clear Output
Cell → Run All
If no problem is reported, then submit your assignment.
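For a headless check, here is a minimal sketch that reproduces the grader's top-to-bottom run using the same nbconvert ExecutePreprocessor that appears in the traceback above. The filename comes from the grader log; the 600-second timeout is an assumption you may need to adjust:

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Load the submitted notebook (filename taken from the grader log above).
nb = nbformat.read("Tensorflow_introduction.ipynb", as_version=4)

# Execute every cell in order with the same python3 kernel the grader uses.
# The timeout value is a guess; raise it if individual cells run longer.
ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "."}})

print("Notebook executed top to bottom without errors.")

If this dies on the same cell, the failure is reproducible outside the grader and is likely a resource problem rather than a grader glitch.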

I get a popup with the following message:

The kernel appears to have died. It will restart automatically.

Kernel errors can occur when the notebook system runs out of memory, often when users run multiple notebooks at once. Please check the list of running notebooks and shut down any notebooks that you are not using.
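To check whether memory pressure is the culprit, a quick sketch using the third-party psutil package (an assumption; it may need to be installed first) can report RAM usage from inside the notebook:

import psutil  # third-party package; install with `pip install psutil` if missing

# Report overall RAM usage on the notebook server. Values close to 100%
# right before forward_propagation_test runs would point to an out-of-memory kill.
mem = psutil.virtual_memory()
print(f"RAM used: {mem.used / 1e9:.2f} GB of {mem.total / 1e9:.2f} GB ({mem.percent:.0f}%)")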

I figured it out. My bad.
