C4W2: Grading issues

I am currently enrolled in Course 4: Natural Language Processing with Attention Models, and I have encountered a perplexing issue with Assignment 2: Transformer Summarizer. Despite passing all the tests for the exercises, the grading system has awarded a score of 0 for each exercise. Additionally, I have noticed several console errors in my browser when attempting to submit the assignment.

Details of the issue:

Course: Natural Language Processing with Attention Models
Assignment: Transformer Summarizer (Assignment 2)
Problem: All tests passed but received a score of 0 for all exercises. Browser console errors are present during submission.

I have already followed standard troubleshooting steps, such as clearing the cache, trying different browsers, and confirming that my internet connection is stable. I’m reaching out to the community to see if anyone else has faced a similar issue or if there are steps I may have overlooked.

If any of the course instructors or technical staff could provide guidance or assistance, it would be greatly appreciated.

What is the specific feedback from the grader?

When you pass the tests in the notebook but fail the grader, it usually means that your code is not general in some way. If you get 0 for all sections from the grader, it usually means that whatever the “non-generality” problem is, it prevents the notebook from being valid, interpretable Python: if the grader can’t even run the code, you get 0 for everything. As Tom says, it would be useful to see the full text of the grader output, which you can view by clicking “Show Grader Output”.

The types of errors that I mean by “non-generality” are things like hard-coding assumptions about the dimensions of the inputs or referencing global variables from the local scope of your functions.
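
To make that concrete, here is a generic sketch (not the assignment’s actual code; the function names, shapes, and the `seq_len` global are hypothetical) of the two failure modes: a function that silently leans on a notebook global and a hard-coded dimension will pass the notebook’s tests but misbehave when the grader calls it with different inputs.

```python
import numpy as np

seq_len = 150  # a global set up earlier in the notebook

# Non-general: hard-codes a depth of 64 instead of reading it from the
# inputs. The notebook tests happen to use depth 64, so they pass, but
# the grader calls the function with other shapes and gets wrong results.
def scaled_scores_bad(q, k):
    scores = np.matmul(q, np.swapaxes(k, -1, -2))
    return scores / np.sqrt(64)  # should be q.shape[-1]

# General: every quantity is derived from the function's own arguments,
# so it works for any batch size, head count, sequence length, or depth.
def scaled_scores_good(q, k):
    depth = q.shape[-1]
    scores = np.matmul(q, np.swapaxes(k, -1, -2))
    return scores / np.sqrt(depth)
```

The same trap applies to referencing a variable like `seq_len` from inside a function instead of taking it from the tensor shapes: the grader’s test inputs won’t match the notebook’s globals.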

I appreciate your prompt responses and guidance!

The specific feedback I received from the grader is as follows: “There was a problem compiling the code from your notebook. Details: Exception encountered when calling layer ‘softmax_3’ (type Softmax). {{function_node _wrapped__AddV2_device/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes: [1,2,2,150] vs. [1,1,1,2] [Op:AddV2] name: Call arguments received by layer ‘softmax_3’ (type Softmax): • inputs=tf.Tensor(shape=(1, 2, 2, 150), dtype=float32) • mask=tf.Tensor(shape=(1, 1, 1, 2), dtype=float32)”

I appreciate your help!

Regarding the non-generality problem, I understand that there might be a dimension mismatch or an inappropriate global variable reference. I’ve rechecked my code for hard-coded dimensions or global references but couldn’t find any apparent issues. The error seems to suggest a shape incompatibility between tensors during an operation in the ‘softmax_3’ layer.

Ok, I don’t know the details of that assignment, but the error message seems pretty clear: you have mismatched tensor shapes. So if that error did not occur when you ran the notebook, then it must be some form of the “non-generality” problem I discussed. Please take another look. If you can’t find it, then we’ll have to go to “plan B” and look at your code. Since I’ve never looked at that assignment before, that might take a while, and my day is pretty heavily scheduled today.
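
For what it’s worth, the failing op in your error is an `AddV2`, i.e. the mask being combined with the attention scores before the softmax, and the shapes follow ordinary broadcasting rules: the mask’s last dimension must match the scores’ last dimension (the key sequence length, 150 here), while the other dimensions can be 1. Your mask’s last dimension is 2, which suggests it was built from the wrong tensor. A NumPy sketch using the exact shapes from your error message (my reconstruction, not the assignment’s code):

```python
import numpy as np

# Shapes taken from the grader error:
scores = np.zeros((1, 2, 2, 150))      # (batch, heads, seq_len_q, seq_len_k)
mask_wrong = np.zeros((1, 1, 1, 2))    # last dim 2: built from the wrong tensor
mask_right = np.zeros((1, 1, 1, 150))  # last dim matches seq_len_k: broadcasts

try:
    scores + (mask_wrong * -1e9)  # the same add the Softmax layer performs
except ValueError as e:
    print("Incompatible shapes:", e)

masked = scores + (mask_right * -1e9)  # broadcasts cleanly to (1, 2, 2, 150)
```

So a useful thing to check is which mask you are passing at each attention call: a mask whose last dimension is the *query* length (or a mask from the wrong input sequence) produces exactly this kind of mismatch.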

I will send you a DM about how to share the code, but the hope is that one of the actual NLP mentors will notice this thread and have a better answer than I can give.

Hello Sir, I had the same problem with this assignment. A mentor checked it for me, but he couldn’t find the error. I have tried many things and it still does not work.