All tests pass in the notebook, but when I submit it for grading, I get errors. Can you help me with that? I believe this is a technical glitch. Several times I have lost all my assignment work when the system reset it (including this assignment, which I had to do twice).
Now my assignment has been reset to the original version again and I have lost all my work; this happens a lot in the NLP assignments. It is very annoying, and I have to redo everything.
This time I had saved the assignment as a backup, but that is also gone.
I finished scaled_dot_product_attention again and submitted for grading to make sure that at least that exercise gets graded, but now I get this error:
'NoneType' object is not callable
As you can see in the left column of the screenshot I posted above, all tests passed.
I don't want to share the code, but passing the tests locally and then failing in the grader has never happened to me before.
On the DLAI platform, whenever I finish writing the code for an exercise I save my work by clicking the save icon; that has helped me a lot with this problem of assignments restarting from scratch.
Passing a unit test never confirms that all your code matches what the autograder expects; there can be conflicts from variables reused across cells, hard-coded values, or implementation differences.
So if you want to resolve the issue, click on my name and message me screenshots of all the graded function codes so I can review your code first, as a first checkpoint in such scenarios.
Then, if the code is fine, we can move on to why your assignment is failing on submission.
Also, just to be sure: before submission, did you click Save and then Submit?
Also letting you know, there is a known practice when we change code after a failed submission: clear all kernel output, restart the kernel, re-run the cells individually from start to end, save the work again by clicking the save icon, and then submit. Sometimes this alone has resolved such issues, provided the code matches what the autograder expects.
scale matmul_qk with the square root of dk
In this step, instead of using tf.divide, use the / operator to calculate the scaled attention logits.
For the code comment
softmax is normalized on the last axis (seq_len_k) so that the scores add up to 1.
As it already says, softmax is normalized on the last axis by default, so you do not need to pass axis=-1 when calculating the attention weights.
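Putting those hints together, here is a minimal NumPy sketch of scaled dot-product attention (illustrative only: the assignment uses TensorFlow, and the function and variable names here are assumptions, not the graded solution):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Sketch of scaled dot-product attention (NumPy stand-in for the
    TensorFlow exercise; names are illustrative, not the graded code)."""
    matmul_qk = q @ k.T                       # (seq_len_q, seq_len_k)
    dk = k.shape[-1]                          # depth of the keys
    # scale matmul_qk with the square root of dk, using `/` (not tf.divide)
    scaled_attention_logits = matmul_qk / np.sqrt(dk)
    # softmax over the last axis (seq_len_k) so each row of scores sums to 1
    exp = np.exp(scaled_attention_logits
                 - scaled_attention_logits.max(axis=-1, keepdims=True))
    attention_weights = exp / exp.sum(axis=-1, keepdims=True)
    output = attention_weights @ v            # weighted sum of the values
    return output, attention_weights
```

In the TensorFlow version, tf.nn.softmax already defaults to the last axis, which is why the axis=-1 argument is redundant there.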
If you still encounter the same feedback: in my previous response here I clearly asked you to send me screenshots of your graded function codes, not the error.
The error output in the grader feedback is overall, general feedback from reviewing your whole notebook, not per exercise, since one error may have caused subsequent errors in other exercises' test cells.
I understand resolving a code issue can be painstaking, but as a volunteer mentor I can only check your code first and then inform staff that there might be an issue; that is why, when I asked for code screenshots, I wanted to review all your code.
I have sent you, by personal DM, a detailed screenshot of the code issues which could have resulted in this grader error feedback.
Please review the screenshots, make the corrections, and submit after first getting a fresh copy.
Your next_word code cell had an incorrect input for the decoder: you used output where you were supposed to use encoder_input, since the inputs of both the encoder and decoder were the same.
Another issue in the same cell: training was supposed to be set to False, but you had set it to True, resulting in a mismatch between your output and the expected output.
The other corrections aren't really a major concern, but I have mentioned them just to make sure everything matches the autograder.