Hi all, I’m stuck on Ex6. All the previous functions run fine, but this one fails, so I don’t think I’m dragging an error along from earlier; it must come from this exercise.
I’ve tried to find the error but can’t see it. Could someone look at my notebook and help? Thanks!

What this error suggests is that the test function expects a tuple of an int and a float, which is what next_symbol() returns:

### END CODE HERE ###
return symbol, float(log_probs[symbol])

So the symbol should be of type int. Make sure that after calling the tl.logsoftmax_sample(..., temperature=...) method, you wrap the result in int().
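A minimal sketch of why the int() wrap matters: samplers like tl.logsoftmax_sample typically return a NumPy integer, which is not a Python int, so a strict type check fails. Here np.argmax stands in for the sampler (the values are illustrative, not from the assignment):

```python
import numpy as np

# Stand-in for tl.logsoftmax_sample(log_probs, temperature=...):
# NumPy reductions return a NumPy integer, not a Python int.
sampled = np.argmax(np.array([0.1, 0.7, 0.2]))
print(type(sampled))   # a numpy integer type, not <class 'int'>

symbol = int(sampled)  # wrap in int() so the test's type check passes
print(type(symbol))    # <class 'int'>
```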

The problem could also lie in the trickiest part of the exercise, the log_probs. The hints suggest:

The log probabilities output will have the shape: (batch size, decoder length, vocab size). It will contain log probabilities for each token in the cur_output_tokens plus 1 for the start symbol introduced by the ShiftRight in the preattention decoder. For example, if cur_output_tokens is [1, 2, 5], the model will output an array of log probabilities each for tokens 0 (start symbol), 1, 2, and 5. To generate the next symbol, you just want to get the log probabilities associated with the last token (i.e. token 5 at index 3). You can slice the model output at [0, 3, :] to get this. It will be up to you to generalize this for any length of cur_output_tokens.

In other words, the output from the model will have shape (batch size, decoder length, vocab size), and you need to figure out an index for each of these dimensions. Hint: for the batch size it is 0, for the decoder length it should be your token length (before padding), and for the vocab size you should take all the values:
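The slicing described above can be sketched like this (shapes and variable names are illustrative assumptions, using random data rather than a real model):

```python
import numpy as np

vocab_size = 8
cur_output_tokens = [1, 2, 5]           # tokens generated so far
token_length = len(cur_output_tokens)   # length BEFORE padding

# Pretend model output with shape (batch size, decoder length, vocab size).
# Decoder length is token_length + 1 because ShiftRight prepends a start symbol.
output = np.random.rand(1, token_length + 1, vocab_size)

# Log probabilities for the NEXT symbol: batch 0, position token_length,
# all vocab entries — for [1, 2, 5] this is the slice [0, 3, :].
log_probs = output[0, token_length, :]
print(log_probs.shape)  # (8,)
```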

If both of the above are correct, you could private message me your Assignment notebook and I would take a look at it.

Hello- I am having a somewhat similar problem to the poster above.

Here is my output from the test function, including some additional print statements at the end of my next_symbol function that show that the function is indeed producing a tuple, of length 2, with an int and a float, and whose values match the expected output from the test function (as seen in the original poster’s image):

I can see from your screenshot that you changed what the function should return:

### END CODE HERE ###
return symbol, float(log_probs[symbol])

Note:
Before submitting your assignment to the AutoGrader, please make sure of the following:

You have not added any extra print statement(s) in the assignment.

You have not added any extra code cell(s) in the assignment.

You have not changed any of the function parameters.

You are not using any global variables inside your graded exercises. Unless specifically instructed to do so, please refrain from this and use local variables instead.

You are not changing the assignment code where it is not required, like creating extra variables.

Yes. The commented out print statements show that the output of the function is a tuple of length two, where the first element (symbol) is of class ‘int’ and the second element is a ‘float’. These print outputs can be seen in the image from my first post.

I was able to solve the issue by looking at examples in another thread related to this function. I think the problem arose from an odd way I was shaping the padded_with_batch array.

In # UNQ_C2 you forgot to pass the correct mode to tl.ShiftRight.

In # UNQ_C3 the mask should preferably use !=.
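For illustration, a != comparison against the padding token marks real tokens as True and padding as False (assuming, as is common, that 0 is the pad token):

```python
import numpy as np

tokens = np.array([5, 7, 2, 0, 0])  # assumption: 0 is the padding token
mask = (tokens != 0)                # True where there is a real token
print(mask)                         # [ True  True  True False False]
```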

Bigger mistakes: you use the global variable model instead of NMTAttn, and you also use a print statement (read the important points at the top of the assignment).

In # UNQ_C6 you incorrectly defined padded_with_batch. Hints: use np.array() to convert the padded variable from a list to a NumPy array, and right after that add an empty first dimension with [None, :].
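Those two hints together look like this (the token values are made up; only the shape manipulation matters):

```python
import numpy as np

padded = [1, 2, 5, 0, 0]  # padded token list (illustrative values)

# Convert the list to a NumPy array, then add an empty leading batch dimension.
padded_with_batch = np.array(padded)[None, :]
print(padded_with_batch.shape)  # (1, 5)
```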

Please remove your solution notebook from this forum, because it is against the rules.

I’m not able to pass the tests for UNQ_C6. I have passed the prior UNQ’s and have followed the instructions to define log_probs. However, I get a different output token and probability than expected. Please help. @arvyzukai