AssertionError: Wrong values in translation

I read all the topics related to this exercise and I didn't find an answer. All the previous tests passed.

This is my code. I modified the init method a bit to make sure the linear layer has the same number of units as the softmax. But it doesn't work either with or without that change.

{moderator edit: code removed}

and the error is

AssertionError                            Traceback (most recent call last)
----> 2 Transformer_test(Transformer, create_look_ahead_mask, create_padding_mask)

~/work/W4A1/ in Transformer_test(target, create_look_ahead_mask, create_padding_mask)
    286 assert np.allclose(translation[0, 0, 0:8],
    287                    [0.017416516, 0.030932948, 0.024302809, 0.01997807,
--> 288                     0.014861834, 0.034384135, 0.054789476, 0.032087505]), "Wrong values in translation"
    290 keys = list(weights.keys())

AssertionError: Wrong values in translation

No, do not do that. The init code is outside of the area you are required to modify.

Yes, I know; it doesn't work without it either. I didn't do it to make my life easier :sweat_smile:

Should I send you a link to the lab, maybe? I saw that a few other people had errors in previous functions which had also passed the tests.

If you had to change the init() code, that means you had a different error somewhere else, and tried to fix it by making a second error.

Double-check your argument to self.decoder. It shouldn’t be the input_sentence.
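To make the intended data flow concrete, here is a minimal sketch with hypothetical stub classes (`StubEncoder` and `StubDecoder` are placeholders, not the assignment's real layers, and `D_MODEL` is an invented toy size): the decoder receives the target sentence together with the encoder output, so its output length matches the target sequence, not the input sequence.

```python
import numpy as np

D_MODEL = 4  # toy embedding size, chosen only for this illustration

class StubEncoder:
    def __call__(self, x, training, mask):
        # (batch, inp_seq_len) -> (batch, inp_seq_len, d_model)
        return np.zeros(x.shape + (D_MODEL,))

class StubDecoder:
    def __call__(self, tgt, enc_output, training, look_ahead_mask, padding_mask):
        # The decoder consumes the TARGET sentence plus the encoder output;
        # passing input_sentence here would give the wrong output length.
        return np.zeros(tgt.shape + (D_MODEL,)), {}

encoder, decoder = StubEncoder(), StubDecoder()
input_sentence = np.zeros((2, 7), dtype=int)   # 7 input tokens per example
output_sentence = np.zeros((2, 5), dtype=int)  # 5 target tokens per example

enc_output = encoder(input_sentence, training=False, mask=None)
dec_output, attention_weights = decoder(output_sentence, enc_output,
                                        training=False,
                                        look_ahead_mask=None,
                                        padding_mask=None)
print(dec_output.shape)  # (2, 5, 4): target_seq_len, not input_seq_len
```

If the decoder had been fed `input_sentence` instead, `dec_output` would come out with length 7 and the translation values could not match the test's expected tensor.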

One more thing. I just opened the updated notebook of this assignment (C5 W4 A1: Transformer), but I don't see any self.linear there. Did you add that term to your copy of the assignment yourself?

As I said, yes, because my code didn't work, and it still doesn't work now. So, what do you suggest I do? I'm sorry, what I pasted in this topic was an earlier version of my code. It doesn't work with output_sentence passed to the decoder either.

Copy this code:

class Transformer(tf.keras.Model):
    """
    Complete transformer with an Encoder and a Decoder
    """
    def __init__(self, num_layers, embedding_dim, num_heads, fully_connected_dim, input_vocab_size,
                 target_vocab_size, max_positional_encoding_input,
                 max_positional_encoding_target, dropout_rate=0.1, layernorm_eps=1e-6):
        super(Transformer, self).__init__()

        self.encoder = Encoder(num_layers=num_layers,
                               # (remaining keyword arguments truncated in the original post)
                               )

        self.decoder = Decoder(num_layers=num_layers,
                               # (remaining keyword arguments truncated in the original post)
                               )

        self.final_layer = Dense(target_vocab_size, activation='softmax')

    def call(self, input_sentence, output_sentence, training, enc_padding_mask, look_ahead_mask, dec_padding_mask):
        """
        Forward pass for the entire Transformer

        Arguments:
            input_sentence -- Tensor of shape (batch_size, input_seq_len, fully_connected_dim)
                              An array of the indexes of the words in the input sentence
            output_sentence -- Tensor of shape (batch_size, target_seq_len, fully_connected_dim)
                               An array of the indexes of the words in the output sentence
            training -- Boolean, set to true to activate
                        the training mode for dropout layers
            enc_padding_mask -- Boolean mask to ensure that the padding is not
                                treated as part of the input
            look_ahead_mask -- Boolean mask for the target_input
            dec_padding_mask -- Boolean mask for the second multihead attention layer
        Returns:
            final_output -- Describe me
            attention_weights -- Dictionary of tensors containing all the attention weights for the decoder,
                                 each of shape (batch_size, num_heads, target_seq_len, input_seq_len)
        """
        # START CODE HERE
        # call self.encoder with the appropriate arguments to get the encoder output
        enc_output = None  # (batch_size, inp_seq_len, fully_connected_dim)
        # call self.decoder with the appropriate arguments to get the decoder output
        # dec_output.shape == (batch_size, tar_seq_len, fully_connected_dim)
        dec_output, attention_weights = self.decoder(None, None, None, None, None)
        # pass decoder output through a linear layer and softmax (~2 lines)
        final_output = None  # (batch_size, tar_seq_len, target_vocab_size)
        # END CODE HERE

        return final_output, attention_weights

Just write your code between these two lines:

    # START CODE HERE
    # END CODE HERE
No need to add anything else. You don't need to add any `self.linear`.
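Since the template's `__init__` already defines `self.final_layer = Dense(target_vocab_size, activation='softmax')`, the last step is just applying that layer to the decoder output. As a hedged illustration (plain numpy with random hypothetical weights rather than Keras), this is all a softmax-activated dense layer computes, which is why a separate `self.linear` is redundant:

```python
import numpy as np

# Sketch of Dense(target_vocab_size, activation='softmax') in numpy:
# one affine map followed by softmax over the last axis.
rng = np.random.default_rng(0)
batch, tar_seq_len, d_model, vocab = 2, 5, 8, 11
dec_output = rng.normal(size=(batch, tar_seq_len, d_model))
W = rng.normal(size=(d_model, vocab))  # hypothetical learned weights
b = np.zeros(vocab)                    # hypothetical learned bias

logits = dec_output @ W + b
# numerically stable softmax over the vocabulary axis
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
final_output = exp / exp.sum(axis=-1, keepdims=True)

print(final_output.shape)  # (2, 5, 11): one distribution per target position
```

Each position of `final_output` is a probability distribution over the target vocabulary, so the rows sum to 1; stacking an extra linear layer before this one only changes the weights the test expects.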

You updated the lab and I lost all of yesterday's work :sweat_smile:

You can still access your old notebook. Click on the Jupyter Notebook logo, then W4A1, and you will see a file like C5_W4_A1_Transformer_Subclass_v1_2023_05_31_05_31_20 or some other name. That is your old notebook. Go ahead and grab yesterday's hard work from it.

Yes, it's fine now. What did you do? Why is it working now? :sweat_smile: I tried restarting the kernel yesterday, but it didn't help.

Your code was wrong. You added unnecessary terms like self.linear.

As I said, it didn't work before that either.

By the way, I can't pass the assignment now; the grader reports this error:

Cell #3. Can't compile the student's code. Error: AssertionError('You must return a numpy ndarray')

All tests are fine

Because there were also some other bugs. If your code didn't work, it means the code you wrote is incorrect. The rest of the code provided to you has been tested several times and is correct.

Check which test is in Cell #3. Is it get_angles_test? Check that code.
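For reference, here is a hedged numpy sketch of what a `get_angles`-style positional-encoding helper is expected to return (the signature and formula are assumed from the standard sinusoidal-encoding definition, not copied from the notebook). The grader's "You must return a numpy ndarray" assertion fires when such a function returns a Python list or scalar instead of the broadcast array:

```python
import numpy as np

def get_angles(pos, k, d):
    # pos: column vector of positions, k: row vector of dimension indices,
    # d: encoding dimension. Broadcasting pos / 10000^(2*(k//2)/d) yields
    # a (num_positions, d)-shaped numpy ndarray, not a list.
    i = k // 2
    angles = pos / np.power(10000, (2 * i) / d)
    return angles

a = get_angles(np.arange(3)[:, None], np.arange(4)[None, :], 4)
print(type(a).__name__, a.shape)  # ndarray (3, 4)
```

Wrapping the result in `list(...)`, or returning the exponent instead of the quotient, are the kinds of slips that make the grader's ndarray check fail even when the notebook's visible tests look fine.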

This is the 4th week of the fifth course. This is an opportunity for you to use what you’ve learned about debugging your code.

Yes. And one more thing. Submit the latest Notebook.

So, should I copy and paste everything into it?

No. Only your code. The rest of it might have changed. But also check whether the latest notebook changed the provided code as well.

{moderator edit: reply removed, as it gives advice that should not be followed - based on making incorrect modifications to the code provided in the assignment}