C4W1_Assignment - Exercise 5

Hi

I’m currently working on a TensorFlow project and have encountered several challenges while implementing the translate function, as per the assignment instructions. I would greatly appreciate your guidance on the following issues:

  1. Tokenizing Input Text:

    • I attempted to use tf.lookup.StaticVocabulary for tokenization but encountered attribute errors.
    • I also faced issues while splitting the text tensor directly and with other tokenization approaches (a toy sketch of one such attempt appears at the end of this post).
  2. Decoder State Initialization:

    • I tried initializing the decoder states as zero tensors, but this resulted in shape/type mismatches.
    • I’m unsure about the correct method to properly create the initial state of the decoder.
  3. Iterating the Decoding Loop:

    • The decoding loop either does not run or fails to provide the next token predictions.
    • It seems there might be an issue with how I’m calling the decoder.

Your expert advice would be immensely helpful on three points: the correct way to initialize/load the vocabulary for tokenizing the input sentence, how to properly create the decoder state tensors, and how to iterate the decoding loop by calling the decoder model.
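For illustration, here is the kind of standalone tokenization I experimented with (a toy sketch using Keras TextVectorization with made-up data, not the assignment’s provided vectorizer):

import tensorflow as tf

# Build a toy vectorizer and learn a vocabulary from a tiny made-up corpus
vectorizer = tf.keras.layers.TextVectorization()
vectorizer.adapt(["i love languages", "languages are fun"])

# Map a sentence to integer token ids (out-of-vocabulary words become 1)
tokens = vectorizer(tf.constant(["I love languages"]))
print(tokens)  # e.g. tf.Tensor([[5 6 2]], shape=(1, 3), dtype=int64)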

Kind regards,

1 Like

Hi @RyeToast

Note the cell above Exercise 5 - it shows almost exactly what steps you should take (and which functions you need). But note that it’s not exactly the same - use the hints in the graded exercise.

Cheers

1 Like

I’ve run into another issue in my TensorFlow project, specifically related to the implementation of the translate function, despite following the guidelines and examples provided.

Issue Description:
I’m working on implementing a translate function as part of a neural machine translation model. The function is designed to process a sentence, vectorize it, encode it using the model’s encoder, set the initial state of the decoder, and iteratively generate predictions for the next token. However, I’m encountering a NameError during the vectorization step.

Error Details:
The specific error message I receive is as follows:

NameError Traceback (most recent call last)
Cell In[34], line 6
      3 temp = 0.0 
      4 original_sentence = "I love languages"
----> 6 translation, logit, tokens = translate(trained_translator, original_sentence, temperature=temp)
      8 print(f"Temperature: {temp}\n\nOriginal sentence: {original_sentence}\nTranslation: {translation}\nTranslation tokens:{tokens}\nLogit: {logit:.3f}")

Cell In[33], line 10, in translate(model, text, max_length, temperature)
      7 texts = tf.convert_to_tensor([text])[tf.newaxis]
      9 # Vectorize the text using the correct vectorizer
---> 10 vectorized_text = your_text_vectorization_layer(texts)  # Replace with your actual text vectorization layer or function
     11 context = model.encoder(vectorized_text)
     13 # Set the initial state of the decoder

NameError: name 'your_text_vectorization_layer' is not defined

This error occurs when I try to vectorize the input text using a function or layer that is supposed to transform the text into a format compatible with the model.

Attempts to Resolve:

  • I’ve tried various approaches to vectorize the input text as per the assignment instructions and the examples provided in the course materials.
  • I’ve also reviewed the related discussion board posts and attempted to implement the suggested solutions, but the error persists.

Request for Guidance:
Could you please provide guidance on the following aspects:

  • The correct method for text vectorization in the context of this model.
  • How to integrate the text vectorization process with the translate function to ensure compatibility with the model’s encoder.
  • Any insights or specific steps to address the NameError and ensure the function operates as intended.

1 Like

Addendum: TypeError in Translate Function Implementation

I wanted to add a point to my previous message regarding an issue I’ve encountered in the translate function of my TensorFlow project.

New Issue - TypeError:
After implementing the function based on your initial guidance, I’m now encountering a TypeError:

TypeError: translate() got an unexpected keyword argument 'temperature'

This error occurs when I call the translate function with a ‘temperature’ argument. My understanding was that this argument is used to control the randomness of the token predictions during the translation process. Here’s a summary of the steps I followed based on your guidance and course materials:

  • Converting the input text to a tensor.
  • Vectorizing and encoding the text using the model’s encoder.
  • Initializing the state of the decoder.
  • Implementing the decoding iteration loop.

Seeking Further Clarification:
Despite these efforts, the issue persists, leading me to seek your further guidance on the following:

  • Should the translate function definition include ‘temperature’ as a parameter? If so, how should it be integrated?
  • How does the decoding process interact with the ‘temperature’ hyperparameter, and what role does it play?
  • Are there any modifications needed in the decoding loop logic to appropriately handle the ‘temperature’ parameter?

Any specific guidance or sample code that can help resolve this error and clarify the proper use of the ‘temperature’ argument would be immensely valuable.

I’m eager to understand where I might be going wrong and to learn from this experience.

1 Like

Hi @RyeToast

I already told you this, but I have to reiterate: do not modify code outside:

    ### START CODE HERE ###
...
    ### END CODE HERE ###

The only place you should implement your solutions is between these tags.

So, the error most probably arises because you changed the translate() function’s parameters (the temperature=0.0 default in particular).
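For reference, the signature visible in your tracebacks looks like this (the max_length default is my assumption):

def translate(model, text, max_length=50, temperature=0.0):
    ...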

Your understanding is correct.

You need this parameter when you call the generate_next_token() function, one of whose parameters is temperature.

The decoding process does not interact with the temperature at the model level; what matters is how the model’s predictions are used. A temperature of 0.0 means greedy decoding: always choosing the most probable (according to the model) token. A higher temperature means more “creativity”: sometimes choosing not the most probable token but the second most probable, the third, or another token. You can see exactly how it is used in the generate_next_token() function.
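A toy illustration of the mechanism (this is not the assignment’s generate_next_token, just the general idea):

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.5]])  # model scores for 3 candidate tokens

# Temperature 0.0 corresponds to greedy decoding: always take the argmax
greedy_token = tf.argmax(logits, axis=-1)

# Temperature > 0.0 samples instead; dividing by the temperature flattens
# (high temperature) or sharpens (low temperature) the distribution
temperature = 0.7
sampled_token = tf.random.categorical(logits / temperature, num_samples=1)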

No, looping does not directly depend on the temperature parameter.

1 Like

You don’t have a your_text_vectorization_layer - this is the reason for your error. In your case, your text vectorization layer is english_vectorizer, and furthermore its output needs to be converted to a dense tensor (as I said, look at the cell above the exercise).

1 Like

Thank you for your detailed response and for clarifying the issues with my implementation. I greatly appreciate the time you’ve taken to guide me and help me understand where I was going wrong.

You’re right – modifying anything outside the designated START/END tags, especially the translate function definition, was a mistake on my part. I now understand the importance of adhering strictly to the assignment instructions and will ensure to follow this approach in the future.

Your insights on how the decoding process interacts with the temperature parameter, and the role of the generate_next_token function, have significantly enhanced my understanding of the model’s workings.

Regarding the text vectorization issue, I now recognize the necessity of using the provided english_vectorizer rather than assuming a custom function. This clarification has been instrumental in resolving my confusion.

I sincerely apologize for any frustration caused by my oversight and am grateful for your patience. Your guidance is invaluable to my learning journey. For my next steps, could you advise if there are specific aspects of the translate function I should focus on or revisit, considering my earlier misunderstandings? I’m eager to apply your feedback correctly and improve my implementation.

Again, thank you for your assistance and for being a knowledgeable and patient instructor. Your support is greatly appreciated.

2 Likes

I’m reaching out for guidance on implementing the translate function in our TensorFlow project. Despite following the instructions closely and ensuring modifications only within the START/END tags, I’ve encountered an AttributeError related to the english_vectorizer.

Issue Description:

  • When attempting to vectorize the input text using model.english_vectorizer, I receive an AttributeError, indicating that this attribute does not exist in the Translator model.
  • I’ve focused on ensuring the input text is properly tokenized and formatted for the encoder.

Specific Points of Clarification:

  • Could you guide me on the correct use of english_vectorizer for text vectorization, especially since it doesn’t seem to be a direct attribute of the Translator model?
  • How can I ensure that the encoder is correctly integrated with the vectorized input?
  • Is there a specific approach to incorporate an external text vectorization method within the structure of the translate function?

Your insights or any additional examples on properly handling text vectorization and integrating it with the encoder would be immensely helpful.

Thank you for your assistance. :pray:t5:

1 Like

Hi @RyeToast

I appreciate your detailed questions, which make advising smoother.

You’re correct - it’s not part of the Translator model. english_vectorizer is provided for you - note the very first cell of the Assignment:

from utils import (sentences, train_data, val_data, english_vectorizer, portuguese_vectorizer, 
                   masked_loss, masked_acc, tokens_to_text)

An example of how to use it is also provided in the cell above the exercise:
context = english_vectorizer(texts).to_tensor()

Again, the cell above shows how the encoder can use the vectorized input:
context = encoder(context)
but note that in your exercise you should use model.encoder(context), since here the encoder is part of the Translator class.
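Putting those pieces together, the pattern looks roughly like this (english_vectorizer and model come from the assignment; the sentence is just an example):

import tensorflow as tf

text = tf.constant(["I love languages"])        # the string as a tensor
context = english_vectorizer(text).to_tensor()  # vectorize; ragged -> dense tensor
context = model.encoder(context)                # encode with the Translator's encoder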

I’m not sure I completely understand the question. If I understand correctly, the “external vectorization method” you are talking about is the english_vectorizer, and I showed you that it is perfectly normal to use it inside the function (the same way you use other global variables like tf, np, list, or any other).

Cheers

1 Like

Hello.

I’m reaching out to seek guidance on an InvalidArgumentError in my TensorFlow translation model’s attention mechanism.

The specific error is:

InvalidArgumentError: Expected dimension 128 at axis 0 but got dimension 256 [Op:Einsum]  

This appears to be caused by a dimension mismatch between the LSTM units I set to 256 and an expected dimension of 128 in the MultiHeadAttention layer.

I would greatly appreciate your insights on:

  1. Configuring the attention layer dimensions to properly match the LSTM layer shape

  2. Ensuring the query, key, value tensors in attention have compatible dimensions

Despite my efforts to follow previous guidance, this issue has persisted. Any pointers on the right approach to coordinate the LSTM and attention mechanisms would be invaluable.
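For context, my understanding of how the shapes should line up is captured in this self-contained toy sketch (the layer settings and UNITS value are my assumptions, not the assignment code):

import tensorflow as tf

UNITS = 128  # use one constant everywhere; mixing 128 and 256 is what triggers the Einsum error

lstm = tf.keras.layers.LSTM(UNITS, return_sequences=True)
attention = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=UNITS)

x = tf.random.uniform((1, 7, UNITS))           # (batch, seq_len, features)
context = lstm(x)                              # (1, 7, UNITS)
out = attention(query=context, value=context)  # matching shapes, no InvalidArgumentError
print(out.shape)                               # (1, 7, 128)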

1 Like

Addendum:

Additionally, I wanted to highlight some specific challenges faced with implementing the translate function itself:

  • Utilizing the english_vectorizer for input text
  • Explicitly converting any ragged tensors to dense formats
  • Initializing the decoder state properly before the decoding loop

While I’ve tried addressing these based on our prior discussion, I’m still facing some blockers:

  1. Accessing encoder/decoder dependencies within the required code tags

  2. Correctly running the iteration to generate translated tokens

I sincerely apologize for reaching out about these recurring issues again. I would greatly appreciate any insights you can provide on:

  • What is the right way to access trained model components within graded function scopes?

  • Are there any examples you can share on iterating decoder calls to translate text?

1 Like

Hello, @arvyzukai

I’ve been working hard on implementing the English-to-Portuguese translate function as part of the machine translation model assignment.

I’ve tried encoding the input text using the provided english_vectorizer, initializing the decoder state, and running the decoding loop to generate translated tokens. However, I’m still running into issues that have been blocking progress for some time now.

I want to apologize for the multiple messages related to this function - but would sincerely appreciate any additional guidance you can offer to help identify my knowledge gaps.

Some key challenges faced:

  • Correctly processing and encoding the input sentence
  • Iteratively generating tokens using the decoder, managing state updates, and EOS conditions

I’ve reviewed the model architecture and previous exercises but can’t pinpoint the root cause. Could you provide guidance or examples, particularly on integrating the encoder with the decoder and handling token generation in the translate function?

I welcome any pointers to resources, documentation, or tutorials covering encoder-decoder models and inference concepts at a remedial level if you feel that could help solidify foundations.

1 Like

Hi @RyeToast

Forgive me for not answering your questions straight away. I hoped you would be able to complete this exercise on your own, since most of it is almost identical to the cell before the exercise.

So, if you have a fresh copy of the exercise, here are the steps (without revealing code) you need to take:

    ### START CODE HERE ###
    
    # PROCESS THE SENTENCE TO TRANSLATE
    
    # Convert the original string into a tensor
    text = 

Here the code is almost identical to the cell above after this comment:

# Convert it to a tensor

the only difference is that there we used eng_sentence, while in your code you would use text.


The second line asks you:

    # Vectorize the text using the correct vectorizer
    context =

This is again almost identical to the cell above after the comment:

# Vectorize it and pass it through the encoder

The difference is that there the variable is texts while here we use text (singular).


For the third line you have to implement:

    # Get the encoded context (pass the context through the encoder)
    # Hint: Remember you can get the encoder by using model.encoder
    context = 

This too is almost identical to the cell above, except for what the hint asks you to pay attention to.


Now, after you have processed the text, the next step is to initialize the decoder and the loop.
So, for the fifth line you have:

    # INITIAL STATE OF THE DECODER
    
    # First token should be SOS token with shape (1,1)
    next_token = 

This line is completely identical to the cell above after this comment:

# Next token is Start-of-Sentence since you are starting fresh

For the sixth line you have:

    # Initial hidden and cell states should be tensors of zeros with shape (1, UNITS)
    state = 

This again is almost identical to the cell above after the comment:

# Hidden and Cell states of the LSTM can be mocked using uniform samples

Except that in the cell above the initial state used tf.random.uniform, while here you need tf.zeros.
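For illustration only (assuming, as in the cell above, that the two states are packed in a list; UNITS comes from the assignment):

import tensorflow as tf

UNITS = 256  # illustrative; use the assignment's UNITS constant
state = [tf.zeros((1, UNITS)), tf.zeros((1, UNITS))]  # hidden and cell states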


For the 7th line, you have:

    # You are done when you draw a EOS token as next token (initial state is False)
    done = 

It should be completely identical to the cell above:

# You are not done until next token is EOS token

Now comes the loop part where we actually start to generate the translation. For this you have:

    # Iterate for max_length iterations
    for None in None(None):

As the code hint suggests, this loop should iterate at most max_length times. To achieve that, you can use the range() function.


For the 9th line (or multiple lines, depending on how you count), inside the loop, you have:

        # Generate the next token
        next_token, logit, state, done =

This line again is almost identical to the cell above after the comment:

# Generate next token

Except that here the decoder is an attribute of the model (model.decoder), and we do not use a fixed temperature value; instead, temperature is a parameter.


For the 10th line you have:

        # If done then break out of the loop
        if None:
            None

Here we should check the done variable, and to break out of the loop we use the Python break statement.


For the 11th line you have:

        # Add next_token to the list of tokens
        None.None(None)

Here we have a list (tokens), initialized for us at the beginning, to which we should append the generated token (next_token).


For the 12th line you have:

        # Add logit to the list of logits
        None.None(None)
    
    ### END CODE HERE ###

Here, again, we have a list (logits), initialized for us at the beginning, to which we should append the logit output (logit).


These are the steps needed for you to implement in order to make translation happen. Ask if any of those is not clear to you.
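To show how those steps chain together, here is a generic skeleton with hypothetical stand-in names (toy_model, toy_vectorizer, generate_next, SOS_ID, UNITS) - a sketch of the pattern, not the graded solution:

import tensorflow as tf

SOS_ID = 2   # assumed id of the start-of-sentence token
UNITS = 256  # assumed LSTM size

def sketch_translate(toy_model, toy_vectorizer, sentence, max_length=50, temperature=0.0):
    tokens, logits = [], []
    text = tf.constant([sentence])                        # string -> tensor
    context = toy_vectorizer(text).to_tensor()            # vectorize; ragged -> dense
    context = toy_model.encoder(context)                  # encode the sentence
    next_token = tf.fill((1, 1), SOS_ID)                  # SOS token with shape (1, 1)
    state = [tf.zeros((1, UNITS)), tf.zeros((1, UNITS))]  # zeroed hidden/cell states
    done = False
    for _ in range(max_length):                           # at most max_length steps
        # generate_next is a hypothetical helper that advances the decoder one step
        next_token, logit, state, done = generate_next(
            toy_model.decoder, context, next_token, state, done, temperature
        )
        if done:  # stop once the EOS token is drawn
            break
        tokens.append(next_token)
        logits.append(logit)
    return tokens, logits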

Cheers

1 Like

Hello @arvyzukai

I hope this message finds you well. I find myself in need of reaching out for your assistance once more regarding the implementation of the English-Portuguese translate function. Despite diligently applying the valuable feedback you’ve previously provided and exploring the suggested resources, I encounter ongoing challenges.

In my efforts to bridge the understanding gap, I’ve:

  • Leveraged english_vectorizer for handling text input, aiming for a robust preprocessing step.
  • Attempted to convert ragged tensors to dense ones, to ensure consistency in tensor shapes as per model requirements.
  • Carefully initialized decoder state tensors, to set up the initial state of the decoder correctly.

Furthermore, in alignment with your recommendations, I’ve revisited Exercise 4, which focuses on the inference walkthrough, hoping to glean insights into the correct application of these concepts during the inference phase.

Despite these endeavors, I struggle with a fundamental comprehension of how to correctly interface trained model components, such as the encoder and decoder, during inference for translation tasks. My primary stumbling blocks appear to be in the areas of:

  • Understanding the usage patterns of trained encoder/decoder networks for sequence-based tasks.
  • Appropriately shaping tensors and state variables to ensure compatibility with the model.
  • Effectively iterating through decoders for text translation, to generate coherent and accurate translations.

I would be immensely grateful for any further guidance you could offer on these matters. Whether it be through code snippets that illustrate best practices, links to documentation or tutorials that delve into these topics, or simplified examples that demonstrate the principles in action, any additional support would be invaluable.

I’m committed to overcoming these hurdles and completing this exercise with your esteemed guidance.

Thanks :pray:t5:

1 Like

Addendum

I’ve taken steps to meticulously apply the suggestions and hints you outlined.

Following your advice, I’ve:

  • Converted the input text into a tensor, closely mirroring the approach used in the exercise preceding mine.
  • Utilized the correct vectorizer to process the sentence, ensuring that I followed the pattern set in the earlier cell for vectorizing and encoding the text.
  • Initialized the decoder’s first token as the SOS token and set the initial hidden and cell states as zeros, adhering to the instructions and making sure to apply the correct tensor shapes and types.
  • Carefully implemented the loop for generating the translation, taking into account the use of model.decoder without a fixed temperature value, to dynamically generate tokens based on the context.

Despite these efforts and closely following the provided steps, I’m encountering persistent challenges that seem to stem from interfacing the trained model components (encoder and decoder) correctly during inference and ensuring the tensors and state variables are shaped and managed appropriately for model compatibility.

I believe I’ve adhered to the instructions as closely as possible, but there may still be nuances or critical details I’m missing. Could we possibly go over where my implementation might be diverging from the expected approach?

1 Like

Hi @RyeToast

This time ChatGPT went over the top :slight_smile: in helping with your posts, and I am having difficulty understanding your question. Please provide a shorter version for clarity.

Cheers

1 Like

Hi.

The issue occurs at a stage where the input text is supposed to be processed for translation. I’ve attempted to follow the steps outlined for preparing the text, which includes converting it into a suitable format for the model. However, I’m facing an error related to the tokenization of the input text, which seems to indicate a missing component or method in my approach.

I’ve gone through several iterations, but this one seems a good place to start since it has the fewest errors:

NameError                                 Traceback (most recent call last)
Cell In[35], line 6
      3 temp = 0.0 
      4 original_sentence = "I love languages"
----> 6 translation, logit, tokens = translate(trained_translator, original_sentence, temperature=temp)
      8 print(f"Temperature: {temp}\n\nOriginal sentence: {original_sentence}\nTranslation: {translation}\nTranslation tokens:{tokens}\nLogit: {logit:.3f}")

Cell In[34], line 23, in translate(model, text, max_length, temperature)
     20 text = tf.constant([text]) 
     22 # Tokenize text to get integer tokens
---> 23 text_tokens = tokenize(text)  
     25 # Generate embeddings 
     26 embedding_vectors = model.encoder.embedding(text_tokens)  

NameError: name 'tokenize' is not defined

This suggests that the method I intended to use for tokenization isn’t recognized. Can I get your advice on:

  • The proper approach to text tokenization in TensorFlow for preparing data for our neural translation model.
  • Any specific TensorFlow functions or best practices recommended for ensuring the text is accurately tokenized and ready for model processing.

1 Like

Hi, @RyeToast

You’re not following the guidelines. You have:

text_tokens = tokenize(text)

What is the first line in the cell above Exercise 5? (the one with the code comment: # Convert it to a tensor)

1 Like

Btw, if you want ChatGPT to fill the code for you, your prompt should look like:

Help me fill the code in places of None. The code:
/###
Copy/Paste the Assignment translate function
/###

Make use of this code as a reference:
/###
Copy/Paste the cell above the Exercise 5
/###

But adapt to these instructions:
"""
Copy/Paste the instructions for Exercise 5
"""

In my case, ChatGPT completed the exercise perfectly, except it didn’t know that there’s no model.generate_next_token; it should have used generate_next_token instead.

Cheers

2 Likes

first line?

# PROCESS SENTENCE TO TRANSLATE AND ENCODE

# A sentence you wish to translate

[ code ]

# Convert it to a tensor

[ code ]

1 Like