C4W3_Assignment in Natural Language Processing with Attention Models

I am getting the following errors in Exercise 3 of C4W3_Assignment. I deleted all the files to get an updated copy of the assignment, as suggested by @Deepti_Prasad, but I still cannot get rid of these errors. Please, can someone help me? I only have a few days left to complete this course.

The errors are the following:


TypeError Traceback (most recent call last)
Cell In[35], line 3
1 idx = 10408
----> 3 result = answer_question(inputs_train[idx], transformer, tokenizer)
4 print(colored(pretty_decode(result, sentinels, tokenizer).numpy()[0], 'blue'))
5 print()

Cell In[34], line 46, in answer_question(question, model, tokenizer, encoder_maxlen, decoder_maxlen)
42 # Loop for decoder_maxlen iterations
43 for i in range(decoder_maxlen):
44
45 # Predict the next word using the model, the input document and the current state of output
---> 46 next_word = transformer_utils.next_word(model, padded_question, tokenized_answer)
48 # Concat the predicted next word to the output
49 tokenized_answer = tf.concat([tokenized_answer, tf.expand_dims([next_word], 0)], axis=1)

File /tf/transformer_utils.py:577, in next_word(encoder_input, output, model)
568 """
569 Helper function that uses the model to predict just the next word.
570 Arguments:
(...)
574 predicted_id (tf.Tensor): The id of the predicted word
575 """
576 # Create a padding mask for the input
---> 577 enc_padding_mask = create_padding_mask(encoder_input)
578 # Create a look-ahead mask for the output
579 look_ahead_mask = create_look_ahead_mask(tf.shape(output)[1])

File /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb

File /tmp/autograph_generated_filen4gat_e3.py:11, in outer_factory.<locals>.inner_factory.<locals>.tf__create_padding_mask(decoder_token_ids)
9 do_return = False
10 retval_ = ag__.UndefinedReturnValue()
---> 11 seq = (1 - ag__.converted_call(ag__.ld(tf).cast, (ag__.converted_call(ag__.ld(tf).math.equal, (ag__.ld(decoder_token_ids), 0), None, fscope), ag__.ld(tf).float32), None, fscope))
12 try:
13 do_return = True

TypeError: in user code:

File "/tf/transformer_utils.py", line 547, in create_padding_mask  *
    seq = 1 - tf.cast(tf.math.equal(decoder_token_ids, 0), tf.float32)

TypeError: Failed to convert elements of <transformer_utils.Transformer object at 0x7fd26df15970> to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.

Hi @Pavamana

Can you first confirm whether you have passed the previous two unit tests, with your output matching the expected output?

Could you share screenshots of your output alongside the expected output for Exercise 1 and Exercise 2?

Regards
DP

[Screenshots attached: Coursera1, Coursera2]




Hi @Pavamana

The issue lies in the two lines of code below.

For the step "Predict the next word using the model, the input document and the current state of output": to predict the next word you need tokenized_question, not padded_question.

Next, for the step "Concat the predicted next word to the output": you are supposed to use tf.concat on tokenized_answer and next_word, with the axis mentioned. You do not need tf.expand_dims, so kindly remove it.

Check the cell before Exercise 3 that walks through an example, but note that for the concat step here the correct axis is 1, unlike in the image below where axis=-1 is used.

If you look at the image below, tokenized_padded_question was used to predict the next word, because the preceding step tokenizes and pads the question and stores it as tokenized_padded_question. In Exercise 3, however, the instructions ask for the input document, which in that exercise is tokenized_question.
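For the concat step specifically, here is a minimal, self-contained sketch. The dummy tensors, their values, and the assumed (1, 1) shape of the predicted id are illustrative stand-ins for the assignment's variables, not the graded solution:

import tensorflow as tf

# Dummy stand-ins (assumed shapes): a running answer with a batch dimension,
# and a predicted id of shape (1, 1), as the helper is assumed to return.
tokenized_answer = tf.constant([[0, 31, 7]], dtype=tf.int32)   # shape (1, 3)
next_word = tf.constant([[52]], dtype=tf.int32)                # shape (1, 1)

# Concatenate along the sequence axis (axis=1). No tf.expand_dims is needed
# because next_word already carries the batch dimension.
tokenized_answer = tf.concat([tokenized_answer, next_word], axis=1)
print(tokenized_answer)  # tf.Tensor([[ 0 31  7 52]], shape=(1, 4), dtype=int32)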

Regards
DP

Okay, thanks. I will try.


Still getting the same error:


TypeError Traceback (most recent call last)
Cell In[42], line 3
1 idx = 10408
----> 3 result = answer_question(inputs_train[idx], transformer, tokenizer)
4 print(colored(pretty_decode(result, sentinels, tokenizer).numpy()[0], 'blue'))
5 print()

Cell In[41], line 46, in answer_question(question, model, tokenizer, encoder_maxlen, decoder_maxlen)
42 # Loop for decoder_maxlen iterations
43 for i in range(decoder_maxlen):
44
45 # Predict the next word using the model, the input document and the current state of output
---> 46 next_word = transformer_utils.next_word(model, tokenized_question, tokenized_answer)
48 # Concat the predicted next word to the output
49 tokenized_answer = tf.concat([tokenized_answer, next_word], axis=1)

File /tf/transformer_utils.py:577, in next_word(encoder_input, output, model)
568 """
569 Helper function that uses the model to predict just the next word.
570 Arguments:
(...)
574 predicted_id (tf.Tensor): The id of the predicted word
575 """
576 # Create a padding mask for the input
---> 577 enc_padding_mask = create_padding_mask(encoder_input)
578 # Create a look-ahead mask for the output
579 look_ahead_mask = create_look_ahead_mask(tf.shape(output)[1])

File /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb

File /tmp/autograph_generated_filen4gat_e3.py:11, in outer_factory.<locals>.inner_factory.<locals>.tf__create_padding_mask(decoder_token_ids)
9 do_return = False
10 retval_ = ag__.UndefinedReturnValue()
---> 11 seq = (1 - ag__.converted_call(ag__.ld(tf).cast, (ag__.converted_call(ag__.ld(tf).math.equal, (ag__.ld(decoder_token_ids), 0), None, fscope), ag__.ld(tf).float32), None, fscope))
12 try:
13 do_return = True

TypeError: in user code:

File "/tf/transformer_utils.py", line 547, in create_padding_mask  *
    seq = 1 - tf.cast(tf.math.equal(decoder_token_ids, 0), tf.float32)

TypeError: Failed to convert elements of <transformer_utils.Transformer object at 0x7fd26df15970> to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.

Kindly DM me code screenshots of all 3 exercises. Click on my name and then Message.

Regards
DP

Hi @Pavamana

For Exercise 2, GRADED FUNCTION: parse_squad, in the "Create the question/context sequence" step:
for the context part you have used ' context: ' + context, which is incorrect. It should be ' context: ' + paragraph['context'].

Remember the point mentions:

  • Create the answer sequence using the first answer from the available answers.
    That is, from the same paragraph of the article from which the context was extracted (a minimal sketch follows below).
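For reference, here is a minimal sketch of how one question/context input and one answer target could be built from a SQuAD-style paragraph dict. The sample dict and the exact prefix strings are illustrative assumptions, not the graded solution:

# SQuAD-style paragraph: one context with a list of question/answer pairs.
paragraph = {
    'context': 'The Eiffel Tower is located in Paris.',
    'qas': [
        {'question': 'Where is the Eiffel Tower located?',
         'answers': [{'text': 'Paris'}]},
    ],
}

for qa in paragraph['qas']:
    # Question/context sequence: the context comes from the paragraph itself.
    inputs = 'question: ' + qa['question'] + ' context: ' + paragraph['context']
    # Answer sequence: use the first answer from the available answers.
    targets = 'answer: ' + qa['answers'][0]['text']
    print(inputs)
    print(targets)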

Also, in Exercise 3, kindly remove the return statement below. It is not an issue by itself, but it can cause an autograding issue because you have added it inside the ### START AND END CODE HERE ### markers:
#return tokenizer.detokenize(tokenized_answer).numpy()[0].decode('utf-8')

One more thing: once you make the corrections I mentioned, restart the kernel and re-run the cells from the beginning, one by one.

Let me know if you are still getting any errors.

Regards
DP

Okay, I will try.


Is your issue resolved, for the error you were getting? Kindly let me know!

No, not resolved yet. I am still getting the same error.


Can I see how you made the correction? Send the updated code via personal DM.

Also share a screenshot of the error.

Okay, sure.

Exercise 2

GRADED FUNCTION: parse_squad

{codes removed by moderator}

return inputs, targets

Exercise 3

GRADED FUNCTION: answer_question
{codes removed by moderator}

return tokenized_answer

I will be back in 15 minutes; going for dinner.

@Pavamana, can you confirm that you are not making any changes in any cells other than the graded cells?

The error message above shows that a change made to a TF datatype in transformer_utils is incorrect.

Please don't post code here. Kindly follow the community guidelines and the FAQ section of the Code of Conduct. When you are back from dinner, send your notebook via personal DM.

Also, your error is not coming from a graded cell; the traceback suggests that a TF datatype conversion in a non-graded cell is raising the TypeError.
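To make that diagnosis concrete, here is a minimal sketch. It re-implements only the one-liner shown at line 547 of the traceback (it is not the assignment's transformer_utils file) and shows that the mask helper expects token ids and fails with a conversion error when handed an arbitrary Python object, such as the model itself:

import tensorflow as tf

def create_padding_mask_sketch(decoder_token_ids):
    # Same one-liner as line 547 in the traceback: 1.0 where the token id is non-zero.
    return 1 - tf.cast(tf.math.equal(decoder_token_ids, 0), tf.float32)

# Works when given integer token ids.
print(create_padding_mask_sketch(tf.constant([[7, 6, 0, 0, 1]])))
# tf.Tensor([[1. 1. 0. 0. 1.]], shape=(1, 5), dtype=float32)

# Passing a non-tensor object (in this thread, the Transformer model itself)
# cannot be converted to a Tensor and raises a conversion error like the one above.
class NotATensor:
    pass

try:
    create_padding_mask_sketch(NotATensor())
except (TypeError, ValueError) as err:
    print(type(err).__name__, err)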
