AssertionError in create_batch_dataset (Course 3 Week 1 NLP assignment)

Hello @arvyzukai,

Sorry to trouble you again. Can you please give me a hint about where I am going wrong? I did review your messages on another post, which mention checking that the line_to_tensor code is wired in correctly, but I couldn’t crack it. I probably need more guidance on this.

Sharing the error message for the same exercise.

Thank you in advance


Hi @Deepti_Prasad

Because the values are completely different, one place I would look is the way you shuffle the data generator (note the first parameter).
Another is maybe the first exercise’s line_to_tensor() implementation (but I assume you passed all the tests and got the “expected output”, so it should not be the problem).
Or maybe you accidentally changed the .prefetch() parameter (which in our case is given and should not be changed).

Let me know if it helps.
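If it helps, here is a rough plain-Python model of what that first parameter of .shuffle() controls. This is only a sketch of the buffered-shuffle idea, not TensorFlow’s actual implementation: tf.data keeps a buffer of `buffer_size` elements and repeatedly emits a random one from it.

```python
import random

def buffered_shuffle(items, buffer_size, seed=0):
    """Rough model of tf.data.Dataset.shuffle(buffer_size): keep a
    buffer of `buffer_size` elements and repeatedly emit a random one."""
    rng = random.Random(seed)
    buf, out = [], []
    for x in items:
        buf.append(x)
        if len(buf) >= buffer_size:
            out.append(buf.pop(rng.randrange(len(buf))))
    while buf:  # drain whatever is left in the buffer
        out.append(buf.pop(rng.randrange(len(buf))))
    return out

# buffer_size=1 leaves the order unchanged; a large buffer mixes everything
print(buffered_shuffle(range(10), buffer_size=1))   # [0, 1, ..., 9]
print(buffered_shuffle(range(10), buffer_size=10))  # a permutation of 0..9
```

So a too-small (or wrong) first argument to .shuffle() changes which elements end up together in each batch, which is exactly the kind of “completely different values” symptom above.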


Hello arvy,

Thank you for responding. Yes, my line_to_tensor implementation gives the expected output, so that section looks fine.

But in create_batch_dataset, the shuffle part you mention is actually already given by the grader, with a buffer size of 10,000.

My guess is that I am doing something wrong in the conversion from data to tensor, or that I am choosing the wrong sequence output while creating the batch, since I get the output below, which differs from the expected output.


Hey @arvyzukai,

you were right, I was doing the shuffle part incorrectly. Although the shuffle link you sent me does not appear in the assignment :frowning: The shuffle step should have been mentioned in the assignment instructions.

I hadn’t recalled the buffer size for the shuffle part.

Passed the test. Thanks again, owe you man :slight_smile:



Cc: @arvyzukai

Hello :wave:t3:

I’m currently working on the assignment and have encountered an issue with the create_batch_dataset function. Despite following the instructions and reviewing the relevant course materials, I’m facing a persistent error that I’ve been unable to resolve.

Issue Description:
The function seems to be creating batches of input-target pairs, but the output does not match the expected results as per the unit test. Here’s a summary of the observed behavior and the error:

  1. Observed Output:

    • The function generates sequences like ‘FROM off a hill’, ‘ROM off a hill w’, ‘hose concave wom’, and ‘ose concave womb’.
    • The output format is consistent across different runs, including the representation as byte strings.
  2. Expected Output (As per the assignment):

    • The expected sequences are ‘and sight distra’, ‘nd sight distrac’, ‘when in his fair’, and 'hen in his fair '.
  3. Unit Test Error:

    • When running the provided unit test, I receive the following AssertionError:
      AssertionError: Wrong values. Expected [[28, 20, 23, 17, 9, 0, 0, 1], [30, 31, 0, 0, 10, 17, 17, 20]] but got: [[5 6 7 0 0 1 2 0] [29 30 0 0 9 16 16 19]]

Steps Taken:
I have reviewed my implementation, particularly focusing on vocabulary indexing, data preprocessing, sequence generation logic, and shuffling. Despite these efforts, the issue persists.

I would greatly appreciate any guidance or insights you might provide to help resolve this issue. Is there a specific aspect of the function implementation that I might be overlooking? Any suggestions or advice would be highly valuable.

Thank you for your time and assistance.
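For reference, here is my understanding of the relationship the unit test checks, assuming the usual slicing implementation of split_input_target (input drops the last token, target drops the first). The expected in/out rows are just two offset views of the same padded window:

```python
def split_input_target(sequence):
    """Next-character split: target[i] is the token that follows input[i]."""
    return sequence[:-1], sequence[1:]

# One padded window reconstructed from the unit test's expected values:
window = [28, 20, 23, 17, 9, 0, 0, 1, 0]
inp, tgt = split_input_target(window)
print(inp)  # [28, 20, 23, 17, 9, 0, 0, 1]  (expected_in_line[0])
print(tgt)  # [20, 23, 17, 9, 0, 0, 1, 0]   (expected_out_line[0])
```

So if the in/out pairs have the right shape but the wrong values, the split itself is probably fine and the ids going into it are off.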

Can you DM me the original code that came with Exercise 02, please?


I can see you got the same output I did, but we are not supposed to share code :joy:

Although I can give a hint: look at the code under this comment line:

Assemble the final dataset with shuffling, batching, and prefetching

Let me know if you understood.



Hello, DP

Despite several attempts to debug and adjust my implementation, I’m still facing an AssertionError when running the provided unit test.

Here’s the specific error message I’m getting:

AssertionError                            Traceback (most recent call last)
Cell In[32], line 2
      1 # UNIT TEST
----> 2 w1_unittest.test_create_batch_dataset(create_batch_dataset)

File /tf/, in test_create_batch_dataset(target)
     62     expected_in_line = [[28, 20, 23, 17,  9,  0,  0,  1],
     63                         [30, 31,  0,  0, 10, 17, 17, 20]]
     64     expected_out_line = [[20, 23, 17,  9,  0,  0,  1,  0],
     65                          [31,  0,  0, 10, 17, 17, 20,  0]]
---> 67     assert tf.math.reduce_all(tf.equal(in_line, expected_in_line)), \
     68         f"Wrong values. Expected {expected_in_line} but got: {in_line.numpy()}"
     69     assert tf.math.reduce_all(tf.equal(out_line, expected_out_line)), \
     70         f"Wrong values. Expected {expected_out_line} but got: {out_line.numpy()}"
     72 BATCH_SIZE = 4

AssertionError: Wrong values. Expected [[28, 20, 23, 17, 9, 0, 0, 1], [30, 31, 0, 0, 10, 17, 17, 20]] but got: [[27 19 22 16  8  0  0  0]
 [29 30  0  0  9 16 16 19]]

The issue seems to arise from the output of the create_batch_dataset function not matching the expected output in terms of the sequences generated. I have reviewed the vocabulary indexing, ensured proper sequence generation, and set the shuffle buffer size to 10,000 as specified. However, the problem persists.
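One pattern I notice in that error: every id I got is exactly one less than the expected id (27 vs 28, 19 vs 20, and the trailing 1 became 0). Purely as a hypothesis, that is the signature of a vocabulary whose indexing starts one position too early, e.g. not reserving id 0 for the padding token. A toy illustration (the names and vocabulary here are made up for the example):

```python
chars = sorted(set("abcd"))  # toy character vocabulary: ['a','b','c','d']

# Hypothetical bug: ids start at 0, leaving no slot for the pad token
vocab_wrong = {c: i for i, c in enumerate(chars)}      # 'a' -> 0
# Reserving 0 for padding shifts every character id up by one
vocab_right = {c: i + 1 for i, c in enumerate(chars)}  # 'a' -> 1

wrong = [vocab_wrong[c] for c in "dab"]
right = [vocab_right[c] for c in "dab"]
print(wrong)  # [3, 0, 1]
print(right)  # [4, 1, 2] -- every id exactly one higher, like the expected values
```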


Hello @RyeToast,

Thank you for your patience. Can you DM me a screenshot of the create_batch_dataset grader cell? Click on my name and then Message.

You can also send it to arvy; he is the better person to mentor for this course right now, as I am currently a learner in it.



Additionally, based on insights gathered from the discussion board, I suspect that the issue may be related to how I’m shuffling the data in the create_batch_dataset function. I’ve set the shuffle buffer size to 10,000, as recommended, but I’m uncertain if the placement of the .shuffle() method in the data processing pipeline is correct. The method is currently applied after mapping the sequences with the split_input_target function and before batching.

Could the positioning or usage of the .shuffle() method be affecting the ordering or composition of the generated batches? If so, I would appreciate any advice on correctly configuring the shuffle operation to align with the expected outputs of the unit test.

Thank you once again. :slightly_smiling_face:
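To make my question concrete, here is a plain-list sketch (not tf.data itself, just the idea) of why the placement of shuffling relative to batching matters: shuffling before batching mixes individual examples across batches, while shuffling afterwards only reorders whole batches.

```python
import random

def batch(items, size):
    """Group consecutive items into fixed-size batches."""
    return [items[i:i + size] for i in range(0, len(items), size)]

data = list(range(8))
rng = random.Random(7)

# shuffle -> batch: examples get mixed across batch boundaries
mixed = data[:]
rng.shuffle(mixed)
print(batch(mixed, 2))

# batch -> shuffle: batches are reordered, but each batch
# still holds consecutive examples (pairs like [4, 5] stay intact)
batches = batch(data, 2)
rng.shuffle(batches)
print(batches)
```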


Your help has been invaluable! :pray:t5:


Why are you editing your grader cell outside the ### START CODE HERE / ### END CODE HERE markers??

Or is the mistake that you are using an older version of the assignment??

I suspect that you added some unnecessary code in create_batch_dataset.

Please use the latest version of assignments for Course 3 as the assignments have been recently updated.

Do not add or edit anything other than the code you were asked to write.

I already told you on how to get a fresh copy of any assignment. Follow the same steps.

Share the grader cell image once you have obtained it :woman_facepalming: Share it without code.

Rye, I honestly feel you should make sure you have the latest copy of the assignment before you start working on it, and make sure you have updated the lab.

I noticed that for two assignments you were working on an older version, so please make sure your system and browser are updated and the cache is cleared.

I feel so bad that you got stuck again because you were working on an older version :ok_woman:

May God give you more patience :grin:



:wave:t5: Hello,

Yep, I’ve updated. :+1:t4:
Between the START and END only… got it.:+1:t4:

Now I’m having trouble with Exercise 5 - log_perplexity

I’m getting an ‘All tests passed!’ message, but an error on the third cell down:

ValueError                                Traceback (most recent call last)
Cell In[128], line 4
      2 eval_text = "\n".join(eval_lines)
      3 eval_ids = line_to_tensor([eval_text], vocab)
----> 4 input_ids, target_ids = split_input_target(tf.squeeze(eval_ids, axis=0))
      6 preds, status = model(tf.expand_dims(input_ids, 0), training=False, states=None, return_state=True)
      8 # Get the log perplexity

Cell In[117], line 3, in split_input_target(sequence)
      1 def split_input_target(sequence):
      2     if tf.rank(sequence) == 0:  # Check for scalar
----> 3         raise ValueError("Cannot split scalar input. Ensure sequence has at least one dimension.")
      4     # Original slicing code
      5     input_text = sequence[:-1]

ValueError: Cannot split scalar input. Ensure sequence has at least one dimension.
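My understanding of why the rank check fires, as a plain-Python sketch of the shapes (not actual TensorFlow): tf.squeeze(eval_ids, axis=0) only drops the leading size-1 batch dimension, so it yields a scalar exactly when eval_ids has shape (1,) rather than the expected (1, N) of character ids. That would point at what line_to_tensor([eval_text], vocab) is returning here.

```python
def squeeze_axis0(x):
    """Model of tf.squeeze(x, axis=0): remove a leading size-1 dimension."""
    assert len(x) == 1, "axis 0 must have size 1"
    return x[0]

good = [[5, 3, 7]]  # shape (1, N): one line -> N character ids
bad = [42]          # shape (1,): a single value for the whole line

seq = squeeze_axis0(good)
print(seq[:-1], seq[1:])  # rank 1: slicing works -> [5, 3] [3, 7]

scalar = squeeze_axis0(bad)
print(scalar)             # 42 -- rank 0, so sequence[:-1] would fail
```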


Rye, is your issue resolved?