C3W2 Exercise 5 - tensors in example_batch seem to have lost their shape

I have successfully completed exercises 1-4; my outputs match the sample outputs and all the unit tests pass.

I have defined the model as simply as possible, following the instructions: i.e. I defined the shape parameter to be passed in to the skeleton Input layer using the maximum length of the sequences, then added the instructed Embedding layer (using the vocabulary-size and embedding-dimension constants), a GlobalAveragePooling1D layer, and a Dense layer (of size 5 with softmax activation, as instructed). I chose SparseCategoricalCrossentropy as the loss function and Adam as the optimizer.
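For concreteness, here is a minimal sketch of the architecture I described. The constant values are assumptions for illustration; the assignment defines its own VOCAB_SIZE, EMBEDDING_DIM, and MAX_LENGTH globals:

```python
import tensorflow as tf

# Assumed values -- the assignment recalls its own global constants
VOCAB_SIZE = 1000
EMBEDDING_DIM = 16
MAX_LENGTH = 120

model = tf.keras.Sequential([
    tf.keras.Input(shape=[MAX_LENGTH]),              # one padded sequence per example
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM),
    tf.keras.layers.GlobalAveragePooling1D(),        # average over the sequence axis
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 output classes
])

model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=["accuracy"],
)
```

With these assumed constants the parameter count is VOCAB_SIZE * EMBEDDING_DIM for the Embedding plus EMBEDDING_DIM * 5 + 5 for the Dense layer; the pooling layer adds none.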

My model has 16,005 parameters, smaller than the reference's 20,000.

But then I get to the cell that calls evaluate on the model. It throws an exception, which prints the string "Your model is not compatible with the dataset you defined earlier. Check that the loss function and last layer are compatible with one another." As far as I know, this loss function and the last layer are compatible.

So I stripped off the exception block and here is the error being thrown:


ValueError                                Traceback (most recent call last)
Cell In[57], line 2
      1 example_batch = train_proc_dataset.take(1)
----> 2 model.evaluate(example_batch, verbose=False)

File /opt/conda/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    119 filtered_tb = _process_traceback_frames(e.__traceback__)
    120 # To get the full stack trace, call:
    121 # keras.config.disable_traceback_filtering()
--> 122 raise e.with_traceback(filtered_tb) from None
    123 finally:
    124     del filtered_tb

ValueError: Exception encountered when calling Sequential.call().

Cannot take the length of shape with unknown rank.

Arguments received by Sequential.call():
  • inputs=tf.Tensor(shape=<unknown>, dtype=int64)
  • training=False
• training=False

If I call print on example_batch I get this:

<_TakeDataset element_spec=(TensorSpec(shape=<unknown>, dtype=tf.int64, name=None), TensorSpec(shape=(None, 1), dtype=tf.int64, name=None))>

and I suspect that shape=<unknown> in there is the problem, but how do I tell it to remember what shape it is?

hi @cavhind123

Can you please post a screenshot rather than copy-pasting your error? It is a lot easier to find the issue from a screenshot.

Can you just mention how you recalled your input shape layer?

And did you use tf.keras.Input or tf.keras.layers.Input?

Your error also suggests you included a datatype in your input shape.

Hint: the shape should basically be the maximum length of all the sequences (this has been recalled by a global variable in the assignment).


Here is the error

I'm not sure what you mean by "recalled" my input shape layer. I used tf.keras.Input (I think this is the default that appears in the starter code cell) and passed it a shape parameter using MAX_LENGTH and a blank second dimension.

That is not required; remember the input is only MAX_LENGTH, the maximum length of all the sequences, so the shape is just [MAX_LENGTH].

tf.keras.Input is correct.

OK, but shape=(MAX_LENGTH) doesn't run; it raises ValueError: Cannot convert '120' to a shape.
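For the record, the issue here is that (MAX_LENGTH) without a trailing comma is just the integer 120, and the shape argument must be a list or tuple. A minimal sketch, with MAX_LENGTH assumed to be 120 as in the error message:

```python
import tensorflow as tf

MAX_LENGTH = 120  # assumed value; the assignment recalls its own global

# shape=(MAX_LENGTH) is just shape=120 -- a bare int -- and raises
# "ValueError: Cannot convert '120' to a shape".
# A one-element list or tuple works:
x = tf.keras.Input(shape=[MAX_LENGTH])  # or shape=(MAX_LENGTH,)
print(x.shape)  # the batch dimension is added automatically: (None, 120)
```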

Can you send me your code screenshot by personal DM? Click on my name and then Message.

Didn't you use an embedding layer after the input layer?

Sent. Yes, I have an Embedding layer.

@cavhind123

Your input shape is a tuple, whereas I mentioned you should use [ ] to create a list, not ( ).

OK, after changing to shape=[MAX_LENGTH] I still get the error.

Then I need to see your previous graded cell code, especially the preprocess_dataset and train_val_datasets graded functions.

You also seem to be missing a hidden dense layer as per the instructions:

You can use any architecture you want but keep in mind that this problem doesn’t need many layers to be solved successfully. You don’t need any layers beside Embedding, GlobalAveragePooling1D and Dense layers but feel free to try out different architectures.

Remember, the last dense layer you used is the output dense layer.

Sent. I can try different architectures after I get the model to process one input :)

True, but you are not following the architecture instructions to use at least one hidden dense layer; after that you can play with the architecture if you want.

Anyway, your preprocess_dataset code is incorrect, and train_val_datasets also requires correction.

When slicing the dataset, remember the columns are indexed 0, 1, 2, not 1, 2, 3.

So the text is in the second column, [:, 1], and the labels are in the first column, [:, 0].
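The rank difference produced by the two slicings can be seen in a small NumPy sketch (the array here is made up purely for illustration):

```python
import numpy as np

# A made-up 2-D array standing in for the dataset:
# label id in column 0, text id in column 1
data = np.array([[0, 101],
                 [1, 102],
                 [2, 103]])

labels = data[:, 0]      # integer index drops the axis: shape (3,), rank 1
texts = data[:, 1]       # shape (3,), rank 1 -- no squeeze needed

# Slicing with a range keeps the extra axis:
texts_2d = data[:, 1:2]  # shape (3, 1), rank 2 -- this is what forced the squeeze
```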

You also included an index range for the train texts and train labels while splitting the dataset, which was not required.

For preprocess_dataset, you were supposed to use only the previously recalled functions with their respective text and label arguments. You used the incorrect recalled function for the text.

The two arguments to use for text and label are:

text_vectorizer (tf.keras.layers.TextVectorization): text vectorizer
label_encoder (tf.keras.layers.StringLookup): label encoder

No squeeze is required.

OK, so that was it, thank you: correcting [:, 1:2] to [:, 1] (and similarly for the labels) in the preprocessor means the squeeze can be deleted. I also corrected the name of text_vectorizer, and now I have a model that achieves the required train and test accuracy!
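For future readers, the shape of the fix looks something like the sketch below. The function name and signature are illustrative only, not the assignment's graded code; the real notebook supplies its own text_vectorizer and label_encoder:

```python
import tensorflow as tf

def preprocess_dataset(dataset, text_vectorizer, label_encoder):
    """Vectorize the text and encode the label of each (text, label) pair.

    Illustrative sketch: because the [:, 1] / [:, 0] slicing yields rank-1
    text and label tensors, no tf.squeeze is needed anywhere.
    """
    return dataset.map(
        lambda text, label: (text_vectorizer(text), label_encoder(label))
    )
```

Because each helper is applied to its own argument inside a single map, the element_spec of the resulting dataset keeps fully known shapes, which is exactly what model.evaluate needs.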

Thank you,

Chris


Good day, I am having similar issues, but his solution won't work for my error:
ValueError: Cannot take the length of shape with unknown rank.

Please create a new topic for your issue with a proper, brief description: the assignment name, the exercise section your issue is from, and a screenshot of the complete error. Please make sure not to post any part of the graded function code that assesses your assignment grades, as that is a violation of the code of conduct and against community guidelines.

Don't post your query or issue on older threads; you can always share links to similar threads in the topic you create.

Oh my.
I'm sorry about posting questions on older threads. I didn't know how to create my own topic, but I will look into that.
I've solved the issue. Thank you for responding.
Have a nice day!


Check the FAQ section; it has all the posts about community guidelines: how to post, not to DM mentors directly until they ask for your code, and more. This should help you make better use of the Discourse community.

Happy to hear you could resolve your issue yourself.

Keep learning!