I have successfully completed exercises 1-4; my outputs match the sample outputs and all the unit tests pass.
I have defined the model as simply as possible, following the instructions: I pass the maximum sequence length as the parameter to the skeleton Input layer, then add the instructed Embedding layer (using the vocabulary size and embedding dimension constants), a GlobalAveragePooling1D layer, and a Dense layer (size 5 with softmax activation, as instructed). I have chosen SparseCategoricalCrossentropy as the loss function and Adam as the optimizer. (I've sketched the model cell below.)
I get 16,005 parameters in my model, smaller than the reference model's 20,000.
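For reference, here is roughly what my model cell looks like. The constant values below are placeholders, not the notebook's actual constants (those are defined earlier in the assignment), so the parameter count of this sketch won't match my 16,005:

import tensorflow as tf

# Placeholder values - the assignment defines its own constants earlier,
# and their names/values may differ from these.
VOCAB_SIZE = 1000
EMBEDDING_DIM = 16
MAX_LENGTH = 120

model = tf.keras.Sequential([
    # skeleton Input layer, parameterised with the maximum sequence length
    tf.keras.Input(shape=(MAX_LENGTH,)),
    # Embedding layer built from the vocabulary size and embedding dimension constants
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM),
    tf.keras.layers.GlobalAveragePooling1D(),
    # Dense layer of size 5 with softmax activation, as instructed
    tf.keras.layers.Dense(5, activation='softmax'),
])

model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    optimizer='adam',
    metrics=['accuracy'],
)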
But then I get to the cell that calls evaluate on the model. This throws an exception, so the cell prints out the "Your model is not compatible with the dataset you defined earlier. Check that the loss function and last layer are compatible with one another." string. As far as I know, this loss function and last layer are compatible.
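The cell in question looks roughly like this (reconstructed from the traceback below and the printed message, so the exact wording in the notebook may differ slightly):

example_batch = train_proc_dataset.take(1)

try:
    model.evaluate(example_batch, verbose=False)
except:
    print("Your model is not compatible with the dataset you defined earlier. "
          "Check that the loss function and last layer are compatible with one another.")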
So I stripped off the try/except block, and here is the error being thrown:
ValueError Traceback (most recent call last)
Cell In[57], line 2
1 example_batch = train_proc_dataset.take(1)
----> 2 model.evaluate(example_batch, verbose=False)
File /opt/conda/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # keras.config.disable_traceback_filtering()
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File /opt/conda/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # keras.config.disable_traceback_filtering()
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
ValueError: Exception encountered when calling Sequential.call().
Cannot take the length of shape with unknown rank.
Arguments received by Sequential.call():
• inputs=tf.Tensor(shape=<unknown>, dtype=int64)
• training=False
If I call print on example_batch, I get this:
<_TakeDataset element_spec=(TensorSpec(shape=<unknown>, dtype=tf.int64, name=None), TensorSpec(shape=(None, 1), dtype=tf.int64, name=None))>
and I suspect that shape=<unknown> in there is the problem, but how do I tell the dataset to remember what shape its elements are?
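In case it clarifies what I'm asking, this is the kind of thing I was considering - mapping over the processed dataset and pinning the static shape back on - but I don't know whether tf.ensure_shape (or Tensor.set_shape) is the intended approach here:

# MAX_LENGTH is the same placeholder constant as in my model sketch above;
# I'm only guessing that [None, MAX_LENGTH] is the right shape for a batch
# of padded sequences.
train_proc_dataset = train_proc_dataset.map(
    lambda tokens, label: (tf.ensure_shape(tokens, [None, MAX_LENGTH]), label)
)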