I hope I posted this to the correct category.

This is for the ‘TensorFlow Developer Professional Certificate’, Course 3, Week 2 assignment.

I got the functions `train_val_split()`, `fit_tokenizer()`, `seq_and_pad()`, and `tokenize_labels()` working, and all of them show the expected output.

But I'm struggling with fitting the model.

I created my model like this (summary shown):

```
Model: "sequential_19"
_________________________________________________________________
 Layer (type)                  Output Shape             Param #
=================================================================
 embedding_19 (Embedding)      (None, 120, 16)          16000

 global_average_pooling1d_10   (None, 16)               0
 (GlobalAveragePooling1D)

 dense_27 (Dense)              (None, 16)               272

 dense_28 (Dense)              (None, 5)                85

=================================================================
Total params: 16,357
Trainable params: 16,357
Non-trainable params: 0
```
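For reference, here is a minimal sketch of a model that reproduces the summary above. The vocabulary size (1000) and max sequence length (120) are my assumptions, inferred from the 16,000 embedding parameters and the `(None, 120, 16)` output shape; the loss choice is also an assumption (`categorical_crossentropy` expects one-hot labels of shape `(None, 5)`, while `sparse_categorical_crossentropy` accepts integer labels):

```python
import tensorflow as tf

# Assumed hyperparameters: 1000 * 16 = 16,000 embedding params,
# and the summary shows sequences of length 120.
VOCAB_SIZE = 1000
EMBEDDING_DIM = 16
MAXLEN = 120

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(5, activation='softmax'),  # 5 output classes
])
model.build(input_shape=(None, MAXLEN))

# Assumption: integer-encoded labels, hence the sparse variant;
# with one-hot labels use 'categorical_crossentropy' instead.
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
```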

but fitting fails, seemingly because the layers after the embedding do not receive the data in the expected dimensions:

`ValueError: Shapes (None, 1) and (None, 5) are incompatible`

I noticed that the embedding layer does not produce the 16-dimensional output, no matter what I do.

I verified that my variables and input arguments have the correct size and type:

```
train_padded_seq: 1780; <class 'numpy.ndarray'>
train_label_seq: 1780; <class 'numpy.ndarray'>
val_padded_seq: 445; <class 'numpy.ndarray'>
val_label_seq: 445; <class 'numpy.ndarray'>
```
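Since the `ValueError` mentions shapes `(None, 1)` and `(None, 5)`, printing the full `.shape` of the label arrays (rather than only their length) reveals the trailing dimension Keras is complaining about. A small sketch with placeholder data, where the trailing dimension of 1 is an assumption based on the error message:

```python
import numpy as np

# Placeholder arrays standing in for train_label_seq / val_label_seq;
# the sizes (1780 and 445) come from the printout above.
train_label_seq = np.zeros((1780, 1), dtype=np.int64)
val_label_seq = np.zeros((445, 1), dtype=np.int64)

# len() only reports the first axis, so these match the printout in
# the post even though the arrays are actually 2-D.
print(len(train_label_seq), train_label_seq.shape)
print(len(val_label_seq), val_label_seq.shape)
```

Integer labels shaped `(N,)` or `(N, 1)` pair with `sparse_categorical_crossentropy`, whereas one-hot labels shaped `(N, 5)` pair with `categorical_crossentropy`.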

What am I doing wrong?

The funny thing is, when I change the last Dense layer to have only one neuron, training succeeds, but of course the output is garbage.