C5 W3 A1: "ValueError: Data cardinality is ambiguous" in model.fit() , Neural_machine_translation

I get an error in the model.fit() function in the notebook “Neural_machine_translation_with_attention_v4a”.

This line causes the error:

model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)

This is the error:

ValueError: Data cardinality is ambiguous:
x sizes: 10000, 10, 10
y sizes: 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000
Please provide data which shares the same first dimension.

This line of code is given in the notebook, so I am surprised that it fails. Does anyone have the same issue? What is going wrong here?
All my unit tests pass, and I have already tried restarting the kernel and re-running the notebook, but the same error appears.
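For context: Keras raises this error during fit() whenever the arrays passed as inputs and targets do not all share the same first (sample) dimension. A rough numpy-only sketch of the idea behind that check (not Keras's actual implementation):

```python
import numpy as np

def check_cardinality(xs, ys):
    # Every input and target array must share the same first dimension,
    # i.e. the same number of samples; otherwise the batch pairing is
    # ambiguous and Keras refuses to train.
    sizes = {a.shape[0] for a in list(xs) + list(ys)}
    if len(sizes) > 1:
        raise ValueError(
            f"Data cardinality is ambiguous: "
            f"x sizes: {[a.shape[0] for a in xs]}, "
            f"y sizes: {[a.shape[0] for a in ys]}")

# Shapes matching the error above: Xoh has 10000 samples, but the two
# state inputs only have 10 (shapes here are illustrative).
xs = [np.zeros((10000, 30, 37)), np.zeros((10, 64)), np.zeros((10, 64))]
ys = [np.zeros((10000, 11))] * 10
try:
    check_cardinality(xs, ys)
except ValueError as e:
    print(e)
```

So the message is telling you that one of the three arrays in `[Xoh, s0, c0]` has the wrong number of rows, not that the model itself is broken.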

Perhaps there is an error in your modelf() function.


For future readers:

Be careful that you don’t modify any of the cells that contain the unit tests. The grader expects those cells to be unmodified.

Adding new cells to the notebook can also cause issues with the grader.

For anybody interested, here is what caused the error:

In order to understand how the concatenator works, I added a few lines:

import numpy as np

m = 10        # <-- this line shadows the notebook's m (the 10,000-example dataset size)
Tx = 30
n_a = 32
n_s = 64
b = np.random.uniform(0, 1, (m, Tx, 2 * n_a)).astype(np.float32)
t_prev = np.random.uniform(0, 1, (m, Tx, n_s)).astype(np.float32)

The line `m = 10` caused the issue.
I had expected that, at fit time, the model would take m from the shape of Xoh (-> 10000), but the other two inputs, s0 and c0, had picked up m = 10 as well (hence the x sizes 10000, 10, 10), so Keras complained about the ambiguity.
Using a different variable name solves the issue.
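The shadowing can be reproduced in isolation. This is a hypothetical reconstruction, assuming s0 and c0 are built as `np.zeros((m, n_s))` as in the notebook:

```python
import numpy as np

m = 10000                  # dataset size used by the assignment
n_s = 64

m = 10                     # an experiment cell silently rebinds m...
s0 = np.zeros((m, n_s))    # ...so re-running the cell that builds the
c0 = np.zeros((m, n_s))    # initial states gives them 10 rows, not 10000

# model.fit([Xoh, s0, c0], ...) then sees first dimensions 10000, 10, 10
print(s0.shape, c0.shape)  # (10, 64) (10, 64)
```

Renaming the experiment's variable (or deleting the cell and restarting the kernel so the original `m` is restored) avoids the mismatch.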