Hey guys,
I have a problem with the inference model and I am not able to fix it.
When I run my code, it tells me:
Input 0 is incompatible with layer lstm: expected shape=(None, None, 90), found shape=[90, 1, 1]
It seems like something is wrong with the shapes.
When I run the block again, the error changes to:
Layer lstm expects 5 inputs, but it received 3 input tensors. Inputs received: [<tf.Tensor 'input_3:0' shape=(None, 1, 90) dtype=float32>, <tf.Tensor 'a0_2:0' shape=(None, 64) dtype=float32>, <tf.Tensor 'c0_2:0' shape=(None, 64) dtype=float32>]
Restarting the kernel brings me back to the first error.
I am actually quite confused. Following the instructions of the task, I have written the following:
a, _, c = LSTM_cell(x, initial_state=[a, c])
This is more or less the same as in task 2C of the djmodel, where it works…
In this task, x, a, and c are predefined in the lines above as:
x0 = Input(shape=(1, n_values))
a0 = Input(shape=(n_a,), name='a0')
c0 = Input(shape=(n_a,), name='c0')
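
To sanity-check my understanding of the shapes, here is a minimal standalone sketch (not my notebook code; n_values = 90 and n_a = 64 are only assumed from the error messages, and LSTM_cell here just stands in for the shared layer from the assignment):

import tensorflow as tf

n_values = 90   # assumed from "expected shape=(None, None, 90)"
n_a = 64        # assumed from the a0/c0 shapes in the second error

# stand-in for the shared LSTM layer from the assignment
LSTM_cell = tf.keras.layers.LSTM(n_a, return_state=True)

# inputs declared exactly like in my code above
x0 = tf.keras.Input(shape=(1, n_values))        # static shape (None, 1, 90)
a0 = tf.keras.Input(shape=(n_a,), name='a0')    # static shape (None, 64)
c0 = tf.keras.Input(shape=(n_a,), name='c0')    # static shape (None, 64)

# one step through the cell: a 3D sequence tensor plus the two initial states
a, _, c = LSTM_cell(x0, initial_state=[a0, c0])

print(x0.shape, a.shape, c.shape)   # (None, 1, 90) (None, 64) (None, 64)

The print at the end is only there to compare the static shapes with the ones in the error messages.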
Updating to the latest version, as suggested in another thread, did not fix the problem. I think some shapes must be wrong, but I am not sure about it.
Can somebody help me?
Best,
Hinnerk8