C5W1A3 - Jazz with LSTM - Issues implementing "LSTM_cell"

Hello,

I saw a lot of similar problems but still couldn't fix mine. I have tried restarting the kernel and rerunning the code, and I added a line of code before "inference_model =" as suggested in this post, but to no avail. I am not sure what is wrong with my code; any help is appreciated.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-15-c55ce93f25e1> in <module>
      1 LSTM_cell = LSTM(n_a, return_state = True)
----> 2 inference_model = music_inference_model(LSTM_cell, densor, Ty = 50)

<ipython-input-14-ac2a177ce1e2> in music_inference_model(LSTM_cell, densor, Ty)
     38     for t in range(Ty):
     39         # Step 2.A: Perform one step of LSTM_cell. Use "x", not "x0" (≈1 line)
---> 40         a, _, c = LSTM_cell(x, initial_state=[a, c])
     41 
     42         # Step 2.B: Apply Dense layer to the hidden state output of the LSTM_cell (≈1 line)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
    707       # Perform the call with temporarily replaced input_spec
    708       self.input_spec = full_input_spec
--> 709       output = super(RNN, self).__call__(full_input, **kwargs)
    710       # Remove the additional_specs from input spec and keep the rest. It is
    711       # important to keep since the input spec was populated by build(), and

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    924     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    925       return self._functional_construction_call(inputs, args, kwargs,
--> 926                                                 input_list)
    927 
    928     # Maintains info about the `Layer.call` stack.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1090       # TODO(reedwm): We should assert input compatibility after the inputs
   1091       # are casted, not before.
-> 1092       input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
   1093       graph = backend.get_graph()
   1094       # Use `self._name_scope()` to avoid auto-incrementing the name.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    225                                ' is incompatible with layer ' + layer_name +
    226                                ': expected shape=' + str(spec.shape) +
--> 227                                ', found shape=' + str(shape))
    228 
    229 

ValueError: Input 0 is incompatible with layer lstm_1: expected shape=(None, None, 90), found shape=[90, 1, 1]

Resetting a cell with LSTM_cell = LSTM(n_a, return_state = True) is only for debugging purposes. Please remove it once you have completed your implementation. Here is the thread related to this issue.

In your case, "x" is most likely being transformed incorrectly during the iterations.
As you can see, there are multiple steps that transform "x". It is best to check the dimensions of "x" at each step and see which line transforms it incorrectly.

In each iteration,

Input to LSTM_cell : x.shape = (None, 1, 90)
After argmax : x.shape = (None,)
After one_hot : x.shape = (None, 90)
After RepeatVector: x.shape = (None, 1, 90)

As you can see, the shape of the output from RepeatVector is identical to the shape of the input to the LSTM_cell. In your case, most likely they are not.
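
If it helps, here is a minimal debugging sketch of one loop iteration that prints the shape of "x" after each transform. It assumes the LSTM_cell and densor objects, the states a and c, and n_values = 90 from the notebook; the printed shapes should match the list above.

import tensorflow as tf
from tensorflow.keras.layers import RepeatVector

# One iteration of the inference loop, with shape checks after each
# transform of "x". LSTM_cell, densor, a, c, and x are assumed to
# already exist as in the notebook; n_values is assumed to be 90.
a, _, c = LSTM_cell(x, initial_state=[a, c])   # x going in: (None, 1, 90)
out = densor(a)                                # out: (None, 90)

x = tf.math.argmax(out, axis=-1)
print("after argmax:      ", x.shape)          # expect (None,)

x = tf.one_hot(x, depth=90)
print("after one_hot:     ", x.shape)          # expect (None, 90)

x = RepeatVector(1)(x)
print("after RepeatVector:", x.shape)          # expect (None, 1, 90)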

Hope this helps.

Thank you! Issue solved