Emojify_V2 bug and doubts

Hi,

I’m getting this error traceback and I don’t understand what it is pointing at. Is the LSTM the problem? It looks fine to me. When dealing with Keras my questions multiply, and the documentation is not that friendly. I also have doubts about my model implementation.


ValueError Traceback (most recent call last)
in
22
23
---> 24 Emojify_V2_test(Emojify_V2)

in Emojify_V2_test(target)
16
17 maxLen = 4
---> 18 model = target((maxLen,), word_to_vec_map, word_to_index)
19
20 expectedModel = [['InputLayer', [(None, 4)], 0], ['Embedding', (None, 4, 2), 30], ['LSTM', (None, 4, 128), 67072, (None, 4, 2), 'tanh', True], ['Dropout', (None, 4, 128), 0, 0.5], ['LSTM', (None, 128), 131584, (None, 4, 128), 'tanh', False], ['Dropout', (None, 128), 0, 0.5], ['Dense', (None, 5), 645, 'linear'], ['Activation', (None, 5), 0]]

in Emojify_V2(input_shape, word_to_vec_map, word_to_index)
34 # Propagate X trough another LSTM layer with 128-dimensional hidden state
35 # The returned output should be a single hidden state, not a batch of sequences.
---> 36 X = LSTM(units = 128, return_sequences= False)(X)
37 # Add dropout with a probability of 0.5
38 X = Dropout(rate = 0.5 )(X)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
661
662 if initial_state is None and constants is None:
--> 663 return super(RNN, self).__call__(inputs, **kwargs)
664
665 # If any of initial_state or constants are specified and are Keras

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
924 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
925 return self._functional_construction_call(inputs, args, kwargs,
--> 926 input_list)
927
928 # Maintains info about the Layer.call stack.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
1090 # TODO(reedwm): We should assert input compatibility after the inputs
1091 # are casted, not before.
--> 1092 input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
1093 graph = backend.get_graph()
1094 # Use self._name_scope() to avoid auto-incrementing the name.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
178 'expected ndim=' + str(spec.ndim) + ', found ndim=' +
179 str(ndim) + '. Full shape received: ' +
--> 180 str(x.shape.as_list()))
181 if spec.max_ndim is not None:
182 ndim = x.shape.ndims

ValueError: Input 0 of layer lstm_5 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 128]

Hi Edu4rd,

Based on the traceback, I would say the problem lies with the dimensions of the input X fed into the LSTM layer, which means there is a problem with the code before this LSTM layer.

To debug this, it may help to print out the shapes of each layer’s input and output tensors, and check whether the input to this LSTM layer (i.e. X) has the 3 expected dimensions.
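
For example, something like this (a minimal sketch using toy sizes that match the unit test, i.e. a 15-word vocabulary with 2-dimensional vectors; your real vocabulary and GloVe dimension will differ):

from tensorflow.keras.layers import Input, Embedding

maxLen = 4
sentence_indices = Input(shape=(maxLen,), dtype='int32')
print(sentence_indices.shape)   # (None, 4) -> 2-D: (batch, timesteps)

embeddings = Embedding(input_dim=15, output_dim=2)(sentence_indices)
print(embeddings.shape)         # (None, 4, 2) -> 3-D: (batch, timesteps, features)

An LSTM layer needs that 3-D input, so if X prints as 2-D right before the failing call, the problem is upstream.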

Hi,

My initial tensor, sentence_indices, has this shape when running the test:
Tensor("Shape_2:0", shape=(2,), dtype=int32)
I’m debugging it with: print(tensorflow.shape(sentence_indices))

After sending it to the embedding layer I have embeddings with this shape:
Tensor("Shape_3:0", shape=(3,), dtype=int32)
and this is what I feed to LSTM:
X = LSTM(units = 128, return_sequences= False)(embeddings)

Is this shape=(3,) the 3 expected dimensions you mentioned?
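
In case it helps, this is roughly what I’m running (a sketch with placeholder Embedding sizes, not the assignment’s):

import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding

maxLen = 4
sentence_indices = Input(shape=(maxLen,), dtype='int32')
embeddings = Embedding(input_dim=15, output_dim=2)(sentence_indices)

# tf.shape builds a 1-D tensor with one entry per dimension,
# so its own shape=(2,) or shape=(3,) counts the dimensions
print(tf.shape(sentence_indices))   # shape=(2,) -> sentence_indices is 2-D
print(tf.shape(embeddings))         # shape=(3,) -> embeddings is 3-D

# the static shape is easier to read directly:
print(sentence_indices.shape)       # (None, 4)
print(embeddings.shape)             # (None, 4, 2)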

Thxs

Also, it’s specified that the model takes as input an array of sentences of shape (m, max_len,), defined by input_shape (a parameter), so I just use it.

I would assume the input is then (m, maxLen) and the output (m, maxLen, 50)…

Just adding my thoughts for clarification

Thxs

Ok, I found the error. I was calling the two LSTMs the same way…
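
In case it helps someone else: the first LSTM has to return the full sequence so the second LSTM still gets 3-D input, while the second one returns only the last hidden state. Roughly (a sketch; embeddings is the output of the embedding layer, e.g. (None, maxLen, 50) with 50-d GloVe vectors):

from tensorflow.keras.layers import LSTM, Dropout

# first LSTM: return the whole sequence -> (None, maxLen, 128), 3-D
X = LSTM(units=128, return_sequences=True)(embeddings)
X = Dropout(rate=0.5)(X)
# second LSTM: return only the last hidden state -> (None, 128), 2-D
X = LSTM(units=128, return_sequences=False)(X)
X = Dropout(rate=0.5)(X)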
🙂

Thxs