I'm getting the following error in Ex 1, in the djmodel function:
"AttributeError: The layer 'lstm' has multiple inbound nodes, with different output shapes. Hence the notion of 'output shape' is ill-defined for the layer. Use get_output_shape_at(node_index) instead."
AttributeError Traceback (most recent call last)
<ipython-input> in <module>
      1 # UNIT TEST
----> 2 output = summary(model)
      3 comparator(output, djmodel_out)

~/work/W1A3/test_utils.py in summary(model)
     33     result = []
     34     for layer in model.layers:
---> 35         descriptors = [layer.__class__.__name__, layer.output_shape, layer.count_params()]
     36         if (type(layer) == Conv2D):
     37             descriptors.append(layer.padding)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in output_shape(self)
   2190                          'ill-defined for the layer. '
   2191                          'Use get_output_shape_at(node_index) '
-> 2192                          'instead.' % self.name)
   2193
   2194     @property

AttributeError: The layer "lstm" has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use get_output_shape_at(node_index) instead.
Ok, I think I found the problem and a solution that works, but I don’t understand the reason why.
in the code, there is:
X = Input(shape=(Tx, n_values))
but the description in step 2A says that X has a different shape:
2A. Select the ‘t’ time-step vector from X .
X has the shape (m, Tx, n_values).
(notice the additional ‘m’ to the dimensions)
So in my code I tested x = X[t,:] (following X = Input(shape=(Tx, n_values)) in the code), but it didn't work.
It only worked with x = X[:,t,:] (following the description).
I don't really understand the mismatch, or why we need to slice past the batch dimension in order for it to work. Could anybody explain this in further detail? I've noticed that a lot of students are having this issue as well.
Hi Federico,
In fact, we slice the input data to get the data for a single time step t.
When we construct the model, the batch size is unknown; the shape we specify in the input layer does not include the batch size. So if you print X.shape, you'll see (None, Tx, n_values), where None means undetermined. That's why we have to slice X along the second dimension.
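To see this concretely, here is a small sketch (my own illustration, not part of the assignment; Tx=30 and n_values=90 are the values this notebook uses):

```python
from tensorflow.keras.layers import Input

Tx, n_values = 30, 90
X = Input(shape=(Tx, n_values))  # batch size is deliberately left out

print(X.shape)  # (None, 30, 90): Keras prepends an undetermined batch axis

t = 0
x_good = X[:, t, :]  # slices the TIME axis -> shape (None, 90)
x_bad = X[t, :]      # slices the BATCH axis -> shape (30, 90)
print(x_good.shape, x_bad.shape)
```

So X[t, :] does not pick a time step at all; it picks "sample number t" out of the (undetermined) batch, which is why only X[:, t, :] builds the model correctly.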
A major source of confusion seems to be that the instructions in 2A (that X has shape (m, Tx, n_values)) appear to disagree with the input layer's shape=(Tx, n_values).
If you have:
- used the "reshaper()" and "densor()" functions with the appropriate arguments, and
- used LSTM_cell() with the appropriate "inputs=" and "initial_state=[…]" arguments,
… and you still get an error about a layer having "multiple inbound nodes", then try re-running all of the cells in the notebook. Sometimes that's the only way to clear some error messages.
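For reference, the overall pattern those steps describe looks roughly like the sketch below. This is my own illustration of the structure, not the graded solution; the layer definitions (reshaper, LSTM_cell, densor) mirror the ones given earlier in the notebook, and n_a=64, Tx=30, n_values=90 are the notebook's values:

```python
from tensorflow.keras.layers import Input, LSTM, Dense, Reshape
from tensorflow.keras.models import Model

n_a, Tx, n_values = 64, 30, 90

# Shared layer objects: reusing the SAME objects on every iteration is what
# makes the LSTM weights shared across time steps.
reshaper = Reshape((1, n_values))
LSTM_cell = LSTM(n_a, return_state=True)
densor = Dense(n_values, activation='softmax')

X = Input(shape=(Tx, n_values))
a0 = Input(shape=(n_a,), name='a0')
c0 = Input(shape=(n_a,), name='c0')

a, c = a0, c0
outputs = []
for t in range(Tx):
    x = X[:, t, :]   # (None, n_values): slice the time axis, not the batch axis
    x = reshaper(x)  # (None, 1, n_values): the LSTM expects a 3-D input
    a, _, c = LSTM_cell(inputs=x, initial_state=[a, c])
    outputs.append(densor(a))  # (None, n_values) per time step

model = Model(inputs=[X, a0, c0], outputs=outputs)
```

Because LSTM_cell is called Tx times, the layer ends up with many inbound nodes; that is also why stale graph state from earlier runs can trigger the "multiple inbound nodes" error, and why a clean re-run of all cells often clears it.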
Hi all
This is my first time asking for help with code, but I am really stuck on this same error:
The layer “lstm” has multiple inbound nodes, with different output shapes. Hence the notion of “output shape” is ill-defined for the layer. Use get_output_shape_at(node_index) instead.
I am stuck in djmodel(). I think I'm slicing X correctly (using only two dimensions, as explained, not the ones that were mentioned earlier in this same thread); then I used reshaper, LSTM_cell and densor with (I think) the correct options, and finally appended the output. But I cannot see what I'm doing wrong. There are only a few options for the layers, and the instructions seem rather simple…
Sorry, I know my request for help is rather vague, but I don't know how much I can explain without pasting the code.
Thanks @Kic for your response.
I think the initialization of a0 and c0 was already included in the code. It was done using Input(), and their shapes are the same: (n_a,).
We set inputs=x, and initial_state=[a,c]
when calling LSTM_cell(), so if your slicing of X is correct, and the shape for a and c are correct, then there shouldn’t be any problem. In the first instance, a=a0, c=c0.
It is correct that a and c have the same shape, (n_a,). So what is left to investigate is x: what does the slicing look like?
Hi @Kic
LSTM_cell() input and initial states are as you described.
Regarding x: from Input() I assume that X has shape (Tx, n_values), so a slice x of X at time t would be X[t,:]. Then x is reshaped with reshaper to (1, n_values).
I think you are on the right track tracing the error (though I still cannot find it). I ran only the comparator (two cells ahead) and the error is:
Test failed
Expected value
['TensorFlowOpLayer', [(None, 90)], 0]
does not match the input value:
['TensorFlowOpLayer', [(30, 90)], 0]
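In case it helps anyone reading this far: (30, 90) is exactly (Tx, n_values), which is the telltale signature of slicing the batch axis instead of the time axis. A quick check (my own sketch, not the course's test code):

```python
from tensorflow.keras.layers import Input

Tx, n_values = 30, 90
X = Input(shape=(Tx, n_values))  # actual tensor shape: (None, Tx, n_values)

# X[t, :] drops the batch axis, so every sliced x has shape (30, 90);
# that is why the comparator sees (30, 90) where it expects (None, 90).
print(X[0, :].shape)     # (30, 90)
print(X[:, 0, :].shape)  # (None, 90)
```

Switching the slice to X[:, t, :] keeps the None batch dimension and makes the comparator's expected value match.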