W1 A3 | Ex-1 | djmodel - AttributeError: layer "lstm" has multiple inbound nodes

I'm getting the following error in Ex 1, the djmodel function:

    AttributeError: The layer "lstm" has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use get_output_shape_at(node_index) instead.

Any guesses on what could be the issue?

Many thanks.

5 Likes

BTW, this is the full error output from the console:


    AttributeError                            Traceback (most recent call last)
    <ipython-input-...> in <module>
          1 # UNIT TEST
    ----> 2 output = summary(model)
          3 comparator(output, djmodel_out)

    ~/work/W1A3/test_utils.py in summary(model)
         33     result = []
         34     for layer in model.layers:
    ---> 35         descriptors = [layer.__class__.__name__, layer.output_shape, layer.count_params()]
         36         if (type(layer) == Conv2D):
         37             descriptors.append(layer.padding)

    /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in output_shape(self)
       2190                              'ill-defined for the layer. '
       2191                              'Use `get_output_shape_at(node_index)` '
    -> 2192                              'instead.' % self.name)
       2193
       2194     @property

    AttributeError: The layer "lstm" has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use get_output_shape_at(node_index) instead.

2 Likes

And, continuing on to the next cells, when I execute:

    history = model.fit([X, a0, c0], list(Y), epochs=100, verbose=0)

I get the following error:

    InvalidArgumentError: Incompatible shapes: [30,256] vs. [32,256]
        [[{{node while_17/body/_1/while/add}}]]
        [[functional_1/lstm/PartitionedCall]] [Op:__inference_train_function_68849]

    Function call stack:
    train_function -> train_function -> train_function

The message references a shape mismatch, which might be a hint as to where the issue is, but I still cannot figure it out…

2 Likes

I am also getting a similar error, but for music_inference_model().

4 Likes

OK, I think I found the problem and a solution that works, but I don't understand the reason why. In the code, there is:

    X = Input(shape=(Tx, n_values))

but the description in step 2A says that X has a different shape:

2A. Select the 't' time-step vector from X.

  • X has the shape (m, Tx, n_values).

(notice the additional 'm' dimension)

So in my code I tested x = X[t,:] (consistent with X = Input(shape=(Tx, n_values)) in the code), but it didn't work.

It only worked with x = X[:,t,:] (consistent with the description).

I don't really understand the mismatch, then, or why we need to account for the batch dimension when slicing for it to work. Could anybody explain this in more detail? I have noticed that a lot of students are having this issue as well.

Many thanks for your help

11 Likes

Hi Federico,
In fact, we slice the input data to get the data for one time step, at time t.
When we construct a model, the batch size is unknown, so the shape we specify in the input layer does not include the batch size. If you print out X.shape, you'll see (None, Tx, n_values); None means undetermined. That's why we have to slice X along its 2nd dimension.
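
To see this concretely, here is a minimal sketch, assuming Tx = 30 and n_values = 90 (the values this assignment appears to use):

    from tensorflow.keras.layers import Input

    X = Input(shape=(30, 90))   # shape=(Tx, n_values); Keras adds the batch axis itself
    print(X.shape)              # (None, 30, 90) -- None is the unknown batch size m
    x = X[:, 5, :]              # time step t=5, sliced across the whole batch
    print(x.shape)              # (None, 90)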

5 Likes

Hi @rishabh_jha ,
This thread may be helpful for you.

2 Likes

A major issue seems to be that the instruction in 2A - that X has shape (m, Tx, n_values) - doesn't agree with the statement about the input layer, shape=(Tx, n_values).

I find the instructions rather confusing.

2 Likes

If you've handled all of the common errors (sketched in the code below):

  • slicing X correctly.
  • using the reshaper() and densor() functions with the appropriate arguments.
  • using LSTM_cell() with the appropriate "inputs=" and "initial_state=[…]" arguments.

… and you still get an error about a layer having "multiple inbound nodes", then try re-running all of the cells in the notebook. Sometimes that's the only way to clear some error messages.
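
For reference, here is a generic sketch of that pattern, assembled only from the calls already quoted in this thread. It is illustrative, not the graded solution; the layer definitions and the dimension values are assumptions about how the notebook sets things up:

    from tensorflow.keras.layers import Input, LSTM, Dense, Reshape

    Tx, n_a, n_values = 30, 64, 90             # assumed assignment values
    reshaper = Reshape((1, n_values))          # shared layer objects, created once
    LSTM_cell = LSTM(n_a, return_state=True)
    densor = Dense(n_values, activation='softmax')

    X = Input(shape=(Tx, n_values))            # batch axis is implicit (None)
    a0 = Input(shape=(n_a,))
    c0 = Input(shape=(n_a,))
    a, c = a0, c0

    outputs = []
    for t in range(Tx):
        x = X[:, t, :]                         # step 2A: keep the batch axis, slice time
        x = reshaper(x)                        # (None, n_values) -> (None, 1, n_values)
        a, _, c = LSTM_cell(inputs=x, initial_state=[a, c])
        out = densor(a)                        # prediction for this time step
        outputs.append(out)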

29 Likes

Hi all
This is my first time asking for help with code, but I am really stuck on this same error:

    The layer "lstm" has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use get_output_shape_at(node_index) instead.

I am stuck in djmodel(). I think that I'm slicing X correctly (using only two dimensions, as explained, not the ones that were mentioned earlier in this same thread); then I used reshaper, LSTM_cell and densor with (I think) the correct options, and finally appended the output. But I cannot understand what I'm doing wrong. There are only a few options for the layers, and the instructions seem rather simple…

Sorry, I know my request for help is rather vague, but I don't know how much I can explain without pasting the code.

Thanks

1 Like

What line of code did you use for slicing X?
Go ahead and post that line of code. I’ll ask you to edit your reply and remove the code later.

Hi @mpsica,

How did you initialise a0 and c0? What shape has been assigned to them?

1 Like

I am getting the same error as yours. Have you been able to resolve it yet?

1 Like

Thanks @Kic for your response.
I think the initialization of a0 and c0 was already included in the code. It was done using Input(), and their shapes are the same: (n_a,).

Hi @mpsica ,

We set inputs=x and initial_state=[a,c] when calling LSTM_cell(), so if your slicing of X is correct, and the shapes of a and c are correct, then there shouldn't be any problem. In the first instance, a=a0 and c=c0.

It is correct that a and c have the same shape, (n_a,). So what is left to investigate is x - what does the slicing look like?
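
In code, a single step then looks like this (a sketch; the three-way unpacking assumes LSTM_cell was created with return_state=True, as in the notebook):

    # first iteration: a = a0, c = c0; the returned states are then fed
    # back in, threading the LSTM state through the time steps
    a, _, c = LSTM_cell(inputs=x, initial_state=[a, c])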

1 Like

Hi @Kic
LSTM_cell()'s input and initial states are as you described.
Regarding x: from Input() I assume that X has shape (Tx, n_values), so a slice x of X at t would be X[t,:]. And then x is reshaped with reshaper to (1, n_values).

I think you are right in tracing the error there (but I still cannot find it). I ran only the comparator (two cells ahead) and the error is:

Test failed.
Expected value

    ['TensorFlowOpLayer', [(None, 90)], 0]

does not match the input value:

    ['TensorFlowOpLayer', [(30, 90)], 0]

1 Like

For everybody with this same error: I have not solved the problem yet, but I'm using a diagnostic that @edwardyu suggested elsewhere.

        print('before reshaper:', x.shape)   # expect (None, n_values)
        x = reshaper(x)
        print('after reshaper:', x.shape)    # expect (None, 1, n_values)
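
With this assignment's dimensions, that should print (None, 90) before the reshaper and (None, 1, 90) after. If the first dimension shows 30 (the value of Tx) instead of None, the slice is running along the wrong axis - exactly the (30, 90) vs (None, 90) mismatch the comparator reported above.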

I am seeing that my slicing is not working properly.

2 Likes

Hi @mpsica ,

The slicing has to be X[:,t,:] in order for it to work.

6 Likes

Thanks @Kic
That worked! But, for everybody trying this: the solution only worked after I restarted the kernel and ran everything again.

26 Likes

Hi @mpsica,

It is always good practice to run your code from a clean state after making changes.

5 Likes