W1 A3 | Ex-1 | djmodel - AttributeError: layer "lstm" has multiple inbound nodes

I also had the experience of getting an error message after putting in simple, correct code for the slicing and reshaping. After clearing the outputs and restarting the kernel: "All tests passed".

4 Likes

Restarting the kernel is what solved the issue for me. As I spent quite a lot of time on this error, others might find it helpful if you could add this info to the notebook :slight_smile:

16 Likes

In "Train the model", where:

history = model.fit([X, a0, c0], list(Y), epochs=100, verbose = 0)

I got the following error. Please advise:


TypeError                                 Traceback (most recent call last)
&lt;ipython-input-...&gt; in &lt;module&gt;
----> 1 history = model.fit([X, a0, c0], list(Y), epochs=100, verbose = 0)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
    106   def _method_wrapper(self, *args, **kwargs):
    107     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
--> 108       return method(self, *args, **kwargs)
    109
    110     # Running inside `run_distribute_coordinator` already.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1061         use_multiprocessing=use_multiprocessing,
   1062         model=self,
-> 1063         steps_per_execution=self._steps_per_execution)
   1064
   1065     # Container that configures and calls `tf.keras.Callbacks`.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution)
   1115         use_multiprocessing=use_multiprocessing,
   1116         distribution_strategy=ds_context.get_strategy(),
-> 1117         model=model)
   1118
   1119     strategy = ds_context.get_strategy()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, sample_weight_modes, batch_size, epochs, steps, shuffle, **kwargs)
    273     inputs = pack_x_y_sample_weight(x, y, sample_weights)
    274
--> 275     num_samples = set(int(i.shape[0]) for i in nest.flatten(inputs))
    276     if len(num_samples) > 1:
    277       msg = "Data cardinality is ambiguous:\n"

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py in &lt;genexpr&gt;(.0)
    273     inputs = pack_x_y_sample_weight(x, y, sample_weights)
    274
--> 275     num_samples = set(int(i.shape[0]) for i in nest.flatten(inputs))
    276     if len(num_samples) > 1:
    277       msg = "Data cardinality is ambiguous:\n"

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
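This TypeError typically means that one of the objects passed to fit() has no static batch dimension, often a sign that a symbolic tensor slipped in where a numpy array was expected, or that stale kernel state is involved. For reference, the targets in this assignment are prepared time-major, so list(Y) yields one (m, n_values) array per model output. A toy numpy sketch of that shape expectation (the sizes here are placeholders, not the graded values):

```python
import numpy as np

# Toy sizes only -- placeholders, not the assignment's actual values
Ty, m, n_values = 4, 2, 3

Y = np.zeros((Ty, m, n_values))    # time-major targets: one slice per time step

targets = list(Y)                  # a list of Ty arrays, each of shape (m, n_values)
print(len(targets), targets[0].shape)
```

Each entry of the list then matches one of the model's Ty softmax outputs, and every array shares the same batch size m, so the cardinality check in data_adapter.py can succeed.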

1 Like

I had the same problem… It may sound weird, but after hours of dealing with it, it was solved just by restarting the kernel and clearing the outputs. No slicing changes needed, or anything else.

6 Likes

Perhaps I have an important addition to this error message. I am a beginner in deep learning, and this specialization brought me into the world of TensorFlow for the first time. In the exercises I stumbled over the same error, and at first I considered it to be a dimension mismatch. But after a while I figured out, with the help of Stack Overflow (python - AttributeError: The layer "input_4" has multiple inbound nodes, with different output shapes - Stack Overflow), that the real problem was the use of the global layer declarations together with running the notebook in the Coursera cloud with an unlucky Docker image configuration. After moving the reshaper, densor, … into the assignment before the for loop, the code worked.

So, if you are sure that your slicing is correct, consider the approach described above.
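A minimal sketch of that pattern, assuming tf.keras; the sizes (n_values, n_a, Tx) are toy placeholders, not the assignment's graded values:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense, Reshape
from tensorflow.keras.models import Model

# Toy sizes -- placeholders, not the assignment's values
n_values, n_a, Tx = 8, 4, 5

# Shared layers: created ONCE, before the loop, and reused at every time
# step. Recreating them on each call (or relying on stale globals from a
# previous run) is what leads to the "multiple inbound nodes" confusion.
reshaper = Reshape((1, n_values))
LSTM_cell = LSTM(n_a, return_state=True)
densor = Dense(n_values, activation="softmax")

X = Input(shape=(Tx, n_values))   # batch dimension m is implicit
a0 = Input(shape=(n_a,))
c0 = Input(shape=(n_a,))

a, c = a0, c0
outputs = []
for t in range(Tx):
    x = X[:, t, :]                # time step t: shape (m, n_values)
    x = reshaper(x)               # -> (m, 1, n_values) for the LSTM
    a, _, c = LSTM_cell(x, initial_state=[a, c])
    outputs.append(densor(a))     # one softmax output per time step

model = Model(inputs=[X, a0, c0], outputs=outputs)
print(len(model.outputs))         # one output per time step
```

Because reshaper, LSTM_cell, and densor are single objects reused across all Tx iterations, the same weights are applied at every step, which is exactly what the assignment intends.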

3 Likes

Great, that helped a lot (after a week of poking around)

2 Likes

After wasting a massive amount of time, rerunning the code with X[:, t, :] turned out to be fine… thanks for the suggestion.

2 Likes

Why is it t instead of t-1 for the t-th element?

1 Like

Why does X[:, t, :] work?

If X = Input(shape=(Tx, n_values)), it makes sense that the slice would look like X[t, :].

2 Likes

Hi @davidaguilaratx ,

Here is an extract from the implementation notes:

Inputs (given)

  • The Input() layer is used for defining the input X as well as the initial hidden state a0 and cell state c0.
  • The shape parameter takes a tuple that does not include the batch dimension (m).
    • For example,
X = Input(shape=(Tx, n_values)) # X has 3 dimensions and not 2: (m, Tx, n_values)

Step 1: Outputs

  • Create an empty list “outputs” to save the outputs of the LSTM Cell at every time step.

Step 2: Loop through time steps (TODO)

  • Loop for t ∈ 1, …, Tx:

2A. Select the 't'-th time-step vector from X.

  • X has the shape (m, Tx, n_values).
  • The shape of the 't' selection should be (n_values,).
  • Recall that if you were implementing in numpy instead of Keras, you would extract a slice from a 3D numpy array like this:
var1 = array1[:,1,:]
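As a toy numpy illustration of that slice (the sizes here are made up, not the assignment's):

```python
import numpy as np

# A 3D array shaped (m, Tx, n_values), like the notes describe
m, Tx, n_values = 2, 5, 3
array1 = np.arange(m * Tx * n_values).reshape(m, Tx, n_values)

t = 1
var1 = array1[:, t, :]   # all examples, time step t, all values
print(var1.shape)        # (2, 3): one (n_values,) vector per example
```

The batch dimension m is kept, the time dimension is indexed away, so each example contributes a single (n_values,) vector for time step t.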
1 Like

I had the same error, but it was solved by reloading and running all the code cells from the start.

1 Like

I had the error:

['InputLayer', [(None, 64)], 0]

does not match the input value:

['InputLayer', [(None, 60)], 0]

How can I fix it?

1 Like

Hi,
Thank you for your help. In the end I restarted and cleared the output, and it worked. If anyone has a similar concern where everything looks right but the output isn't, restarting may be a good choice.

1 Like

Thanks! I was baffled why it wasn’t working but that solved it!

1 Like

Hello TMosh!

Thank you for your help! I trust that you are the same mentor for the Machine Learning course. Thank you again!

1 Like

@Amy_Xu, yes I also mentor for the ML course.

Thank you, sir. That works.

1 Like

so how can I fix that issue?

2 Likes

Try the guidance from this thread:

Finally, after two hours of research, this solved the problem: a code change I had made, which didn't work before restarting, did the trick after I restarted my kernel.
Thanks for this suggestion!

1 Like