C5W3A2 - modelf - 'list' object has no attribute 'shape'

Hello, I keep getting the same error when running the modelf cell, even after making some changes like adding an input shape to the Bidirectional LSTM. I couldn't find anyone with a similar issue. Any help would be greatly appreciated.


AttributeError                            Traceback (most recent call last)
in
     34
     35
---> 36 modelf_test(modelf)

in modelf_test(target)
     11
     12
---> 13 model = target(Tx, Ty, n_a, n_s, len_human_vocab, len_machine_vocab)
     14
     15 print(summary(model))

in modelf(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size)
     37
     38     # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
---> 39     context = one_step_attention(a, s)
     40
     41     # Step 2.B: Apply the post-attention LSTM cell to the "context" vector.

in one_step_attention(a, s_prev)
     20     # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
     21     # For grading purposes, please list 'a' first and 's_prev' second, in this order.
---> 22     concat = concatenator([a, s_prev])
     23     # Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈ 1 line)
     24     e = densor1(concat)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    924     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    925       return self._functional_construction_call(inputs, args, kwargs,
--> 926                                                 input_list)
    927
    928     # Maintains info about the Layer.call stack.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1115       try:
   1116         with ops.enable_auto_cast_variables(self._compute_dtype_object):
--> 1117           outputs = call_fn(cast_inputs, *args, **kwargs)
   1118
   1119       except errors.OperatorNotAllowedInGraphError as e:

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/merge.py in call(self, inputs)
    181       return y
    182     else:
--> 183       return self._merge_function(inputs)
    184
    185   @tf_utils.shape_type_conversion

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/merge.py in _merge_function(self, inputs)
    520
    521   def _merge_function(self, inputs):
--> 522     return K.concatenate(inputs, axis=self.axis)
    523
    524   @tf_utils.shape_type_conversion

/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    199   """Call target, and fall back on dispatchers if there is a TypeError."""
    200   try:
--> 201     return target(*args, **kwargs)
    202   except (TypeError, ValueError):
    203     # Note: convert_to_eager_tensor currently raises a ValueError, not a

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in concatenate(tensors, axis)
   2868   """
   2869   if axis < 0:
-> 2870     rank = ndim(tensors[0])
   2871     if rank:
   2872       axis %= rank

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in ndim(x)
   1333
   1334   """
-> 1335   dims = x.shape._dims
   1336   if dims is not None:
   1337     return len(dims)

AttributeError: 'list' object has no attribute 'shape'

EDIT - Just in case someone happens to make the same silly mistake as I have: be sure to type "return_sequences" in the LSTM layer, and not "return_states" like I did (not for the first time) :sweat_smile:
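For anyone hitting this later, here is a minimal sketch of why that slip produces exactly this traceback (assuming TensorFlow 2.x Keras; the sizes `Tx`, `n_a`, and `n_features` below are illustrative, not the assignment's actual values):

```python
import tensorflow as tf
from tensorflow.keras.layers import Bidirectional, Input, LSTM

Tx, n_a, n_features = 30, 32, 37  # illustrative sizes only

X = Input(shape=(Tx, n_features))

# Correct: return_sequences=True returns ONE tensor holding a hidden state
# per time step, shape (None, Tx, 2 * n_a) after the Bidirectional wrapper.
# This is what one_step_attention expects to receive as `a`.
a = Bidirectional(LSTM(n_a, return_sequences=True))(X)

# The slip: return_state=True makes the wrapped layer return a LIST,
# [outputs, forward_h, forward_c, backward_h, backward_c]. Passing that
# list downstream to the Concatenate layer is what ends in
# "AttributeError: 'list' object has no attribute 'shape'".
a_wrong = Bidirectional(LSTM(n_a, return_state=True))(X)
print(isinstance(a_wrong, list))  # True
```

In other words, the error is raised far away from the real mistake: `one_step_attention` receives a Python list instead of a tensor, and the Keras backend only notices when it asks that list for its `.shape`.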

Thank you!!! I’ve been beating my head against this for hours, and I don’t think I would have found that slip on my own.