Course 5 Week 1 Assignment 3: problem with model.fit() for djmodel

All the test cases prior to this line have passed. The model was successfully created with the correct number of parameters, and it compiled successfully with the specified loss function and optimizer. But when I run this cell, which is not supposed to be edited:
history = model.fit([X, a0, c0], list(Y), epochs=100, verbose = 0)

I am getting a long error whose last line reads:
ValueError: The two structures don't have the same sequence length. Input structure has length 30, while shallow structure has length 2.

I tried restarting the kernel and running it three times, but the problem persists.
Can you suggest a solution?

Please post the entire error message, not just the ending.

The critical items (most commonly missed) in djmodel() are:

  • how you slice X to get x. Step 2A in the instructions has a hint for this.
  • what variables you use in LSTM_cell() for the initial_state.
  • use the .append() method to add out to outputs (see the sketch just after this list).
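For reference, here is a minimal, self-contained sketch of that loop pattern. The layer sizes and the reshaper layer are illustrative assumptions, not necessarily the notebook's exact code:

from tensorflow.keras.layers import Input, LSTM, Dense, Reshape
from tensorflow.keras.models import Model

n_values, n_a, Tx = 90, 64, 30            # illustrative sizes

# Shared (global) layers, reused at every time step
reshaper = Reshape((1, n_values))
LSTM_cell = LSTM(n_a, return_state=True)
densor = Dense(n_values, activation='softmax')

def djmodel(Tx, LSTM_cell, densor, reshaper):
    X = Input(shape=(Tx, n_values))
    a0 = Input(shape=(n_a,), name='a0')
    c0 = Input(shape=(n_a,), name='c0')
    a, c = a0, c0
    outputs = []                                       # one entry per time step
    for t in range(Tx):
        x = X[:, t, :]                                 # slice out time step t
        x = reshaper(x)                                # (m, n_values) -> (m, 1, n_values)
        a, _, c = LSTM_cell(x, initial_state=[a, c])   # carry a and c forward, not a0/c0
        outputs.append(densor(a))                      # append, don't overwrite
    return Model(inputs=[X, a0, c0], outputs=outputs)

model = djmodel(Tx, LSTM_cell, densor, reshaper)
print(len(model.outputs))                              # should be Tx = 30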

Note that any time you edit the djmodel() function, you must go back and re-run the previous cell where LSTM_cell() is defined.

That's because LSTM_cell is a global object: it is created once, and its internal weights are built the first time it is called inside djmodel(). So if djmodel() sends it data that causes it to be built with the wrong shapes, you have to re-create it (by running the previous cell again, or by restarting the kernel).
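Here is a short standalone demonstration of that build-on-first-call behavior (a sketch, not the notebook's code):

import tensorflow as tf
from tensorflow.keras.layers import LSTM

LSTM_cell = LSTM(64, return_state=True)   # layer created, but no weights yet
_ = LSTM_cell(tf.zeros((1, 1, 90)))       # first call builds weights for 90 input features
# LSTM_cell(tf.zeros((1, 1, 30)))         # would now raise a shape error: the layer
#                                         # is locked to 90 features until you
#                                         # re-create it by re-running its defining cell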

My mistake; here is the complete error message I got:

ValueError Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 history = model.fit([X, a0, c0], list(Y), epochs=100, verbose = 0)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside run_distribute_coordinator already.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1096 batch_size=batch_size):
1097 callbacks.on_train_batch_begin(step)
--> 1098 tmp_logs = train_function(iterator)
1099 if data_handler.should_sync:
1100 context.async_wait()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
778 else:
779 compiler = "nonXla"
--> 780 result = self._call(*args, **kwds)
781
782 new_tracing_count = self._get_tracing_count()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
821 # This is the first call of __call__, so we have to initialize.
822 initializers = []
--> 823 self._initialize(args, kwds, add_initializers_to=initializers)
824 finally:
825 # At this point we know that the initialization is complete (or less

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
695 self._concrete_stateful_fn = (
696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 697 *args, **kwds))
698
699 def invalid_creator_scope(*unused_args, **unused_kwds):

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2853 args, kwargs = None, None
2854 with self._lock:
--> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2856 return graph_function
2857

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3211
3212 self._function_cache.missed.add(call_context_key)
--> 3213 graph_function = self._create_graph_function(args, kwargs)
3214 self._function_cache.primary[cache_key] = graph_function
3215 return graph_function, args, kwargs

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3073 arg_names=arg_names,
3074 override_flat_arg_shapes=override_flat_arg_shapes,
--> 3075 capture_by_value=self._capture_by_value),
3076 self._function_attributes,
3077 function_spec=self.function_spec,

/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
984 _, original_func = tf_decorator.unwrap(python_func)
985
--> 986 func_outputs = python_func(*func_args, **func_kwargs)
987
988 # invariant: func_outputs contains only Tensors, CompositeTensors,

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
598 # __wrapped__ allows AutoGraph to swap in a converted function. We give
599 # the function a weak reference to itself to avoid a reference cycle.
--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds)
601 weak_wrapped_fn = weakref.ref(wrapped_fn)
602

/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise

ValueError: in user code:

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
    return step_function(self, iterator)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
    return fn(*args, **kwargs)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
    outputs = model.train_step(data)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:759 train_step
    self.compiled_metrics.update_state(y, y_pred, sample_weight)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/compile_utils.py:388 update_state
    self.build(y_pred, y_true)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/compile_utils.py:319 build
    self._metrics, y_true, y_pred)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/nest.py:1139 map_structure_up_to
    **kwargs)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/nest.py:1221 map_structure_with_tuple_paths_up_to
    expand_composites=expand_composites)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/nest.py:854 assert_shallow_structure
    input_length=len(input_tree), shallow_length=len(shallow_tree)))

ValueError: The two structures don't have the same sequence length. Input structure has length 30, while shallow structure has length 2.

On your suggestion, I rechecked the slicing of X and the inputs to LSTM_cell(). I think they are fine. I will send you the lines of code privately.
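One quick sanity check you can run before fit(): the error says the label structure has 30 entries while the prediction structure has only 2, so compare the model's output structure against Y directly (illustrative snippet, assuming the notebook's model and Y variables):

print(len(model.outputs))   # should be Tx = 30: one densor output per time step
print(len(list(Y)))         # 30 labels; if the first number is 2, the outputs
                            # list built inside djmodel() is the problem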