AttributeError in C3_W1 Assignment Q6

Hi, I am new to DeepLearning.AI. I'm encountering an error in Q6 that seems different from the ones already discussed:
'EvalTask' object has no attribute 'weights'.
My lab ID is utzmghixtiyl.
Your help would be highly appreciated. Thanks a lot.

Hi, I was able to fix the attribute error, but now I'm getting an AssertionError: Invalid shape (16, 1); expected (16,). I checked the other public threads that discuss similar issues, but I'm still unable to resolve it.

Hey @Shaun_Basil_Almeida,
It seems to me that somewhere you have either missed a reshape or perhaps missed keepdims = True. Either way, could you please post your error stack so that we can try to figure out the exact issue?
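
For reference, here is how a (16, 1) vs (16,) mismatch typically looks in NumPy (not assignment code); whether you need to add or drop the trailing axis depends on what the metric layer expects:

import numpy as np

targets = np.random.randint(0, 2, size=(16,))   # shape (16,): what the metric layer expects
column = targets.reshape(-1, 1)                 # shape (16, 1): triggers the shape assertion
print(targets.shape, column.shape)              # (16,) (16, 1)
print(column.squeeze(axis=1).shape)             # (16,) again once the extra axis is removed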

Cheers,
Elemento

Here are the error stacks:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>
      2 # Take a look on how the eval_task is inside square brackets and
      3 # take that into account for you train_model implementation
----> 4 training_loop = train_model(model, train_task, [eval_task], 100, output_dir_expand)

<ipython-input-...> in train_model(classifier, train_task, eval_task, n_steps, output_dir)
     21                                 eval_tasks=[eval_task], # The evaluation task
     22                                 output_dir=output_dir, # The output directory
---> 23                                 random_seed=31 # Do not modify this random seed in order to ensure reproducibility and for grading purposes.
     24     ) 
     25 

/opt/conda/lib/python3.7/site-packages/trax/supervised/training.py in __init__(self, model, tasks, eval_model, eval_tasks, output_dir, checkpoint_at, checkpoint_low_metric, checkpoint_high_metric, permanent_checkpoint_at, eval_at, which_task, n_devices, random_seed, loss_chunk_size, use_memory_efficient_trainer, adasum, callbacks)
    300     metric_names = [
    301         name  # pylint: disable=g-complex-comprehension
--> 302         for eval_task in self._eval_tasks
    303         for name in eval_task.metric_names
    304     ]

/opt/conda/lib/python3.7/site-packages/trax/supervised/training.py in <listcomp>(.0)
    301         name  # pylint: disable=g-complex-comprehension
    302         for eval_task in self._eval_tasks
--> 303         for name in eval_task.metric_names
    304     ]
    305     self._rjust_len = max(map(len, loss_names + metric_names))

AttributeError: 'list' object has no attribute 'metric_names'
---------------------------------------------------------------------------
LayerError                                Traceback (most recent call last)
<ipython-input-102-7c2f23c889cd> in <module>
      6     pass
      7 
----> 8 w1_unittest.test_train_model(train_model(classifier(), train_task, [eval_task], 10, './model_test/'))

<ipython-input-100-918bf0dfcb18> in train_model(classifier, train_task, eval_task, n_steps, output_dir)
     20                                 eval_tasks=eval_task, # The evaluation task
     21                                 output_dir=output_dir, # The output directory
---> 22                                 random_seed=31 # Do not modify this random seed in order to ensure reproducibility and for grading purposes.
     23     ) 
     24 

/opt/conda/lib/python3.7/site-packages/trax/supervised/training.py in __init__(self, model, tasks, eval_model, eval_tasks, output_dir, checkpoint_at, checkpoint_low_metric, checkpoint_high_metric, permanent_checkpoint_at, eval_at, which_task, n_devices, random_seed, loss_chunk_size, use_memory_efficient_trainer, adasum, callbacks)
    305     self._rjust_len = max(map(len, loss_names + metric_names))
    306     self._evaluator_per_task = tuple(
--> 307         self._init_evaluator(eval_task) for eval_task in self._eval_tasks)
    308 
    309     if self._output_dir is None:

/opt/conda/lib/python3.7/site-packages/trax/supervised/training.py in <genexpr>(.0)
    305     self._rjust_len = max(map(len, loss_names + metric_names))
    306     self._evaluator_per_task = tuple(
--> 307         self._init_evaluator(eval_task) for eval_task in self._eval_tasks)
    308 
    309     if self._output_dir is None:

/opt/conda/lib/python3.7/site-packages/trax/supervised/training.py in _init_evaluator(self, eval_task)
    364     """Initializes the per-task evaluator."""
    365     model_with_metrics = _model_with_metrics(
--> 366         self._eval_model, eval_task)
    367     if self._use_memory_efficient_trainer:
    368       return _Evaluator(

/opt/conda/lib/python3.7/site-packages/trax/supervised/training.py in _model_with_metrics(model, eval_task)
   1047   """
   1048   return _model_with_ends(
-> 1049       model, eval_task.metrics, shapes.signature(eval_task.sample_batch)
   1050   )
   1051 

/opt/conda/lib/python3.7/site-packages/trax/supervised/training.py in _model_with_ends(model, end_layers, batch_signature)
   1029   metrics_layer = tl.Branch(*end_layers)
   1030   metrics_input_signature = model.output_signature(batch_signature)
-> 1031   _, _ = metrics_layer.init(metrics_input_signature)
   1032 
   1033   model_with_metrics = tl.Serial(model, metrics_layer)

/opt/conda/lib/python3.7/site-packages/trax/layers/base.py in init(self, input_signature, rng, use_cache)
    309       name, trace = self._name, _short_traceback(skip=3)
    310       raise LayerError(name, 'init', self._caller,
--> 311                        input_signature, trace) from None
    312 
    313   def init_from_file(self, file_name, weights_only=False, input_signature=None):

LayerError: Exception passing through layer Branch (in init):
  layer created in file [...]/trax/supervised/training.py, line 1029
  layer input shapes: (ShapeDtype{shape:(16, 1, 2), dtype:float32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16,), dtype:int32})

  File [...]/trax/layers/combinators.py, line 106, in init_weights_and_state
    outputs, _ = sublayer._forward_abstract(inputs)

LayerError: Exception passing through layer Parallel (in _forward_abstract):
  layer created in file [...]/trax/supervised/training.py, line 1029
  layer input shapes: (ShapeDtype{shape:(16, 1, 2), dtype:float32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16, 1, 2), dtype:float32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16,), dtype:int32})

  File [...]/jax/interpreters/partial_eval.py, line 411, in abstract_eval_fun
    lu.wrap_init(fun, params), avals, debug_info)

  File [...]/jax/interpreters/partial_eval.py, line 1252, in trace_to_jaxpr_dynamic
    jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)

  File [...]/jax/interpreters/partial_eval.py, line 1262, in trace_to_subjaxpr_dynamic
    ans = fun.call_wrapped(*in_tracers)

  File [...]/site-packages/jax/linear_util.py, line 166, in call_wrapped
    ans = self.f(*args, **dict(self.params, **kwargs))

  File [...]/site-packages/jax/linear_util.py, line 166, in call_wrapped
    ans = self.f(*args, **dict(self.params, **kwargs))

LayerError: Exception passing through layer Parallel (in pure_fn):
  layer created in file [...]/trax/supervised/training.py, line 1029
  layer input shapes: (ShapeDtype{shape:(16, 1, 2), dtype:float32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16, 1, 2), dtype:float32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16,), dtype:int32})

  File [...]/trax/layers/combinators.py, line 211, in forward
    sub_outputs, sub_state = layer.pure_fn(x, w, s, r, use_cache=True)

LayerError: Exception passing through layer WeightedCategoryAccuracy (in pure_fn):
  layer created in file [...]/<ipython-input-97-e22a181c30d5>, line 21
  layer input shapes: (ShapeDtype{shape:(16, 1, 2), dtype:float32}, ShapeDtype{shape:(16,), dtype:int32}, ShapeDtype{shape:(16,), dtype:int32})

  File [...]/trax/layers/base.py, line 743, in forward
    raw_output = self._forward_fn(inputs)

  File [...]/trax/layers/base.py, line 784, in _forward
    return f(*xs)

  File [...]/trax/layers/metrics.py, line 195, in f
    shapes.assert_same_shape(predictions, targets)

  File [...]/site-packages/trax/shapes.py, line 140, in assert_same_shape
    assert_shape_equals(array1, array2.shape)

  File [...]/site-packages/trax/shapes.py, line 134, in assert_shape_equals
    'Invalid shape {}; expected {}.'.format(array.shape, shape)

AssertionError: Invalid shape (16, 1); expected (16,).

Hey @Shaun_Basil_Almeida,
Once again, the issue lies in your implementation of train_model itself. You have passed [eval_task] as the argument for the eval_model parameter, whereas it should be passed as the argument for eval_tasks. Please take another look at the documentation here to understand what the different parameters mean. Let us know if this resolves your issue.
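
For reference, here is a minimal sketch of that wiring, based only on the Loop signature visible in your traceback (model, tasks, eval_model, eval_tasks, output_dir, ..., random_seed) and on the notebook already passing [eval_task] into train_model; treat your notebook's scaffold as the source of truth:

from trax.supervised import training

def train_model(classifier, train_task, eval_task, n_steps, output_dir):
    # eval_task already arrives here as a list, because the notebook calls
    # train_model(..., [eval_task], ...), so it goes straight to eval_tasks.
    # It is not wrapped in another list and not passed to eval_model.
    training_loop = training.Loop(
        classifier,              # the model to train
        train_task,              # the training task
        eval_tasks=eval_task,    # the evaluation task(s), already a list
        output_dir=output_dir,   # the output directory
        random_seed=31           # do not modify, for reproducibility and grading
    )
    training_loop.run(n_steps=n_steps)  # assuming the scaffold runs the loop here
    return training_loop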

Cheers,
Elemento