C4 Week 2 Assignment 2 Transfer Learning with Mobilenet

I thought I had this programming assignment correct, but when I run the test I get the errors below.

I am wondering if it is because I am adding the binary classification layers incorrectly. I have tried a few different methods, but currently I am trying to use:

x = tfl.GlobalAveragePooling2D()(base_model.output)

Is this the correct way to add the new layers to the base model we loaded without the top layers?


RuntimeError                              Traceback (most recent call last)
in
----> 1 model2 = alpaca_model(IMG_SIZE, data_augmentation)

in alpaca_model(image_shape, data_augmentation)
     48
     49     # use a prediction layer with one neuron (as a binary classifier only needs one)
---> 50     outputs = base_model.evaluate(inputs, x)
     51
     52     ### END CODE HERE

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
    106   def _method_wrapper(self, *args, **kwargs):
    107     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
--> 108       return method(self, *args, **kwargs)
    109
    110     # Running inside `run_distribute_coordinator` already.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict)
   1333     _keras_api_gauge.get_cell('evaluate').set(True)
   1334     version_utils.disallow_legacy_graph('Model', 'evaluate')
-> 1335     self._assert_compile_was_called()
   1336     self._check_call_args('evaluate')
   1337     _disallow_inside_tf_function('evaluate')

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _assert_compile_was_called(self)
   2567     # (i.e. whether the model is built and its inputs/outputs are set).
   2568     if not self._is_compiled:
-> 2569       raise RuntimeError('You must compile your model before '
   2570                          'training/testing. '
   2571                          'Use `model.compile(optimizer, loss)`.')

RuntimeError: You must compile your model before training/testing. Use model.compile(optimizer, loss).

Not quite.

The instructions give you these hints:
[screenshot of the assignment's hints omitted]

Other hints:

  • For the preprocessing and base_model, you’re passing the ‘x’ data through those layers.

  • Then you only have the last line, where you need to pass (x) through a global average pooling layer.
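Putting those hints together, the assembly might look like this sketch. The augmenter layers and the Dropout rate are assumptions standing in for the notebook's versions, and `weights=None` is used here only to keep the sketch self-contained; the assignment loads the pretrained `'imagenet'` weights.

```python
import tensorflow as tf
import tensorflow.keras.layers as tfl

IMG_SIZE = (160, 160)

def data_augmenter():
    # Hypothetical stand-in for the notebook's data_augmenter()
    return tf.keras.Sequential([tfl.RandomFlip("horizontal"),
                                tfl.RandomRotation(0.2)])

def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    input_shape = image_shape + (3,)
    # The assignment uses weights='imagenet'; None here avoids the download
    base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                                   include_top=False,
                                                   weights=None)
    base_model.trainable = False  # freeze the base layers

    inputs = tf.keras.Input(shape=input_shape)
    x = data_augmentation(inputs)                               # augment
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)  # scale to [-1, 1]
    x = base_model(x, training=False)    # call the base model like a layer
    x = tfl.GlobalAveragePooling2D()(x)  # (h, w, c) -> (c,) feature vector
    x = tfl.Dropout(0.2)(x)
    outputs = tfl.Dense(1)(x)            # one logit for binary classification
    return tf.keras.Model(inputs, outputs)
```

The key point is the line that was failing: the frozen base model is wired into the graph by calling it on the tensor, `base_model(x, training=False)`, not by calling `evaluate`, which is a training-loop method that requires a compiled model.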

OK, thank you. I was definitely making that part too hard. I guess I was having a mental block about how we were getting the pretrained model into our model. I actually had that code once, and had changed it thinking I wasn't doing it right.

I was also a bit confused about passing the data_augmenter function to the model, but I figured that out. It seems that setting x to the result of the data_augmentation function was what I needed there, and then the rest is pretty straightforward. It was definitely not clear to me how I was supposed to call it.

It passes all tests now, thanks for the help. I was getting way off track.

Right, the initial 'x' values are the augmented input values.
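For anyone else stuck on the same point, a tiny sketch of that wiring (the augmenter here is a hypothetical stand-in for the notebook's data_augmenter()):

```python
import tensorflow as tf
import tensorflow.keras.layers as tfl

# Hypothetical minimal augmenter; the notebook builds its own version
data_augmentation = tf.keras.Sequential([tfl.RandomFlip("horizontal"),
                                         tfl.RandomRotation(0.2)])

inputs = tf.keras.Input(shape=(160, 160, 3))
x = data_augmentation(inputs)  # the initial 'x' is the augmented input tensor
# ...preprocessing, the base model, pooling, and the Dense(1) head follow
```

Because the augmenter is just a stack of layers, calling it on the Input tensor makes augmentation the first step of the graph; the shape of `x` is unchanged.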

Yeah, that was definitely not clear to me, but others seem to have no problem with it, so I guess it was just my misunderstanding. Mixing the models in with the other code is a bit of a different style of programming for me, but I'm getting it.

Thanks again.