C1_W3_Assignment 4.1 Training on a Large Dataset

Dear Supervisor,

I've tried running the code provided below on both my local machine and the Coursera platform.

# Run this on your local machine only.
# It may cause the kernel to die if run on the Coursera platform.

import json

# HOME_DIR, util (with VolumeDataGenerator), and model are defined in earlier cells of the notebook.
base_dir = HOME_DIR + "processed/"

with open(base_dir + "config.json") as json_file:
    config = json.load(json_file)

# Get generators for the training and validation sets
train_generator = util.VolumeDataGenerator(config["train"], base_dir + "train/",
                                           batch_size=3, dim=(160, 160, 16), verbose=0)
valid_generator = util.VolumeDataGenerator(config["valid"], base_dir + "valid/",
                                           batch_size=3, dim=(160, 160, 16), verbose=0)

steps_per_epoch = 20
n_epochs = 10
validation_steps = 20

model.fit_generator(generator=train_generator,
                    steps_per_epoch=steps_per_epoch,
                    epochs=n_epochs,
                    use_multiprocessing=True,
                    validation_data=valid_generator,
                    validation_steps=validation_steps)

# Run this line if you want to save the weights of your trained model from section 4.1
# model.save_weights(base_dir + 'my_model_pretrained.hdf5')

However, the same error occurs in both environments:

Epoch 1/10

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>()
     17         use_multiprocessing=True,
     18         validation_data=valid_generator,
---> 19         validation_steps=validation_steps)
     20 
     21 # run this cell if you to save the weights of your trained model in cell section 4.1

/opt/conda/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name + '` call to the ' +
     90                               'Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/opt/conda/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1730             use_multiprocessing=use_multiprocessing,
   1731             shuffle=shuffle,
---> 1732             initial_epoch=initial_epoch)
   1733 
   1734     @interfaces.legacy_generator_methods_support

/opt/conda/lib/python3.6/site-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
    218                                              sample_weight=sample_weight,
    219                                              class_weight=class_weight,
--->  220                                             reset_metrics=False)
    221 
    222             outs = to_list(outs)

/opt/conda/lib/python3.6/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
   1512         ins = x + y + sample_weights
   1513         self._make_train_function()
---> 1514         outputs = self.train_function(ins)
   1515 
   1516         if reset_metrics:

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py in __call__(self, inputs)
   3474 
   3475       fetched = self._callable_fn(*array_vals,
---> 3476                                   run_metadata=self.run_metadata)
   3477       self._call_fetch_callbacks(fetched[-len(self._fetches):])
   3478       output_structure = nest.pack_sequence_as(

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/client/session.py in __call__(self, *args, **kwargs)
   1470           ret = tf_session.TF_SessionRunCallable(self._session._session,
   1471                                                  self._handle, args,
---> 1472                                                  run_metadata_ptr)
   1473         if run_metadata:
   1474           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

InvalidArgumentError: Incompatible shapes: [3] vs. [16]
	 [[{{node training/Adam/gradients/loss/activation_15_loss/soft_dice_loss/weighted_loss/mul_grad/Mul_1}}]]

Could you please help? Many thanks!

Best Regards
Zuobin

Hello Wu,

Is your issue resolved?

Your error log points to two issues. One is an input shape mismatch.

Next, there also seems to be an error in some previous cells: you are calling some function with incorrect arguments.

In case your issue is still not resolved, you can send your notebook via personal DM. Do not post code in a public post; it's against the community guidelines.

Regards
DP

Hi DP,

Thanks for the reply! I appreciate it!

The issue is still there. I'm working on it now. Sorry for posting code in the public post.

What do you mean by a personal DM?

Regards
Zuobin

Click on my name, and then click Message to send me your code.

Hi DP,

Did you receive the code?

Regards
Zuobin

Hey Wu,

I am sorry, I was travelling. I have just downloaded your notebook. Please give me some time; I will get back to you.

Regards
DP

Hello Wu,

  1. For the one-hot encoding code, you need to apply the TensorFlow utility to your code. You can refer to the link below for a hint (see the first sketch after this list):
    tf.keras.utils.to_categorical  |  TensorFlow v1.15.0
    One-hot encode the categories.
    This adds a 4th dimension, 'num_classes':
    (output_x, output_y, output_z, num_classes)

  2. For the standardization code, apply np.mean rather than image_slice.mean, then
    subtract the mean from image_slice (see the second sketch after this list).

  3. For
    UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
    def soft_dice_loss(y_true, y_pred, axis=(1, 2, 3),
    epsilon=0.00001):

your code for dice_denominator is incorrect: rather than using K.square, use * (element-wise multiplication) in the dice_denominator code (see the soft dice sketch after this list).

  4. For the same graded cell, you have applied axis=0 in dice_loss; kindly remove it, since, as you can see in the function definition, axis=(1, 2, 3).

  5. For
    UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
    def compute_class_sens_spec(pred, label, class_num)

to calculate tp, tn, fp and fn, since this is an array (matrix) calculation and we use a tuple, put the pred values first and then the label values (see the last sketch after this list).
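
To make point 1 concrete, here is a rough sketch of the one-hot encoding step. It is not the graded code; the array y, its shape, and num_classes = 4 below are placeholder assumptions:

import numpy as np
import tensorflow as tf

# Hypothetical label sub-volume with integer class labels 0..3 (placeholder shape and class count)
y = np.random.randint(0, 4, size=(160, 160, 16))
num_classes = 4

# One-hot encode the categories; this adds a 4th 'num_classes' dimension:
# (output_x, output_y, output_z) -> (output_x, output_y, output_z, num_classes)
y_one_hot = tf.keras.utils.to_categorical(y, num_classes=num_classes)
print(y_one_hot.shape)  # (160, 160, 16, 4)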
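
For point 2, a minimal sketch of the idea, assuming image_slice is a 2-D slice of one MRI channel (the placeholder array below is not the notebook's data):

import numpy as np

# Placeholder 2-D slice of one channel at one depth
image_slice = np.random.rand(160, 160)

# Use np.mean on the slice rather than calling image_slice.mean, then subtract it
centered = image_slice - np.mean(image_slice)
# The notebook's standardize function may also divide by the standard deviation
standardized = centered / np.std(image_slice)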
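
For points 3 and 4, a generic soft dice loss sketch that keeps the sums over the spatial axes axis=(1, 2, 3) and uses element-wise multiplication instead of K.square in the denominator. This is an illustration of the shape logic, not the graded solution:

from keras import backend as K

def soft_dice_loss_sketch(y_true, y_pred, axis=(1, 2, 3), epsilon=0.00001):
    # Sum over the spatial axes only, so each class keeps its own dice coefficient
    dice_numerator = 2. * K.sum(y_true * y_pred, axis=axis) + epsilon
    # y_true * y_true and y_pred * y_pred instead of K.square(...)
    dice_denominator = K.sum(y_true * y_true, axis=axis) + K.sum(y_pred * y_pred, axis=axis) + epsilon
    dice_loss = 1 - K.mean(dice_numerator / dice_denominator)
    return dice_loss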
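
For point 5, a sketch of counting tp, tn, fp and fn with NumPy, writing the pred comparison first and the label comparison second in each expression. It assumes pred and label are already 0/1 masks for the class of interest, which may differ from the notebook's exact setup:

import numpy as np

def confusion_counts_sketch(pred, label):
    # pred and label are assumed to be 0/1 masks for one class; pred comes first in each comparison
    tp = np.sum((pred == 1) & (label == 1))
    tn = np.sum((pred == 0) & (label == 0))
    fp = np.sum((pred == 1) & (label == 0))
    fn = np.sum((pred == 0) & (label == 1))
    return tp, tn, fp, fn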

Make these corrections and let me know once your issue is resolved. Again, sorry for the delay in replying.

Regards
DP

Hi DP,

The issue is resolved. The main problem was the axis setting, as you mentioned in point 4.

Thanks for your time! I appreciate it from the bottom of my heart!

Have a nice trip!

Best Regards
Zuobin
