Week 3 Assignment 2: Image Segmentation with U-Net

Hi~
I'm getting an error when I run my code:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-23-4a1225a2a898> in <module>
      4 num_channels = 3
      5 
----> 6 unet = unet_model((img_height, img_width, num_channels))
      7 comparator(summary(unet), outputs.unet_model_output)

<ipython-input-22-76a1081f7160> in unet_model(input_size, n_filters, n_classes)
     21     cblock2 = conv_block(cblock1[0], n_filters*2)
     22     cblock3 = conv_block(cblock2[0], n_filters*4)
---> 23     cblock4 = conv_block(cblock3[0], n_filters*8, dropout=0.3) # Include a dropout of 0.3 for this layer
     24     # Include a dropout of 0.3 for this layer, and avoid the max_pooling layer
     25     cblock5 = conv_block(cblock4[0], n_filters*16, dropout=0.3, max_pooling=False)

**TypeError: conv_block() got an unexpected keyword argument 'dropout'**

but when I change dropout=0.3 to dropout_prob=0.3 and run the code again, the error looks like this:

ValueError                                Traceback (most recent call last)
<ipython-input-37-4a1225a2a898> in <module>
      4 num_channels = 3
      5 
----> 6 unet = unet_model((img_height, img_width, num_channels))
      7 comparator(summary(unet), outputs.unet_model_output)

<ipython-input-36-80c996b208fd> in unet_model(input_size, n_filters, n_classes)
     34     # Note that you must use the second element of the contractive block i.e before the maxpooling layer.
     35     # At each step, use half the number of filters of the previous block
---> 36     ublock7 = upsampling_block(ublock6[0], cblock3[1],  n_filters * 4)
     37     ublock8 = upsampling_block(ublock7[0], cblock2[1],  n_filters * 2)
     38     ublock9 = upsampling_block(ublock8[0], cblock1[1],  n_filters * 1)

<ipython-input-30-46fec608dc9c> in upsampling_block(expansive_input, contractive_input, n_filters)
     18                  (3,3),    # Kernel size
     19                  strides=(2,2),
---> 20                  padding='same')(expansive_input)
     21 
     22     # Merge the previous output and the contractive_input

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    924     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    925       return self._functional_construction_call(inputs, args, kwargs,
--> 926                                                 input_list)
    927 
    928     # Maintains info about the `Layer.call` stack.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1090       # TODO(reedwm): We should assert input compatibility after the inputs
   1091       # are casted, not before.
-> 1092       input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
   1093       graph = backend.get_graph()
   1094       # Use `self._name_scope()` to avoid auto-incrementing the name.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    194                          ', found ndim=' + str(ndim) +
    195                          '. Full shape received: ' +
--> 196                          str(x.shape.as_list()))
    197     # Check dtype.
    198     if spec.dtype is not None:

ValueError: Input 0 of layer conv2d_transpose_19 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [12, 16, 256]

I’m not sure which step is wrong.

“dropout_prob” is the correct argument to use.
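For reference, here is a minimal, self-contained sketch of a conv_block with that signature, together with the corrected contracting-path calls. It only illustrates the keyword name and the two return values; it is not the graded implementation, and the 96×128 input shape and filter counts are just example values consistent with the traceback above.

```python
from tensorflow.keras.layers import Input, Conv2D, Dropout, MaxPooling2D

# Minimal stand-in for the assignment's conv_block, just to show the keyword
# name (`dropout_prob`, not `dropout`) and the two return values.
def conv_block(inputs, n_filters=32, dropout_prob=0.0, max_pooling=True):
    conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                  kernel_initializer='he_normal')(inputs)
    conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                  kernel_initializer='he_normal')(conv)
    if dropout_prob > 0:
        conv = Dropout(dropout_prob)(conv)
    next_layer = MaxPooling2D(pool_size=(2, 2))(conv) if max_pooling else conv
    # [0] feeds the next encoder block, [1] is kept as the skip connection.
    return next_layer, conv

# Contracting-path calls with the corrected keyword (example shapes only):
inputs = Input(shape=(96, 128, 3))
n_filters = 32
cblock1 = conv_block(inputs, n_filters)
cblock2 = conv_block(cblock1[0], n_filters * 2)
cblock3 = conv_block(cblock2[0], n_filters * 4)
cblock4 = conv_block(cblock3[0], n_filters * 8, dropout_prob=0.3)
cblock5 = conv_block(cblock4[0], n_filters * 16, dropout_prob=0.3, max_pooling=False)
```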

When you call upsampling_block(), do not use square-bracket indexes on the ublock references (ublock6, ublock7, ublock8).


Exactly. The point is that on the “downsampling” path, the conv_block function has two outputs, so you need to select which one you want. But on the upsampling path, the function only has one output, so indexing it the way you did is a mistake: you end up peeling off the batch dimension of the tensor, which causes the dimension mismatch error you got.
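To make the shape issue concrete, here is a minimal, self-contained sketch of an upsampling block consistent with the traceback (Conv2DTranspose with a 3×3 kernel, stride 2, 'same' padding, followed by a merge). It returns a single tensor, so the caller passes the previous ublock directly. The toy input shapes below just mirror the [12, 16, 256] tensor from the error message and are not the assignment's exact values.

```python
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose, concatenate

def upsampling_block(expansive_input, contractive_input, n_filters=32):
    # Conv2DTranspose expects a 4-D (batch, height, width, channels) tensor;
    # indexing the previous ublock with [0] drops the batch dimension, which
    # is exactly the "expected min_ndim=4, found ndim=3" error above.
    up = Conv2DTranspose(n_filters, (3, 3), strides=(2, 2),
                         padding='same')(expansive_input)
    merge = concatenate([up, contractive_input], axis=3)
    conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                  kernel_initializer='he_normal')(merge)
    conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                  kernel_initializer='he_normal')(conv)
    return conv  # a single tensor, so callers never index it

# Call pattern: pass the previous ublock as-is, and take element [1]
# (the pre-pooling skip connection) from the matching conv_block output.
expansive = Input(shape=(12, 16, 256))  # like the tensor in the error, plus batch dim
skip = Input(shape=(24, 32, 128))       # example skip-connection shape
ublock = upsampling_block(expansive, skip, n_filters=128)
```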
