C4:W3:A2 exercise 3

I can't seem to understand my error in this step.

Your implementation of ublock7, ublock8, and ublock9 is wrong. In the notebook, we have:

```python
# Chain the output of the previous block as expansive_input and the corresponding contractive block output.
# Note that you must use the second element of the contractive block i.e before the maxpooling layer.
```

It says you must use the second element of the contractive block, but it doesn't say anything about using the first or second element of the expansive_input. So why are you grabbing the first element (using [0])? Think about it.

Ok understood. Thank you.

I have tried using the 2nd element of the expansive_input for blocks 7, 8, and 9, but I still get the following error:

```
ValueError                                Traceback (most recent call last)
Input In [48], in <cell line: 6>()
      3 img_width = 128
      4 num_channels = 3
----> 6 unet = unet_model((img_height, img_width, num_channels))
      7 comparator(summary(unet), outputs.unet_model_output)

Input In [47], in unet_model(input_size, n_filters, n_classes)
     32 ublock6 = upsampling_block(cblock5[0], cblock4[1], n_filters*8)
     33 # Chain the output of the previous block as expansive_input and the corresponding contractive block output.
     34 # Note that you must use the second element of the contractive block i.e before the maxpooling layer.
     35 # At each step, use half the number of filters of the previous block
---> 36 ublock7 = upsampling_block(ublock6[1], cblock3[1], n_filters*4)
     37 ublock8 = upsampling_block(ublock7[1], cblock2[1], n_filters*2)
     38 ublock9 = upsampling_block(ublock8[1], cblock1[1], n_filters)
```

```
Input In [33], in upsampling_block(expansive_input, contractive_input, n_filters)
      4 """
      5 Convolutional upsampling block
      6
   (...)
     12 conv -- Tensor output
     13 """
     15 ### START CODE HERE
---> 16 up = Conv2DTranspose(
     17     n_filters,    # number of filters
     18     3,            # Kernel size
     19     strides=2,
     20     padding='same')(expansive_input)
     22 # Merge the previous output and the contractive_input
     23 merge = concatenate([up, contractive_input], axis=3)
```

```
File /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     65 except Exception as e:  # pylint: disable=broad-except
     66   filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67   raise e.with_traceback(filtered_tb) from None
     68 finally:
     69   del filtered_tb

File /usr/local/lib/python3.8/dist-packages/keras/engine/input_spec.py:228, in assert_input_compatibility(input_spec, inputs, layer_name)
    226 ndim = x.shape.rank
    227 if ndim is not None and ndim < spec.min_ndim:
--> 228   raise ValueError(f'Input {input_index} of layer "{layer_name}" '
    229                    'is incompatible with the layer: '
    230                    f'expected min_ndim={spec.min_ndim}, '
    231                    f'found ndim={ndim}. '
    232                    f'Full shape received: {tuple(shape)}')
    233 # Check dtype.
    234 if spec.dtype is not None:

ValueError: Input 0 of layer "conv2d_transpose_31" is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: (12, 16, 256)
```

I think you missed the point of what Saif was saying in his response. Look at the definition of the contracting block function: it returns two outputs, right? That’s why you have to choose one to use as input to the upsampling block or the next contracting block.
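To make the two-output shape of the contracting block concrete, here is a minimal numpy stand-in (an assumption for illustration, not the course's actual `conv_block`): it returns a tuple `(next_layer, skip_connection)`, which is exactly why callers must index it with `[0]` or `[1]`.

```python
import numpy as np

def conv_block_sketch(x):
    # Stand-in for the assignment's contracting block (assumed names):
    # skip_connection is the pre-pooling output, kept for the decoder;
    # next_layer stands in for the max-pooled output fed downward.
    skip_connection = x
    next_layer = x[:, ::2, ::2, :]  # crude 2x spatial downsample stand-in
    return next_layer, skip_connection

x = np.zeros((1, 32, 32, 3))        # (batch, height, width, channels)
nxt, skip = conv_block_sketch(x)
print(nxt.shape)   # (1, 16, 16, 3) -- goes to the next contracting block
print(skip.shape)  # (1, 32, 32, 3) -- goes to the matching upsampling block
```

Because the return value is a tuple, `cblock4[1]` picks out the skip connection, and `cblock5[0]` picks out the downsampled path.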

But now look at the definition of the upsampling block function: how many values does it return? Just one tensor, right? So what happens when you index it with [0] or [1]? You’re just stripping off one of the dimensions, which is why it throws a … wait for it … dimension mismatch error.
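You can see the dimension-stripping effect with plain numpy (shapes below are assumed for illustration; the real objects are Keras tensors, but indexing behaves the same way):

```python
import numpy as np

# The upsampling block returns ONE 4-D tensor: (batch, height, width, channels).
# Indexing it with [0] or [1] slices away the batch dimension, leaving ndim=3,
# which is exactly what the "expected min_ndim=4, found ndim=3" error reports.
ublock6 = np.zeros((2, 12, 16, 256))
print(ublock6.ndim)     # 4
print(ublock6[1].ndim)  # 3 -- one dimension stripped off

# So the fix is to pass the block's output directly, with no index, e.g.
# ublock7 = upsampling_block(ublock6, cblock3[1], n_filters * 4)
```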
