Week 3 assignment 2

```python
# UNQ_C3
# GRADED FUNCTION: unet_model
def unet_model(input_size=(96, 128, 3), n_filters=32, n_classes=23):
    """
    Unet model

    Arguments:
        input_size -- Input shape
        n_filters -- Number of filters for the convolutional layers
        n_classes -- Number of output classes
    Returns:
        model -- tf.keras.Model
    """
    inputs = Input(input_size)
    # Contracting Path (encoding)
    # Add a conv_block with the inputs of the unet_model and n_filters
    ### START CODE HERE
    cblock1 = conv_block(inputs, n_filters)
    # Chain the first element of the output of each block to be the input of the next conv_block.
    # Double the number of filters at each new step
    cblock2 = conv_block(cblock1, n_filters * 2)
    cblock3 = conv_block(cblock2, 4 * n_filters)
    cblock4 = conv_block(cblock3, 8 * n_filters, dropout=0.3)  # Include a dropout of 0.3 for this layer
    # Include a dropout of 0.3 for this layer, and avoid the max_pooling layer
    cblock5 = conv_block(cblock4, 16 * n_filters, dropout=0.3, max_pooling=False)
    ### END CODE HERE

    # Expanding Path (decoding)
    # Add the first upsampling_block.
    # Use the cblock5[0] as expansive_input and cblock4[1] as contractive_input and n_filters * 8
    ### START CODE HERE
    ublock6 = upsampling_block(cblock5[0], cblock4[1], n_filters * 8)
    # Chain the output of the previous block as expansive_input and the corresponding contractive block output.
    # Note that you must use the second element of the contractive block, i.e. before the maxpooling layer.
    # At each step, use half the number of filters of the previous block
    ublock7 = upsampling_block(cblock4, cblock3, n_filters * 4)
    ublock8 = upsampling_block(cblock3, cblock2, n_filters * 2)
    ublock9 = upsampling_block(cblock2, cblock1, n_filters)
    ### END CODE HERE

    conv9 = Conv2D(n_filters,
                   3,
                   activation='relu',
                   padding='same',
                   kernel_initializer='he_normal')(ublock9)

    # Add a Conv2D layer with n_classes filters, kernel size of 1 and 'same' padding
    ### START CODE HERE
    conv10 = Conv2D(n_classes, 1, padding='same')(conv9)
    ### END CODE HERE

    model = tf.keras.Model(inputs=inputs, outputs=conv10)

    return model
```

I am getting this error:

```
ValueError                                Traceback (most recent call last)
in
      4 num_channels = 3
      5
----> 6 unet = unet_model((img_height, img_width, num_channels))
      7 comparator(summary(unet), outputs.unet_model_output)

in unet_model(input_size, n_filters, n_classes)
     19     # Chain the first element of the output of each block to be the input of the next conv_block.
     20     # Double the number of filters at each new step
---> 21     cblock2 = conv_block(cblock1, n_filters * 2)(cblock1)
     22     cblock3 = conv_block(cblock2, 4 * n_filters)
     23     cblock4 = conv_block(cblock3, 8 * n_filters, dropout=0.3)  # Include a dropout of 0.3 for this layer

in conv_block(inputs, n_filters, dropout_prob, max_pooling)
     19                   activation='relu',
     20                   padding='same',
---> 21                   kernel_initializer='he_normal')(inputs)
     22     conv = Conv2D(n_filters,  # Number of filters
     23                   3,          # Kernel size

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    924     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    925       return self._functional_construction_call(inputs, args, kwargs,
--> 926                                                 input_list)
    927
    928   # Maintains info about the Layer.call stack.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1090     # TODO(reedwm): We should assert input compatibility after the inputs
   1091     # are casted, not before.
-> 1092     input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
   1093     graph = backend.get_graph()
   1094     # Use self._name_scope() to avoid auto-incrementing the name.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    156           str(len(input_spec)) + ' inputs, '
    157           'but it received ' + str(len(inputs)) +
--> 158           ' input tensors. Inputs received: ' + str(inputs))
    159   for input_index, (x, spec) in enumerate(zip(inputs, input_spec)):
    160     if spec is None:

ValueError: Layer conv2d_59 expects 1 inputs, but it received 2 input tensors. Inputs received: [<tf.Tensor 'max_pooling2d_20/MaxPool:0' shape=(None, 48, 64, 32) dtype=float32>, <tf.Tensor 'conv2d_58/Relu:0' shape=(None, 96, 128, 32) dtype=float32>]
```

Hey, so I had almost the exact same thing as you.

If you look in the instructions directly above cblock2, it says "Chain the FIRST (emphasis mine) element of the output of each block..." which, given that in ublock6 you used `cblock5[0]`, led me to realize that the cblocks output a list with multiple elements. As per the quoted instructions, you need the first element: `cblock1[0]`, `cblock2[0]`, and so on for the subsequent cblocks.

Also, I had a problem with `dropout=0.3`. It wanted `dropout_prob=0.3`, like we did in an earlier function. Does that help?

3 Likes
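To make the indexing concrete, here is a minimal sketch (pure Python, no TensorFlow; the `conv_block` below is a stand-in, not the assignment's real implementation). Like the real `conv_block`, it returns a pair `(next_layer, skip_connection)`, so the chained input must be element `[0]`, and the dropout keyword is `dropout_prob`:

```python
def conv_block(inputs, n_filters, dropout_prob=0.0, max_pooling=True):
    # Stand-in: the second element is the pre-pooling conv output (the skip),
    # the first element is what feeds the next encoder block.
    skip_connection = ("conv", inputs, n_filters)
    next_layer = ("pool", skip_connection) if max_pooling else skip_connection
    return next_layer, skip_connection

n_filters = 32
cblock1 = conv_block("inputs", n_filters)
cblock2 = conv_block(cblock1[0], n_filters * 2)    # index [0], not the whole tuple
cblock3 = conv_block(cblock2[0], n_filters * 4)
cblock4 = conv_block(cblock3[0], n_filters * 8, dropout_prob=0.3)
cblock5 = conv_block(cblock4[0], n_filters * 16, dropout_prob=0.3, max_pooling=False)
```

Passing `cblock1` itself (the whole tuple) is exactly what triggers the "expects 1 inputs, but it received 2 input tensors" error above, because Keras sees two tensors where one layer input is expected.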

In addition to @ChrisML's valuable observations, note that the upsampling blocks take two inputs. The first upsampling block is different, but the rest of them take as their first argument the output of the previous upsampling block. Note that the upsampling blocks produce just one output, so you don't need to index them to select the one you want.

3 Likes
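A sketch of the decoder wiring this reply describes (again with stand-in functions, not the assignment's real code): only the first block reads `cblock5[0]`; each later block takes the previous ublock's output directly, with no indexing, plus the matching cblock's second element.

```python
def upsampling_block(expansive_input, contractive_input, n_filters):
    # Stand-in: a real block would upsample, concatenate, and convolve;
    # here we just record the wiring to show the chaining pattern.
    return ("up", expansive_input, contractive_input, n_filters)

# Stand-ins for the encoder outputs, each a (next_layer, skip) pair:
cblock1, cblock2, cblock3, cblock4, cblock5 = [
    (f"next{i}", f"skip{i}") for i in range(1, 6)
]

n_filters = 32
ublock6 = upsampling_block(cblock5[0], cblock4[1], n_filters * 8)
ublock7 = upsampling_block(ublock6,    cblock3[1], n_filters * 4)  # no [0] on ublock6
ublock8 = upsampling_block(ublock7,    cblock2[1], n_filters * 2)
ublock9 = upsampling_block(ublock8,    cblock1[1], n_filters)
```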

Thanks @ChrisML and @paulinpaloalto for the support. After some work, I found the indexes to be wrong and corrected them.

1 Like

For upsampling blocks other than the first one, the first argument we pass is the output of the previous upsampling block, i.e. `ublock7 = upsampling_block(ublock6, None, n_filters * 4)`.
What should we pass as the second argument?
I am a bit confused about that, since the instructions say to pass the output of the corresponding contractive block. Should we pass the whole output, or just one element of it, given that they also say you must use the second element of the contractive block?

The contractive blocks each have two outputs. The first one is the input to the next contractive block, and the second one is the input to the corresponding upsampling block. Please see the big diagram of U-Net that shows how the "skip layers" work. That's what is happening here: you are building the logic that implements that diagram, right?

There are also comments in the logic for the contractive blocks that explain what is going on.

1 Like
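One way to see why the second (pre-pooling) element is the right skip connection is to walk the spatial sizes through the network. This toy shape walk-through (pure Python, assuming the assignment's 96x128 input and stride-2 pooling/upsampling) shows that each upsampled decoder tensor exactly matches the pre-pooling size of its corresponding encoder block, which is what makes the concatenation in the diagram possible:

```python
h, w = 96, 128
skips = []
for _ in range(4):            # cblock1..cblock4: conv, then 2x2 max-pool
    skips.append((h, w))      # second output: spatial size BEFORE pooling
    h, w = h // 2, w // 2     # first output: pooled, fed to the next block
# cblock5 has no pooling, so the decoder starts from (h, w) = (6, 8)
for skip in reversed(skips):  # ublock6..ublock9
    h, w = h * 2, w * 2       # stride-2 transposed conv doubles the size
    assert (h, w) == skip     # matches the pre-pooling skip exactly
```

The pooled (first) outputs are one stride-2 step smaller, which is why concatenating them instead raises a shape mismatch.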