Trouble with UNet C4W3

I wonder what I'm doing wrong ): I keep getting the error below. I found that some people have gotten similar errors, but in their case it seems they weren't taking only the second element of the cblocks. However, I think my code is correct in that part, since I wrote the ublock calls like this: (ublock6, cblock3[1], n_filters*4)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [16], in <cell line: 6>()
      3 img_width = 128
      4 num_channels = 3
----> 6 unet = unet_model((img_height, img_width, num_channels))
      7 comparator(summary(unet), outputs.unet_model_output)

Input In [15], in unet_model(input_size, n_filters, n_classes)
     18 cblock1 = conv_block(inputs, n_filters)
     19 # Chain the first element of the output of each block to be the input of the next conv_block. 
     20 # Double the number of filters at each new step
---> 21 cblock2 = conv_block(cblock1, n_filters*2)
     22 cblock3 = conv_block(cblock2, n_filters*4)
     23 cblock4 = conv_block(clobck3, n_filters*8, dropout_prob=0.3) # Include a dropout_prob of 0.3 for this layer

Input In [7], in conv_block(inputs, n_filters, dropout_prob, max_pooling)
      4 """
      5 Convolutional downsampling block
      6 
   (...)
     13     next_layer, skip_connection --  Next layer and skip connection outputs
     14 """
     16 ### START CODE HERE
---> 17 conv = Conv2D(n_filters, # Number of filters
     18               3,   # Kernel size   
     19               activation='relu',
     20               padding='same',
     21               kernel_initializer='he_normal')(inputs)
     22 conv = Conv2D(n_filters, # Number of filters
     23               3,   # Kernel size   
     24               activation='relu',
     25               padding='same',
     26               kernel_initializer='he_normal')(conv)
     27 ### END CODE HERE
     28 
     29 # if dropout_prob > 0 add a dropout layer, with the variable dropout_prob as parameter

File /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     65 except Exception as e:  # pylint: disable=broad-except
     66   filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67   raise e.with_traceback(filtered_tb) from None
     68 finally:
     69   del filtered_tb

File /usr/local/lib/python3.8/dist-packages/keras/engine/input_spec.py:200, in assert_input_compatibility(input_spec, inputs, layer_name)
    197     raise TypeError(f'Inputs to a layer should be tensors. Got: {x}')
    199 if len(inputs) != len(input_spec):
--> 200   raise ValueError(f'Layer "{layer_name}" expects {len(input_spec)} input(s),'
    201                    f' but it received {len(inputs)} input tensors. '
    202                    f'Inputs received: {inputs}')
    203 for input_index, (x, spec) in enumerate(zip(inputs, input_spec)):
    204   if spec is None:

ValueError: Layer "conv2d_14" expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'Placeholder:0' shape=(None, 48, 64, 32) dtype=float32>, <tf.Tensor 'Placeholder_1:0' shape=(None, 96, 128, 32) dtype=float32>]

Take a more careful look at the error trace: you're failing well before you get to the "ublock" phase of things. Please take another look at the instructions and at the structure of the conv_block function itself. It returns two values, right? So its output is a 2-tuple. The way you wrote the code, you are passing that whole 2-tuple as the input to the next conv_block, so the Conv2D inside it receives two tensors instead of one. That is not what the diagram shows, right?
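For anyone else who lands here, a minimal sketch of the encoder chaining, assuming conv_block is the notebook's function that returns (next_layer, skip_connection) and that inputs and n_filters are already defined as in the template:

cblock1 = conv_block(inputs, n_filters)
cblock2 = conv_block(cblock1[0], n_filters * 2)   # chain only the first element (next_layer)
cblock3 = conv_block(cblock2[0], n_filters * 4)   # and keep doubling the filters
cblock4 = conv_block(cblock3[0], n_filters * 8, dropout_prob=0.3)

Indexing with [0] hands a single tensor to the next block, so each Conv2D inside receives exactly one input; the [1] element (the skip connection) is what you later pass to the ublocks.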


tysm!! It was in the instructions for the cblocks all the time.