Course 4 Week 3 U-Net assignment: inconsistent figures and a question about the upsampling block

I see two issues that are really confusing me; I'd appreciate any clarification:

  1. There are inconsistencies between the figures. In Figure 2, the green arrows in the decoder part show transposed convolutions, while in Figure 4 the green arrows show max pooling. Figure 4 must be wrong.
  2. In the code for `upsampling_block`, the order of steps differs from the graph in Figure 2.
    In the figure the order is: concatenate -> Conv2D -> Conv2D -> Conv2DTranspose. In the code, the order is: Conv2DTranspose -> Conv2D -> Conv2D -> concatenate.

Hey @pkhateri,
Welcome to the community. Thanks for pointing out the inconsistency with Figure 4. I will report it to the team for updates.

As for your second query, let’s work through it using the Decoder Block only. For now, assume the inconsistency has been fixed, i.e., the MaxPool2D arrows in Figure 4 have been replaced with Conv2DTranspose, as will be done.

Here, in the unet_model, the first call to upsampling_block is fed cblock5[0] (the bottleneck output) and cblock4[1] (the Block 4 skip connection). So we first need to perform a transposed convolution on cblock5[0], then merge its output with cblock4[1], and then apply Conv2D twice. So the actual order is:
Conv2D Transpose -> Concatenate -> Conv2D -> Conv2D.
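To make the order concrete, here is a shape-only sketch of that step in plain Python (no TensorFlow needed). It assumes the usual U-Net conventions: a stride-2 Conv2DTranspose doubles height and width, a "same"-padded Conv2D keeps them, and concatenation stacks channels. The example sizes are illustrative, not taken from the assignment.

```python
# Shape-only walk-through of one upsampling step. Sizes below are
# illustrative stand-ins for cblock5[0] and cblock4[1], not the
# assignment's actual dimensions.

def conv2d_transpose_shape(shape, n_filters):
    h, w, _ = shape
    return (2 * h, 2 * w, n_filters)        # stride-2 upsampling doubles H, W

def concat_shape(a, b):
    # Spatial dims must already match -- this is why the transposed
    # convolution has to come BEFORE the concatenation.
    assert a[:2] == b[:2], "spatial dims must match before concatenating"
    return (a[0], a[1], a[2] + b[2])        # channels stack along the last axis

def conv2d_shape(shape, n_filters):
    return (shape[0], shape[1], n_filters)  # 'same' padding keeps H, W

bottleneck = (6, 8, 1024)    # stand-in for cblock5[0]
skip       = (12, 16, 512)   # stand-in for cblock4[1]

up     = conv2d_transpose_shape(bottleneck, 512)       # (12, 16, 512)
merged = concat_shape(up, skip)                        # (12, 16, 1024)
out    = conv2d_shape(conv2d_shape(merged, 512), 512)  # (12, 16, 512)
print(out)  # (12, 16, 512)
```

Note that swapping the first two steps would fail outright: the 6x8 bottleneck cannot be concatenated with the 12x16 skip connection until it has been upsampled.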

If you look at the corrected Figure 4, this is exactly the order it follows:
Green (Conv2D Transpose) -> Concatenate -> Blue (Conv2D) -> Blue (Conv2D),
and it is the same order coded in upsampling_block. I hope this helps.
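The same point can be checked with NumPy on toy tensors: concatenating the bottleneck with the skip connection only works once their spatial dimensions agree. Here a stride-2 transposed convolution is imitated by repeating each pixel 2x2 (ignoring the channel reduction a real Conv2DTranspose would also apply); all sizes are made up for illustration.

```python
import numpy as np

# Toy tensors standing in for the bottleneck output and the skip
# connection, in (batch, height, width, channels) layout.
# Sizes are illustrative only.
bottleneck = np.zeros((1, 6, 8, 1024))
skip       = np.zeros((1, 12, 16, 512))

# Concatenating first would raise an error: 6x8 vs 12x16 spatial dims.
# After a stride-2 transposed convolution the bottleneck becomes 12x16;
# we imitate that upsampling by repeating each pixel 2x2.
up = bottleneck.repeat(2, axis=1).repeat(2, axis=2)  # (1, 12, 16, 1024)

# Now the spatial dims match, so channels can be stacked on the last axis.
merged = np.concatenate([up, skip], axis=-1)
print(merged.shape)  # (1, 12, 16, 1536)
```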