Practical Implementation of U-Net Filters

General question on U-Net filters. Am I correct that the individual values and arrangement of each element within each chosen U-Net filter are parameters to be learned, or are these values and arrangements user-defined, depending on what features the user wants to pick up?

If I am understanding your question correctly, the answer is the same as it always is for neural networks: the architecture of the network (how many layers, how many units in each layer, all the convolution attributes like filter size, padding, and stride, how many pooling layers …) is chosen by the designer; those are the hyperparameters. But the actual values at each position in the various filters specified by the architecture (the “parameters”) are learned through back propagation.
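
A minimal Keras sketch of that distinction (the layer sizes and values below are illustrative, not taken from any particular assignment): the arguments to `Conv2D` are hyperparameters you choose, while the filter values themselves are trainable weights.

```python
import tensorflow as tf

# Hyperparameters: the number of filters, the filter size, the stride,
# and the padding are all chosen by the designer when defining the layer.
conv = tf.keras.layers.Conv2D(
    filters=64,      # how many filters (chosen)
    kernel_size=3,   # 3x3 filters (chosen)
    strides=1,       # stride (chosen)
    padding="same",  # padding scheme (chosen)
)

# Parameters: the individual values inside each 3x3 filter start out
# randomly initialized and are then learned through back propagation.
dummy = tf.random.normal((1, 128, 128, 3))  # dummy input to build the layer
_ = conv(dummy)
kernel, bias = conv.weights
print(kernel.shape)  # (3, 3, 3, 64): every one of these values is learned
```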

Thanks very much, Paul. So in the case of U-Net filters used in the transpose convolutions, these are also learned via back propagation! Separately, it seems it’s the skip connections that restore the spatial features previously lost during downsampling.

Yes, any parameters (coefficients) for the transpose convolutions are also learned by back propagation.
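
As a hedged illustration (the specific sizes here are made up for the example), a Keras `Conv2DTranspose` layer carries a trainable kernel and bias, exactly like a normal convolution:

```python
import tensorflow as tf

# A transpose convolution for upsampling: its filter values are trainable
# variables, learned by back propagation like any other layer's weights.
upconv = tf.keras.layers.Conv2DTranspose(
    filters=32, kernel_size=3, strides=2, padding="same")

x = tf.random.normal((1, 64, 64, 64))  # dummy feature map
y = upconv(x)
print(y.shape)                        # (1, 128, 128, 32): spatial size doubled
print(len(upconv.trainable_weights))  # 2: the kernel and the bias
```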

That is a good description of the purpose of the skip connections. Note that the architecture of the skip connections is also a choice made by the system designer. The skip connections themselves do not have any parameters, but they are part of the back propagation process: gradients are propagated across all connections in the model.
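
To make that concrete, here is a minimal sketch of one skip connection in the Keras functional API (the shapes are illustrative, not the assignment’s): the `Concatenate` step has no weights of its own, but gradients still flow through it back into the encoder layers.

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(128, 128, 3))

# Encoder (downsampling path)
enc = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
pooled = layers.MaxPooling2D(2)(enc)  # spatial detail is lost here
bottleneck = layers.Conv2D(64, 3, padding="same", activation="relu")(pooled)

# Decoder (upsampling path)
up = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(bottleneck)

# Skip connection: the concatenation has no parameters of its own,
# but gradients propagate through it back into the encoder.
merged = layers.Concatenate()([up, enc])
outputs = layers.Conv2D(1, 1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs, outputs)
```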

Thanks, Paul, for the clarification! I will play around with the optional parts of the assignments, since the back propagation sections are not a prerequisite for most of them. That may be why I missed the reinforcement of these key concepts.

Well, since we have now switched to using TF and Keras, we no longer have to worry about back propagation: the packages handle all the gradient calculations, the application of the gradients, convergence, and all of that invisibly for you. You don’t even have to specify a “learning rate” or worry about any of that.
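
For instance, here is a hedged sketch with a toy stand-in model and random data (nothing here comes from the assignment): `compile()` and `fit()` take care of the gradient computation, and the optimizer’s default learning rate is used unless you override it.

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: 16 tiny RGB images with binary segmentation masks.
x_train = np.random.rand(16, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 2, (16, 32, 32, 1)).astype("float32")

# A toy stand-in for a U-Net-style model (illustrative only).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
])

# No manual back propagation, gradients, or learning rate needed:
# compile() uses Adam's default learning rate, and fit() computes and
# applies the gradients internally.
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=2, verbose=0)
```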