C3W3 U-net conv2d block for loop

1:13 in the video

The conv2d_blocks have two convolutional layers which are created by a for loop. Is this possible because the model is stored separately, as a Model object, from the conv2d_block code? Doesn't the garbage collector free the memory allocated inside the for loop after the loop finishes?

As far as I understand, when the model is built, it calls the conv2d_block function, and the conv2d blocks are created at that moment as part of the model.


If you notice, the for loop's indentation places it inside the def statement, which is what allows the iterative layer construction.

When a for loop is placed inside a def statement, which defines a function, it runs each time that function is called on an input tensor.

In conv2d_block, the loop iterates a fixed number of times, and each iteration creates a new convolutional layer and applies it to the tensor. The layers are not garbage-collected after the loop, because the returned output tensor keeps references to them in the computation graph, and the Model object retains them.
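To illustrate, here is a minimal sketch of such a block using the Keras functional API. This is not the course's exact code; the names `conv2d_block`, `n_filters`, and `n_convs` are illustrative assumptions.

```python
import tensorflow as tf

def conv2d_block(input_tensor, n_filters=64, kernel_size=3, n_convs=2):
    # Illustrative sketch, not the course's exact implementation.
    x = input_tensor
    for _ in range(n_convs):
        # Each iteration creates a NEW Conv2D layer object and applies it.
        # The layer is not garbage-collected after the loop: the output
        # tensor (and later the Model) keeps a reference to it.
        x = tf.keras.layers.Conv2D(n_filters, kernel_size,
                                   activation='relu', padding='same')(x)
    return x

inputs = tf.keras.Input(shape=(32, 32, 3))
outputs = conv2d_block(inputs)
model = tf.keras.Model(inputs, outputs)
```

After construction, the two Conv2D layers appear in `model.layers`, which shows they survive the loop because the Model holds them, not the loop's local variables.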
