Implementation of ResidualConvBlock

In the course on Stable Diffusion,
the model uses a custom layer, ResidualConvBlock,
implemented in the file diffusion_utilities.py.

The forward method contains the following code:
```python
class ResidualConvBlock(nn.Module):
    […]
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        […]
        else:
            # If not, apply a 1x1 convolutional layer to match dimensions before adding residual connection
            shortcut = nn.Conv2d([…])
            out = shortcut(x) + x2
```

I have a question about the "shortcut":
the shortcut layer is created from scratch, with random weights, on every call to forward.

How does it make sense to create a random, untrained layer on each call of forward and use it for inference?
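To make the issue concrete, here is a minimal, self-contained sketch (the `LeakyBlock` name and the 1-channel shapes are made up for illustration, not taken from the course code). It shows the two consequences of constructing `nn.Conv2d` inside `forward`: the weights are re-randomized on every call, and they are never registered as parameters of the module, so no optimizer ever sees them:

```python
import torch
import torch.nn as nn

# Illustrative module (not the course code): a layer constructed inside
# forward gets freshly initialized random weights on every call, and those
# weights are not registered as parameters of the module.
class LeakyBlock(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = nn.Conv2d(1, 1, kernel_size=1)  # new random layer each call
        return shortcut(x)

block = LeakyBlock()
x = torch.ones(1, 1, 4, 4)
y1, y2 = block(x), block(x)
print(torch.allclose(y1, y2))   # almost surely False: different weights each call
print(list(block.parameters())) # []: nothing for an optimizer to train
```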

I was puzzled by the same thing. I believe it's a bug. Somehow the rest of the model learns how to deal with this noise in the data.
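For what it's worth, the conventional fix is to build the 1x1 shortcut once in `__init__`, so its weights are registered as module parameters and updated during training. A hedged sketch of that pattern (the channel arguments and the Conv/BatchNorm/GELU body are illustrative assumptions, not the course's exact code):

```python
import torch
import torch.nn as nn

# Sketch of the standard residual-block pattern: every learnable layer,
# including the 1x1 shortcut, is created once in __init__ so that its
# weights are registered and trained, instead of re-randomized per call.
class ResidualConvBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.GELU(),
        )
        if in_channels == out_channels:
            self.shortcut = nn.Identity()
        else:
            # 1x1 conv to match channel counts for the residual connection
            self.shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shortcut(x) + self.conv(x)
```

With this version, `list(model.parameters())` includes the shortcut's weights, so they are trained along with everything else rather than being fresh noise on every forward pass.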