Hi, I want to ask why the generator and discriminator blocks have a different number of layers in the "My First GAN" example.

Why didn't we use the same set of layers and activation functions for both the generator and the discriminator?

**The generator block is as follows**

```
def get_generator_block(input_dim, output_dim):
    '''
    Function for returning a block of the generator's neural network
    given input and output dimensions.
    Parameters:
        input_dim: the dimension of the input vector, a scalar
        output_dim: the dimension of the output vector, a scalar
    Returns:
        a generator neural network layer, with a linear transformation
        followed by a batch normalization and then a relu activation
    '''
    return nn.Sequential(
        # Hint: Replace all of the "None" with the appropriate dimensions.
        # The documentation may be useful if you're less familiar with PyTorch:
        # https://pytorch.org/docs/stable/nn.html.
        #### START CODE HERE ####
        nn.Linear(input_dim, output_dim),
        nn.BatchNorm1d(output_dim),
        nn.ReLU(inplace=True)
        #### END CODE HERE ####
    )
```

**The discriminator block is as follows**

```
def get_discriminator_block(input_dim, output_dim):
    '''
    Discriminator Block
    Function for returning a neural network of the discriminator given input and output dimensions.
    Parameters:
        input_dim: the dimension of the input vector, a scalar
        output_dim: the dimension of the output vector, a scalar
    Returns:
        a discriminator neural network layer, with a linear transformation
        followed by an nn.LeakyReLU activation with negative slope of 0.2
        (https://pytorch.org/docs/master/generated/torch.nn.LeakyReLU.html)
    '''
    return nn.Sequential(
        #### START CODE HERE ####
        nn.Linear(input_dim, output_dim),
        nn.LeakyReLU(negative_slope=0.2, inplace=True)
        #### END CODE HERE ####
    )
```
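To make the difference concrete, here is a minimal runnable sketch (assuming PyTorch is installed) that feeds the same dummy batch through both blocks. The dimensions (a batch of 4 vectors of size 10, mapped to size 25) are arbitrary choices for illustration, not from the assignment. It shows one observable consequence of the activation choice: the generator block's ReLU clamps negatives to zero, while the discriminator block's LeakyReLU lets negative values through at a 0.2 slope.

```
import torch
from torch import nn

def get_generator_block(input_dim, output_dim):
    # Linear -> BatchNorm -> ReLU, as in the generator block above
    return nn.Sequential(
        nn.Linear(input_dim, output_dim),
        nn.BatchNorm1d(output_dim),
        nn.ReLU(inplace=True),
    )

def get_discriminator_block(input_dim, output_dim):
    # Linear -> LeakyReLU(0.2), as in the discriminator block above
    return nn.Sequential(
        nn.Linear(input_dim, output_dim),
        nn.LeakyReLU(negative_slope=0.2, inplace=True),
    )

# Dummy batch: 4 vectors of dimension 10 (illustrative sizes only)
x = torch.randn(4, 10)
gen_out = get_generator_block(10, 25)(x)
disc_out = get_discriminator_block(10, 25)(x)

print(gen_out.shape)   # torch.Size([4, 25])
print(disc_out.shape)  # torch.Size([4, 25])
# ReLU zeroes out negatives, so the generator block's output is non-negative:
print((gen_out >= 0).all())  # tensor(True)
```

Note that `nn.BatchNorm1d` here operates in training mode, so it normalizes using the statistics of the 4-sample batch.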