How to Choose the Number of Hidden Units


From the above code snippet it's visible that `hidden_dims` is increasing. Can someone please explain this part?

The idea is that as you go through the layers, more complexity is introduced one level at a time, and the output ultimately needs to be a full-sized image at whatever the defined output resolution is. The whole point here is that we are literally generating a full synthetic image from some number of bits of random input, which is a pretty complex result. So we need a lot of "degrees of freedom" in order for the output to be "interesting", meaning real-looking enough to fool both the discriminator and whoever the eventual consumer of the image is into thinking it's real.

That said, the exact number of "expansion" layers you include and the degree of increase in the number of output neurons in each layer are "hyperparameters", meaning choices that you need to make as the system designer, rather than "parameters" (e.g. the actual coefficients in each layer) that can be learned through training and backpropagation.
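To make the "expansion" pattern concrete, here is a minimal NumPy sketch of a fully connected generator where each hidden layer is wider than the previous one. The specific widths, noise size, and output resolution below are assumptions for illustration, not values prescribed by the course:

```python
import numpy as np

# Hypothetical layer widths: each hidden layer doubles the previous one,
# mirroring the "expansion" idea discussed above. These exact numbers are
# hyperparameter choices, not required values.
z_dim = 64                      # size of the random noise input
hidden_dims = [128, 256, 512]   # increasing hidden widths
out_dim = 28 * 28               # flattened output image, e.g. 28x28 grayscale

def make_mlp_weights(dims, rng):
    """Random weight matrices for a simple fully connected stack."""
    return [rng.standard_normal((d_in, d_out)) * 0.01
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def generate(z, weights):
    """Forward pass: noise vector -> flattened synthetic image."""
    h = z
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)   # ReLU between expansion layers
    return np.tanh(h @ weights[-1])  # squash pixel values into [-1, 1]

rng = np.random.default_rng(0)
dims = [z_dim] + hidden_dims + [out_dim]
weights = make_mlp_weights(dims, rng)
img = generate(rng.standard_normal(z_dim), weights)
print(img.shape)  # (784,)
```

Each matrix multiply maps the activations into a wider space until the final layer projects them down (or up) to the pixel count of the output image; the training process would then learn the weight values, while the widths themselves stay fixed.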

The higher-level message is that you can experiment with different configurations of those layers and see how that affects your results. The course developers are giving us this particular example because they have a lot of experience designing this type of network and know from that experience what works.
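One cheap way to reason about such experiments before training anything is to compare how much capacity each configuration gives the model. The helper below counts the weight-matrix entries for a few hypothetical width schedules (the configurations and the 64-dim noise / 784-pixel output are assumptions for illustration):

```python
# Illustrative only: compare total weight counts for a few hypothetical
# hidden-width schedules, to see how the hyperparameter choice changes
# model capacity. Biases are ignored for simplicity.
z_dim, out_dim = 64, 784

def param_count(hidden_dims):
    """Total entries in the weight matrices of a z_dim -> ... -> out_dim MLP."""
    dims = [z_dim] + list(hidden_dims) + [out_dim]
    return sum(d_in * d_out for d_in, d_out in zip(dims[:-1], dims[1:]))

for cfg in ([128, 256, 512], [256, 256, 256], [512, 256, 128]):
    print(cfg, param_count(cfg))
```

Runs like this make it easy to see that "expanding" schedules concentrate most of the parameters near the output, which is where the fine image detail has to come from.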