In the CelebA GAN ungraded lab (C4_W4_Lab_3_CelebA_GAN_Experiments.ipynb), the function load_celeba splits the dataset into two equal parts, whereas train_on_batch(real_img1, real_img2) simply concatenates the two images it gets from the two dataset parts. So what's the point of splitting?
Hello @karolis_uziela, one reason I can think of for doing that is to create the dataset faster, since the size and numerical range of the images is reduced.
Also, as far as I remember, to create a tf.data.Dataset you need a pair mapping of input and output (x, y), and that's probably the reason for splitting into a pair: creating the dataset that way automates the process, and then the halves are merged again at training time.
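To make the merging step concrete, here is a minimal NumPy sketch (not the lab's actual code; the array shapes and the train_on_batch body are my assumptions). It shows that splitting a batch into two equal halves and concatenating them again inside the training step reproduces the original batch, which is why the split looks redundant from the training step's point of view:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake "images": batch of 8, 64x64 RGB, values in [-1, 1] (shapes are illustrative)
batch = rng.uniform(-1.0, 1.0, size=(8, 64, 64, 3))

# Split into two equal parts, analogous to what load_celeba does with the dataset
half_a, half_b = np.split(batch, 2, axis=0)

def train_on_batch(real_img1, real_img2):
    # Hypothetical stand-in for the lab's training step: it just
    # concatenates the two halves back into one batch of real images
    return np.concatenate([real_img1, real_img2], axis=0)

merged = train_on_batch(half_a, half_b)
print(merged.shape)                   # (8, 64, 64, 3)
print(np.array_equal(merged, batch))  # True: split + concat is a round trip
```

So the split buys nothing for the gradient update itself; its value, if any, is in how the tf.data pipeline is constructed.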