W1 lab 2, why do we tile/copy our data?

Hi, I’m having a bit of trouble understanding why we tile the data in Lab 2. Specifically, why is it beneficial to increase the training set size and reduce the number of training epochs, and what does reducing the number of training epochs even mean? I’ve been trying to figure it out on my own but I still don’t get it. If someone could explain, or share a link where the topic is covered, I’d be very thankful.

Hello @Theo2,

The number of epochs is the number of times the training process iterates over your full training dataset. If # epochs is 10, the model is trained on your dataset 10 times.

Without tiling, if # epochs is 10, the dataset is used 10 times.
With the dataset tiled 1000 times and # epochs set to 10, the dataset is used 1000 × 10 = 10000 times.
Without tiling, to use the dataset 10000 times, you would need to set # epochs to 10000.
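To make the arithmetic concrete, here is a minimal sketch of what tiling might look like with NumPy. The dataset, the tile factor, and the epoch count are all hypothetical values chosen for illustration; the lab's actual data and settings may differ.

```python
import numpy as np

# Hypothetical tiny dataset: 4 samples, 2 features each.
x_train = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [5.0, 6.0],
                    [7.0, 8.0]])
y_train = np.array([0, 1, 0, 1])

TILE = 1000   # how many times to repeat the data (assumed value)
EPOCHS = 10   # epochs used with the tiled data (assumed value)

# np.tile repeats the array; (TILE, 1) repeats along the sample axis
# without duplicating features.
x_tiled = np.tile(x_train, (TILE, 1))   # shape becomes (4000, 2)
y_tiled = np.tile(y_train, TILE)        # shape becomes (4000,)

# Total passes over the ORIGINAL data when training on the tiled copy:
passes_over_original = TILE * EPOCHS
print(x_tiled.shape, y_tiled.shape, passes_over_original)
```

Training on `x_tiled` for 10 epochs passes over the original samples 10000 times, which is the same amount of training as 10000 epochs on the untiled data, but with far fewer per-epoch validation steps.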

The reason for tiling is to pass over the dataset more times while keeping # epochs small. One reason for using fewer epochs is to save time: between every two epochs the validation score is computed once, which takes some time. You can compare the timings yourself.

Lastly, don’t take tiling as a rule or common practice. Justify it yourself before using it, for example by checking whether it actually saves a significant amount of time.

We have not discussed overfitting in the first week’s material. Is there a way to determine if a model is overfit? Visualizing? Backtesting? Or some other metric?

Yes, this topic is covered later in the Specialization.