Hello everyone,
In the “C2_W1_Lab02_CoffeeRoasting_TF” lab, we duplicate the data using the np.tile() function.
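For reference, the duplication looks roughly like this (I am reproducing it from memory with toy values, so the exact shapes and variable names may differ from the lab):

```python
import numpy as np

# Toy stand-ins for the lab's roasting data: 3 examples, 2 features each.
X = np.array([[200.0, 13.9], [226.0, 14.2], [240.0, 11.7]])
Y = np.array([[1], [0], [0]])

# np.tile stacks copies of the array along the given axes:
# (1000, 1) means repeat 1000 times along axis 0 (rows), once along axis 1.
Xt = np.tile(X, (1000, 1))   # shape (3000, 2)
Yt = np.tile(Y, (1000, 1))   # shape (3000, 1)
print(Xt.shape, Yt.shape)    # (3000, 2) (3000, 1)
```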
I don’t quite understand how useful this is.
When the data has already been used for learning, why is it useful to use it again? In other words, once the model has learned from the data, why should it learn from the same data a second time?
Regards,
Pierre BEJIAN (from France)
Hey @Pierre_BEJIAN,
There has been an extensive discussion regarding this in the past. Please do check this thread out. Let me know if this helps.
Cheers,
Elemento
I read the thread, and I understand the relationship between copying the data and the epochs, but I still don’t know the answer to the question asked by @Pierre_BEJIAN.
Hey @Akshat_Jain5,
Welcome to the community.
I suppose this is pretty well-answered in the thread.
As for this, Gradient Descent, like any other optimization algorithm, is an iterative process: the algorithm has to iterate over the dataset multiple times so that the randomly initialized weights can gradually move toward their optimal values. If you are unsure about how optimization algorithms work, my suggestion is to review the lecture videos again. You can find lectures on Gradient Descent in Course 1, and also in Week 2 of Course 2.
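Here is a minimal sketch of that iterative reuse of the data (this is my own toy example, not the lab's code): plain batch gradient descent for logistic regression keeps revisiting the very same `X` and `y` on every pass, and each pass only moves the weights a little.

```python
import numpy as np

# Tiny made-up dataset: 4 examples, 2 features each, binary labels.
X = np.array([[200.0, 13.9], [226.0, 14.2], [240.0, 11.7], [185.0, 12.0]])
y = np.array([1.0, 0.0, 0.0, 0.0])

# Scale the features so gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # randomly initialized weights
b = 0.0
alpha = 0.1              # learning rate

for epoch in range(1000):            # many passes over the SAME data
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)  # gradient of the logistic loss
    grad_b = np.mean(p - y)
    w -= alpha * grad_w              # one small step per pass
    b -= alpha * grad_b

print(w, b)
```

A single pass over the data would leave the weights close to their random starting point; it is only the repeated passes that let them converge.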
Thinking about it further, iterating over the data multiple times isn't the same as "using the data multiple times to learn", because "learn" is an ambiguous word here. If by "learn" you mean reaching, say, 1% accuracy, then there is no point in learning at all, since random predictions would get you there. If by "learn" you mean 10% accuracy, then even a single epoch might be enough; and if you demand something like 95% accuracy, you may need many more passes over the same data. So you can see that the way you define "learn" changes the number of epochs you need.
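To make that concrete, here is a rough sketch (again my own illustration, not the lab's code) using a hypothetical custom Keras callback, `StopAtAccuracy`, that stops training once the chosen accuracy target is reached. A loose target stops after a few epochs; a strict target forces many more passes over the same data.

```python
import numpy as np
import tensorflow as tf

class StopAtAccuracy(tf.keras.callbacks.Callback):
    """Hypothetical helper: stop training once training accuracy crosses a target."""
    def __init__(self, target):
        super().__init__()
        self.target = target

    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get("accuracy", 0.0) >= self.target:
            print(f"Hit {self.target:.0%} training accuracy after {epoch + 1} epochs")
            self.model.stop_training = True

# Tiny synthetic binary-classification data (not the lab's roasting data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# With a 60% target, training may stop after a handful of epochs;
# with a 95% target, the same data has to be revisited many more times.
model.fit(X, y, epochs=500, verbose=0, callbacks=[StopAtAccuracy(0.95)])
```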
I hope this helps.
Cheers,
Elemento