Hi,

In the Week 2 quiz, there is the question below:

*{moderator edit - quiz question and answers removed}*

Why is my answer incorrect? And what is the correct answer?

It would be cheating for us to tell you the right answer. But the reason that your answer is wrong is that minibatch has *more* overhead than full batch: there is another “inner” loop over the minibatches and you get less benefit from the vectorization since it is applied to smaller objects. So each “epoch” is actually more *expensive* in terms of the total compute cost. But what you hope is that you’ll end up needing fewer total epochs in order to get good convergence, because the weights get updated after each minibatch.
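To make the “inner loop” point concrete, here is a minimal sketch of one epoch of mini-batch gradient descent, using plain linear regression as a stand-in model (the course uses neural networks, but the loop structure is the same). The function and variable names here are illustrative, not taken from the course code:

```python
import numpy as np

def minibatch_epoch(X, Y, W, b, lr=0.01, batch_size=64):
    """One epoch of mini-batch gradient descent on a linear model.

    Note the extra inner loop over minibatches: each pass over the
    data now does (m / batch_size) gradient computations and weight
    updates instead of one, and each vectorized operation works on a
    smaller chunk of data.
    """
    m = X.shape[0]
    for start in range(0, m, batch_size):  # the "inner" loop over minibatches
        Xb = X[start:start + batch_size]
        Yb = Y[start:start + batch_size]
        err = Xb @ W + b - Yb              # errors on this minibatch only
        dW = Xb.T @ err / len(Xb)          # gradient from this minibatch
        db = err.mean()
        W = W - lr * dW                    # weights updated after *each* minibatch
        b = b - lr * db
    return W, b
```

With `batch_size = m` the inner loop runs once and this reduces to full-batch gradient descent; with a smaller `batch_size` each epoch costs more in total but the weights have already moved many times by the end of it.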

It’s been a while since I listened to the lectures here, but I would bet that Prof Ng discussed exactly this point that I just made in the lectures. If what I said above didn’t “compute” for you, you might want to go back and scan the transcript of the relevant lectures and see what Prof Ng says on this point.

Thank you for your response; I’m close to grasping the idea.

Great! The point of minibatch GD is that you get faster convergence, at the expense of higher compute costs. Think of the minibatch size as the “knob” that you can turn to modify the performance. At one end of the scale, you have full batch gradient descent and at the other end you have “Stochastic Gradient Descent” where the minibatch size = 1. So the smaller the minibatch size, the higher the compute cost, but the faster the convergence. But if you go all the way to the limit of batch size 1, then you also have the maximal compute cost (no benefit at all from vectorization) and the maximum amount of statistical noise in the updates: they may bounce all over the place since each one only depends on the behavior for one sample and you get no “smoothing” at all from any averaging. So the goal is to find the “Goldilocks” point at which you get the fastest convergence at the minimum cost. They have done some very large and careful studies of this across lots of different systems and the conclusion is that Yann LeCun had it right in his famous quote: “Friends don’t let friends use minibatch sizes greater than 32”. In almost all cases, the optimal size was somewhere between 1 and 32.
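The “knob” can be seen directly in how the training set is split: the batch size alone determines how many updates you get per epoch. Here is a small sketch of such a partitioning helper (the name `make_minibatches` is hypothetical, not from the course assignments):

```python
import numpy as np

def make_minibatches(X, Y, batch_size, rng=None):
    """Split (X, Y) into mini-batches, optionally shuffled.

    batch_size == len(X) gives full-batch GD (one update per epoch);
    batch_size == 1 gives Stochastic Gradient Descent (one update per
    training example, maximal noise, no vectorization benefit).
    """
    m = len(X)
    idx = np.arange(m)
    if rng is not None:
        rng.shuffle(idx)  # shuffling decorrelates consecutive batches
    return [(X[idx[i:i + batch_size]], Y[idx[i:i + batch_size]])
            for i in range(0, m, batch_size)]
```

Turning `batch_size` from `m` down toward 1 trades more updates per epoch (faster convergence) for more total compute and noisier gradients, which is exactly the trade-off described above.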


Great! I got this part; it added a lot to my understanding.

And I knew the answer. I was confused by the term “mini-batch size is the same as the training set size”: I was thinking the number of mini-batches equals the number of training examples.

But I am lucky to have had this conversation with you, and I appreciate the helpful information you provided.