Course 2, Week 2: Preprocessing Data at Scale

Timestamp: 9:12 in the video

The tutor mentioned feature transformation in batches, and I have a question about that:

  1. Let’s say we are normalising a specific feature.
  2. For the first batch, it computes statistics like min and max to do the normalisation.
  3. In the second batch, it computes another set of min and max values, which is completely different from the first batch’s. Will the transformation still be consistent when we transform the data during inference?
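To make my concern concrete, here is a small sketch (the data and variable names are just illustrative) showing how two batches of the same feature can produce different min/max statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
feature = rng.normal(loc=50.0, scale=10.0, size=10_000)

# Split the feature into two batches and compute min/max per batch.
batch1, batch2 = feature[:100], feature[100:200]
print(batch1.min(), batch1.max())  # statistics from batch 1
print(batch2.min(), batch2.max())  # different statistics from batch 2

# Normalising each batch with its own min/max maps the same raw value
# to different normalised values across batches.
scaled_in_b1 = (55.0 - batch1.min()) / (batch1.max() - batch1.min())
scaled_in_b2 = (55.0 - batch2.min()) / (batch2.max() - batch2.min())
print(scaled_in_b1, scaled_in_b2)  # not equal
```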

Correct me if I’m wrong. I wasn’t clear on this point, so it would be great if you could clarify my doubts.

Thanks in advance :slight_smile:

Consider that we’re talking here about data preprocessing at large scale, even TBs of data.
Sometimes the constants needed for normalization (e.g. the mean) are not computed on the entire training dataset; you compute them on a “sample”: a batch that must be large enough to be representative of the entire population, so that it has (approximately) the same mean and std. Those constants are then fixed and reused for every batch, including at inference time, which is what keeps the transformation consistent.
I think this is the general idea.
Not to be confused with a “batch normalization” layer inside a NN. That is a completely different thing, for other purposes.
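A minimal sketch of the idea (illustrative names, not the course’s exact code): compute the normalization constants once on a large, representative sample, then reuse the same constants for every training batch and at inference, instead of recomputing them per batch:

```python
import numpy as np

rng = np.random.default_rng(42)
training_data = rng.normal(loc=50.0, scale=10.0, size=1_000_000)

# 1) Compute the constants on a sample large enough to be representative.
sample = rng.choice(training_data, size=100_000, replace=False)
mean, std = sample.mean(), sample.std()

def normalize(batch, mean, std):
    """Apply the SAME precomputed constants to every batch."""
    return (batch - mean) / std

# 2) Training time: every batch uses the same mean/std.
train_batch = training_data[:256]
normalized_train = normalize(train_batch, mean, std)

# 3) Inference time: reuse the saved constants; never recompute them
#    from the incoming request data.
inference_batch = np.array([48.0, 55.5, 61.2])
normalized_infer = normalize(inference_batch, mean, std)

# The sample statistics are close to the full-dataset statistics,
# which is why a representative sample is good enough.
print(abs(mean - training_data.mean()))  # small
print(abs(std - training_data.std()))    # small
```

The key design point is the fit/transform split: the statistics are "fit" once and saved as part of the preprocessing artifact, and "transform" only ever applies those saved values.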

Hi @luigisaetta

Thanks very much :slight_smile: Now I get it.