Why do we normalize the image pixels using min-max normalization instead of normalizing to have zero mean and unit variance?

Thanks for the help.

– Joel

Hi, @gato.

Sorry for the late reply.

Someone with more experience in image processing may offer a better insight, but I’d say both approaches are valid. Min-max scaling seems easier to compute, since we know the range of pixel values.
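To make the difference concrete, here's a minimal sketch (the toy 2×2 image values are made up for illustration) contrasting the two approaches on an 8-bit image, where the pixel range [0, 255] is known in advance:

```python
import numpy as np

# Hypothetical tiny 8-bit grayscale "image"; real images work the same way.
img = np.array([[0, 64], [128, 255]], dtype=np.float64)

# Min-max normalization: divide by the known maximum, mapping pixels into [0, 1].
# No statistics need to be computed from the data.
min_max = img / 255.0

# Standardization: zero mean, unit variance, using statistics of the data itself.
standardized = (img - img.mean()) / img.std()

print(min_max.min(), min_max.max())  # exactly 0.0 and 1.0
print(standardized.mean(), standardized.std())  # ~0.0 and ~1.0
```

Note that min-max scaling only needs the fixed pixel range, while standardization requires computing the mean and standard deviation over the dataset, which is one reason the former is often the simpler default for images.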

Hope you’re enjoying the course!

Wow, thanks for the reply. I forgot I asked this way back when. For some context, I originally asked because I lack a background in statistics (not in math), so I can put the steps together, but the *why* behind the steps sometimes escapes me.

Thanks for that reply, it makes sense: “Because the information and context available makes it possible.”

To add to that answer, in case someone else has my same itch, at some point down the road I ran into this: Feature Scaling: Standardization vs. Normalization And Various Types of Normalization | Minkyung’s blog

It covers feature scaling in general and why sometimes you do normalization, standardization, etc.

They’re all tools. And like all tools they should make sense in the context of your domain.

Thanks again for the reply!

Cheers,

– Joel
