Normalization vs Standardization for Images - Explain the difference.

Asked by dipesh_9001 in Data Science on Feb 15, 2023

For day and night image classification, is it better to normalize or standardize images? In general, when should I use each method? I am also interested in an example of why one method would be preferred over the other.


Here, by normalization, I mean dividing pixel values by 255. By standardization, I mean subtracting the mean pixel value and then dividing by the standard deviation. See the code samples below.


# normalization: rescale pixel values from [0, 255] to [0, 1]
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(rescale=1.0/255.0)

or

# standardization: per-image zero mean and unit variance
datagen = ImageDataGenerator(samplewise_center=True, samplewise_std_normalization=True)
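A minimal usage sketch for the standardizing generator above, using random dummy arrays in place of real day/night photos (the image shape and labels here are hypothetical, purely for illustration):

import numpy as np
images = np.random.randint(0, 256, size=(8, 64, 64, 3)).astype("float32")
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # e.g. 0 = night, 1 = day
batch_x, batch_y = next(datagen.flow(images, labels, batch_size=8))
print(batch_x.mean(axis=(1, 2, 3)))  # per-image means, roughly 0 after standardization
print(batch_x.std(axis=(1, 2, 3)))   # per-image standard deviations, roughly 1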
Answered by David Edmunds
Normalization: normalized_value = (raw_value - min) / (max - min)

Standardization: standardized_value = (raw_value - μ) / σ

There is also one additional well-known method, centering, where you only subtract the mean from the pixel values.
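To make the three formulas concrete, here is a minimal NumPy sketch applying each of them to a random dummy image (the array shape is an assumption for illustration):

import numpy as np

pixels = np.random.randint(0, 256, size=(64, 64, 3)).astype("float32")

# min-max normalization: (raw - min) / (max - min) -> values in [0, 1]
normalized = (pixels - pixels.min()) / (pixels.max() - pixels.min())

# standardization: (raw - mean) / std -> zero mean, unit variance
standardized = (pixels - pixels.mean()) / pixels.std()

# centering: raw - mean -> zero mean, original spread preserved
centered = pixels - pixels.mean()

print(normalized.min(), normalized.max())       # 0.0 1.0
print(standardized.mean(), standardized.std())  # ~0.0 ~1.0
print(centered.mean())                          # ~0.0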

In general, all of these serve one common and crucial purpose: to provide "fair conditions" for all features. In other words, since many parameters (e.g. the learning rate) are shared across all features instead of being set per feature, the features need to have similar ranges. Without that, a shared parameter does not have the same power over features with different ranges: a learning rate that converges nicely to the optimum for one feature might be too large for another, which would require a smaller value because of its wider range.

Additionally, smaller values lighten the computation and speed up convergence. Interpretation and comparison of the learned weights is also easier when the features are on a standard scale; if the ranges are dissimilar, one coefficient might be very large while another is comparably very small.
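As a sketch of this effect, consider a small, hypothetical gradient-descent example (the data and learning rates are invented for illustration): with raw features on very different scales, no single learning rate serves both weights well, while standardized features converge comfortably with one shared rate.

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 1, 200)      # feature with range ~0-1
x2 = rng.uniform(0, 1000, 200)   # feature with range ~0-1000
y = 3.0 * x1 + 0.002 * x2 + rng.normal(0, 0.1, 200)
y = y - y.mean()  # center the target so a no-intercept model is adequate
X = np.column_stack([x1, x2])

def fit(X, y, lr, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

# Raw features: the rate must stay tiny so the wide-range x2 does not diverge,
# which leaves the narrow-range x1 weight barely trained.
print(fit(X, y, lr=1e-7))

# Standardized features: one shared learning rate now works for both weights.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
print(fit(Xs, y, lr=0.1))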

However, I want to draw attention to your normalization technique, where you divide pixel values by 255: this merely lowers the range from 0-255 to 0-1, while the spread of pixel values over the range is kept the same. Normalization is mostly used when we want a bounded range for features, and RGB pixels already come with a bounded range of 0-255. Theoretically, though, this rescaling would still help speed up the training process.
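A quick NumPy check of that point on a random dummy image: dividing by 255 scales every statistic by the same constant, so the shape of the pixel distribution is untouched.

import numpy as np

pixels = np.random.randint(0, 256, size=(64, 64)).astype("float32")
rescaled = pixels / 255.0

# The standard deviation shrinks by exactly the same factor of 255...
print(pixels.std(), rescaled.std() * 255.0)  # identical values
# ...and the relative ordering and shape of the distribution are unchanged.
print(np.corrcoef(pixels.ravel(), rescaled.ravel())[0, 1])  # 1.0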

It is hard to say that one of these (normalization or standardization) is inherently better than the other, because either one might win depending on the scenario. In practice, both techniques are commonly tried and compared to see which performs better.

In many cases, using these techniques is a must for a more accurate model. Let's say you have two images: one is a dark image where all pixels have values close to each other (narrow range, small deviations, high kurtosis), while the other is a complex image whose parts are dissimilar to each other in terms of pixel values (wide range, large deviations, low kurtosis). In such a case, per-sample standardization, rather than plain division by 255, would put both images on a comparable scale and help the model better understand and capture the relations inside the images.
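As a rough illustration with synthetic stand-ins for those two images (the pixel distributions below are invented): per-sample standardization maps both the narrow-range dark image and the wide-range complex image to roughly zero mean and unit spread.

import numpy as np

rng = np.random.default_rng(42)
dark = rng.normal(20.0, 5.0, size=(64, 64)).clip(0, 255)   # narrow, low range
complex_img = rng.uniform(0.0, 255.0, size=(64, 64))       # wide, full range

def standardize(img):
    return (img - img.mean()) / img.std()

for name, img in [("dark", dark), ("complex", complex_img)]:
    z = standardize(img)
    print(name, round(float(z.mean()), 3), round(float(z.std()), 3))  # both ~0.0 and ~1.0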

