After finding the top K=16 colors to represent the image, you can now assign each pixel position to its closest centroid using the find_closest_centroids function.
- This allows you to represent the original image using the centroid assignments of each pixel.
- Notice that you have significantly reduced the number of bits that are required to describe the image.
- The original image required 24 bits (i.e. 8 bits x 3 channels in RGB encoding) for each of the 128×128 pixel locations, resulting in a total size of 128×128×24 = 393,216 bits.
- The new representation requires some overhead storage in the form of a dictionary of 16 colors, each of which requires 24 bits, but the image itself then only requires 4 bits per pixel location.
- The final number of bits used is therefore 16×24 + 128×128×4 = 65,920 bits, which corresponds to compressing the original image by about a factor of 6 (see the sketches below).
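For reference, here is a minimal NumPy sketch of what the centroid-assignment step could look like. The signature of find_closest_centroids and the variable names (original_img, centroids, idx) are assumptions for illustration, not the assignment's actual starter code, and the image data here is synthetic:

```python
import numpy as np

def find_closest_centroids(X, centroids):
    """Return, for each row of X, the index of the nearest centroid.

    X         : (m, 3) array of RGB pixel values
    centroids : (K, 3) array of the K representative colors
    """
    # Distance from every pixel to every centroid, shape (m, K)
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(distances, axis=1)

# Synthetic stand-in for the 128x128 image and the 16 learned colors
rng = np.random.default_rng(0)
original_img = rng.random((128, 128, 3))        # pixel values in [0, 1]
centroids = rng.random((16, 3))                 # K = 16 colors

X_img = original_img.reshape(-1, 3)             # (16384, 3) list of pixels
idx = find_closest_centroids(X_img, centroids)  # (16384,) indices in 0..15
X_recovered = centroids[idx].reshape(original_img.shape)  # compressed image
```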
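To make the final bullet concrete, here is the same bit count written out as plain arithmetic, using only the numbers quoted above:

```python
# Original image: 128x128 pixels, 24 bits (3 x 8-bit RGB channels) per pixel
original_bits = 128 * 128 * 24              # = 393,216 bits

# Compressed image:
#   dictionary of 16 colors at 24 bits each        -> 16 * 24       = 384 bits
#   one 4-bit index per pixel (2**4 = 16 choices)  -> 128 * 128 * 4 = 65,536 bits
compressed_bits = 16 * 24 + 128 * 128 * 4   # = 65,920 bits

print(original_bits / compressed_bits)      # ≈ 5.97, i.e. roughly a factor of 6
```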
Can someone explain the calculation of the total number of bits after compression in the final bullet point?