# K-means assignment doubt

After finding the top K=16 colors to represent the image, you can now assign each pixel position to its closest centroid using the `find_closest_centroids` function.

• This allows you to represent the original image using the centroid assignments of each pixel.
• Notice that you have significantly reduced the number of bits that are required to describe the image.
• The original image required 24 bits (i.e. 8 bits x 3 channels in RGB encoding) for each one of the 128×128 pixel locations, resulting in a total size of 128×128×24 = 393,216 bits.
• The new representation requires some overhead storage in the form of a dictionary of 16 colors, each of which requires 24 bits, but the image itself then only requires 4 bits per pixel location.
• The final number of bits used is therefore 16×24 + 128×128×4 = 65,920 bits, which corresponds to compressing the original image by about a factor of 6.
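For context, the assignment step itself can be sketched as follows. This is a minimal NumPy version of what a `find_closest_centroids` function might look like (the exact course implementation may differ); the toy pixel and centroid values here are made up for illustration:

```python
import numpy as np

def find_closest_centroids(X, centroids):
    # Distance from every pixel to every centroid: shape (num_pixels, K)
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    # Index of the nearest centroid for each pixel
    return np.argmin(dists, axis=1)

# Toy example: 4 RGB pixels (values in [0, 1]) and K=2 centroids
X = np.array([[0.9, 0.1, 0.1],
              [0.8, 0.2, 0.1],
              [0.1, 0.1, 0.9],
              [0.2, 0.1, 0.8]])
centroids = np.array([[1.0, 0.0, 0.0],   # roughly red
                      [0.0, 0.0, 1.0]])  # roughly blue
idx = find_closest_centroids(X, centroids)
print(idx)  # → [0 0 1 1]
```

With K=16 centroids, each entry of `idx` is a number from 0 to 15, which is exactly why 4 bits per pixel suffice.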

Can someone explain to me the calculation in the last line, i.e. the total number of bits after compression?

To recap:
The original image:

• 24 bits per pixel
• the image is 128 x 128 pixels.
• So the total amount of storage is 24 * 128 * 128 = 393,216 bits.

The compressed image:

• the color map has 16 centroids of 24 bits each, for 16 * 24 = 384 bits.
• the image is 128 x 128 pixels.
• each pixel is encoded as a 4-bit value (the index into the color map)
• The amount of storage for the image data is (4 * 128 * 128) = 65,536 bits
• The total amount of storage to represent the image is 65,536 + 384 = 65,920 bits.
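The arithmetic above can be checked directly; this short sketch just reproduces the numbers from the recap:

```python
# Original: 24 bits per pixel over a 128 x 128 image
original_bits = 24 * 128 * 128          # 393,216 bits

# Compressed: 16-entry color map (24 bits each) + a 4-bit index per pixel
colormap_bits = 16 * 24                 # 384 bits
index_bits = 4 * 128 * 128              # 65,536 bits
compressed_bits = colormap_bits + index_bits

print(original_bits)                    # 393216
print(compressed_bits)                  # 65920
print(original_bits / compressed_bits)  # ≈ 5.97, i.e. roughly a factor of 6
```

Note the compression factor is slightly under 6 because of the 384-bit color-map overhead; without it, the ratio would be exactly 24 / 4 = 6.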