Reading images with cv2

During development we used img = cv2.imread(img_filepath) to read the image into a numpy array.

Whereas in deployment we used something like the following (with uploaded_bytes standing in for the raw payload received from the request):

image_stream = io.BytesIO(uploaded_bytes)
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)

to read the image into a byte stream and then convert those bytes into a numpy array, which cv2.imdecode can then decode into an image.

I was wondering why we didn’t read the image into a numpy array directly in the deployment code. Why do we have to read it into bytes first?

My guess is that the deployment (or production) code has to account for the fact that the image arrives through the API, for instance uploaded by a user via a GUI. In that case there is no file path to pass to cv2.imread; the data ingested is a stream of bytes.
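To make the deployment path concrete, here is a minimal, self-contained sketch. It uses a synthetic byte payload in place of a real upload, and leaves the final cv2.imdecode call as a comment so the sketch runs without OpenCV installed; the variable names are illustrative, not taken from the actual deployment code.

```python
import io

import numpy as np

# Hypothetical payload: in production this would be the raw bytes of an
# uploaded image (e.g. read from the request body), not a file on disk.
payload = bytes(range(16))

# Deployment path: wrap the raw bytes in an in-memory stream...
image_stream = io.BytesIO(payload)

# ...then turn the stream's contents into a flat uint8 numpy array.
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)

print(file_bytes.dtype)  # uint8
print(file_bytes.shape)  # (16,)

# With OpenCV available, this 1-D byte array would then be decoded into
# an H x W x 3 image array, mirroring what cv2.imread gives you:
# img = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
```

The key point is that cv2.imread and cv2.imdecode produce the same kind of numpy array; they just differ in the source: a file path versus an in-memory buffer of encoded bytes.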

Thank you, that makes perfect sense.