I’m looking for a way to label images with a real number. I know how to label images for classification problems, where we put images into corresponding directories and there is a limited number of directories. But I want to implement regression, where the label is a real number in a wide range. Could you please share the best practices for labeling images with real numbers, which can be fractional?
Hi, @ptrd !
In this case, having one directory per class is obviously not viable. You can always try the classic solution of keeping all the images in a directory (train and test directories) with each image’s label in a separate .txt file, keyed by the same filename. You can then load it into memory with a couple of tweaks in the dataloader.
I know this is an older thread, but I just read it and wanted to share my experience with a real-life dataset for people reading later. The Berkeley driving dataset ( https://www.bdd100k.com/ ) has one large JSON file that contains meta information about each image, such as weather, type of scene (tunnel, residential, city street…), and time of day (day, night, dawn/dusk), plus a path to the image file. Additionally, it contains the values for all the labeled objects within the image. You crawl the JSON, parse the meta information, grab the image file name, then crawl the labels and extract the values you want, such as object class name and bounding-box coordinates. See the Label Format page in the BDD100K documentation.
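A sketch of that crawl, using an inlined sample record whose keys loosely follow the published BDD100K label format (`name`, `attributes`, `labels`, `category`, `box2d`); treat the exact schema as an assumption and check the documentation linked above:

```python
import json

# Tiny stand-in for the real ~1.4 GB JSON: a list of per-image records.
sample = json.loads("""
[
  {"name": "0001.jpg",
   "attributes": {"weather": "clear", "scene": "city street",
                  "timeofday": "daytime"},
   "labels": [
     {"category": "car",
      "box2d": {"x1": 10.0, "y1": 20.0, "x2": 110.0, "y2": 95.0}}
   ]}
]
""")

records = []
for img in sample:                     # crawl each image entry
    for obj in img.get("labels", []):  # then each labeled object in it
        box = obj.get("box2d")
        if box is None:
            continue                   # skip entries without a bounding box
        records.append((img["name"], obj["category"],
                        box["x1"], box["y1"], box["x2"], box["y2"]))
```

Each tuple in `records` pairs an image filename with one object class and its box coordinates, ready to feed a training pipeline.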
Might make more sense to have one large JSON file than separate files that each contain only a single label value per training image. Since the BDD dataset contains hundreds of thousands of images, this approach reduces the file count by several orders of magnitude.
Hi, @ai_curious !
Good to know! I can imagine that gives a good reduction in disk-read time when loading the dataset, right?
My intuition is yes, runtime is faster, but I never went to the trouble to measure. Is the single JSON file for BDD large? Yes, it’s ~1.4 GB. But I’d rather manage that on my storage than 100,000 separate files, especially in the OP’s use case, where there is a single floating-point value as the label. It just seems like a lot of overhead to store a single value per file. Cheers
Probably late for your previous request, but hopefully helpful for the next time. By the way, I should have also mentioned that you don’t have to crawl the JSON every time; once you have parsed the text into Python objects, you can do something like this:
import os
import numpy as np
from sklearn.model_selection import train_test_split

# Hold out 10% of the parsed (image, label) arrays for testing
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1)

np.save(os.path.join(output_dir, 'x_train.npy'), X_train)
np.save(os.path.join(output_dir, 'y_train.npy'), Y_train)
np.save(os.path.join(output_dir, 'x_test.npy'), X_test)
np.save(os.path.join(output_dir, 'y_test.npy'), Y_test)
.npy files are fast to load back in for training runs. Now you’ve got your column of floating-point labels in a format that is fast to load and easy to feed directly into model training.
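For completeness, a round-trip sketch showing the reload side with `np.load` (the file name and directory are just placeholders for whatever was saved above):

```python
import os
import numpy as np

output_dir = "."  # wherever the .npy files were written

# Stand-in regression labels, saved and reloaded losslessly
Y_train = np.array([0.5, 3.75, -1.25, 2.0], dtype=np.float32)
np.save(os.path.join(output_dir, "y_train.npy"), Y_train)

loaded = np.load(os.path.join(output_dir, "y_train.npy"))
os.remove(os.path.join(output_dir, "y_train.npy"))  # clean up the demo file
```

`np.load` memory-maps or reads the array directly back with its original dtype and shape, so there is no re-parsing step at the start of each training run.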