C1_W3_Assignment, how to preprocess the whole dataset?

Hi,
I want to train on the whole dataset on my machine, but I have some questions about preprocessing the dataset.

  1. We get an X_norm and a corresponding y after feeding in the original image and label. Should we save X_norm and y in one h5py file?
  2. How many patches should we get for each case?

Thanks!

Hey @Bowen_Zheng
When preprocessing the dataset for patch-based training, you can store the normalized patches and their corresponding labels as separate datasets in a single HDF5 file.

To do this, you can first create an HDF5 file using h5py.File(). Then, you can create two datasets within the file using create_dataset(): one for the normalized patches and one for the corresponding labels. You can write the arrays in one step by passing them to the data argument of create_dataset(), or fill an existing dataset later with slicing assignment (dset[...] = arr) or write_direct().
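Here is a minimal sketch of that workflow. The file name, dataset names, and array shapes below are placeholders for illustration, not the assignment's actual values:

```python
import h5py
import numpy as np

# Placeholder shapes; substitute whatever your preprocessing pipeline produces.
num_patches = 8
X_norm = np.random.rand(num_patches, 4, 32, 32, 8).astype(np.float32)   # patches
y = np.random.randint(0, 2, size=(num_patches, 3, 32, 32, 8)).astype(np.uint8)  # labels

with h5py.File("case_0001.h5", "w") as f:
    # One dataset per array; data= writes everything in a single call
    f.create_dataset("X_norm", data=X_norm, compression="gzip")
    f.create_dataset("y", data=y, compression="gzip")

# Reading the arrays back for training
with h5py.File("case_0001.h5", "r") as f:
    X_loaded = f["X_norm"][:]
    y_loaded = f["y"][:]
```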

The number of patches to extract per case depends on the size of the original images and the chosen patch size. In general, you should aim to extract enough patches to capture the relevant features in the image while avoiding redundancy (e.g., heavily overlapping patches that add little new information).

A common approach is to extract patches over a regular grid with a specified patch size and stride. The number of patch positions along each dimension is then floor((image_size - patch_size) / stride) + 1, and the total patch count per image is the product of these across dimensions. Alternatively, you can sample patch locations at random, in which case you choose the number of patches per image directly. You may need to adjust the patch size and stride depending on the specific problem and dataset.
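As a sketch, a grid-based extractor for a 3D volume could look like the following. The volume dimensions and the patch/stride values are assumptions for illustration, not the assignment's settings:

```python
import numpy as np

def extract_patches(volume, patch_size, stride):
    """Extract patches on a regular grid from a 3D volume of shape (x, y, z)."""
    patches = []
    # floor((image_size - patch_size) / stride) + 1 positions per axis
    for x in range(0, volume.shape[0] - patch_size[0] + 1, stride[0]):
        for y in range(0, volume.shape[1] - patch_size[1] + 1, stride[1]):
            for z in range(0, volume.shape[2] - patch_size[2] + 1, stride[2]):
                patches.append(volume[x:x + patch_size[0],
                                      y:y + patch_size[1],
                                      z:z + patch_size[2]])
    return np.stack(patches)

volume = np.random.rand(240, 240, 155)  # assumed BraTS-like dimensions
patches = extract_patches(volume, patch_size=(160, 160, 16), stride=(80, 80, 8))
print(patches.shape)  # (72, 160, 160, 16): 2 * 2 * 18 grid positions
```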

It’s also good practice to extract patches at multiple scales to capture different levels of detail. You can achieve this by resizing the image to several scales and extracting patches at each scale.
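A minimal sketch of that idea, reusing the extract_patches() helper above and assuming SciPy is available for resampling; the scale factors are hypothetical:

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_patches(volume, patch_size, stride, scales=(1.0, 0.75)):
    """Resize the volume to each scale factor, then run the grid-based
    extract_patches() sketch from above on every resized copy."""
    all_patches = []
    for s in scales:
        resized = zoom(volume, s, order=1)  # linear-interpolation resampling
        # Skip scales where the resized volume is smaller than one patch
        if all(r >= p for r, p in zip(resized.shape, patch_size)):
            all_patches.append(extract_patches(resized, patch_size, stride))
    return np.concatenate(all_patches, axis=0)
```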