I’m struggling to create a light YoloV1 (with only one bounding box per cell) on the MNIST dataset (I randomly paste each 28x28 digit onto a 75x75 black background).
I can’t figure out how to turn relative-to-cell coordinates into absolute coordinates.
Until now, I’ve been using the ground-truth bounding boxes to find the cell that should contain an object, saving its i,j position, and then using that position to convert my predictions back to absolute coordinates.
This works during training, but at inference time on a real image I won’t have the ground-truth coordinates, so I won’t know the i,j cell position, and therefore I can’t recover the absolute position of the predicted bounding box.
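For reference, here is roughly what my current decoding looks like (a minimal sketch; the grid size, cell size, and prediction tensor layout are assumptions on my part):

```python
import torch

S = 5          # assumed grid size (75 px image / 15 px cells)
IMG_SIZE = 75  # assumed input resolution
CELL = IMG_SIZE / S

def decode_with_gt_cell(pred, gt_box):
    """Decode one predicted box using the ground-truth cell indices (my current method).

    pred:   tensor of shape (S, S, 5) -> (x_rel, y_rel, w, h, conf) per cell,
            where x_rel, y_rel are offsets inside the cell in [0, 1]
            and w, h are relative to the full image (assumed layout).
    gt_box: (x_center, y_center, w, h) in absolute pixels, used only to find i, j.
    """
    # locate the cell that contains the ground-truth box center
    j = int(gt_box[0] // CELL)   # column index
    i = int(gt_box[1] // CELL)   # row index

    x_rel, y_rel, w, h, conf = pred[i, j]

    # convert back to absolute pixel coordinates
    x_abs = (j + x_rel) * CELL
    y_abs = (i + y_rel) * CELL
    w_abs = w * IMG_SIZE
    h_abs = h * IMG_SIZE
    return x_abs, y_abs, w_abs, h_abs
```

The problem is that at test time I have no `gt_box`, so I can’t compute `i` and `j` this way.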
Could someone help me?