As I understand it, the box confidence is calculated from the IoU between the predicted bounding box and the ground truth during the training phase. So how is the confidence calculated for new test images, where we don’t have a ground truth?
Hi @ayman3000, I am not sure whether I am interpreting your question correctly, but both the training set and the test set have labeled items (with a ground truth); otherwise you could not train the model on the training set or validate its accuracy on the test set. If both training and test verification are sufficiently accurate, you can deploy the model in a new situation where you do not (always) have a ground truth to verify against. Hope that answers your question.
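If the question is about YOLO specifically, one helpful distinction is that the IoU only defines the confidence *target* during training; at test time the network simply outputs the confidence score it has learned to predict, with no ground truth needed. Here is a minimal sketch of that distinction (the box coordinates and the logit value are made-up numbers for illustration):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: the confidence target is built from the ground truth.
pred_box = (2.0, 2.0, 6.0, 6.0)
gt_box   = (3.0, 3.0, 7.0, 7.0)
target_confidence = iou(pred_box, gt_box)  # requires a labeled box

# Inference: no ground truth is involved -- the confidence is just the
# network's own objectness output (a raw logit squashed by a sigmoid).
objectness_logit = 1.8  # hypothetical value emitted by the network
test_confidence = sigmoid(objectness_logit)

print(target_confidence, test_confidence)
```

So the network is trained so that its predicted confidence approximates the IoU-based target, and at test time only that prediction is used.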
Can you clarify what you mean by box confidence? It is more common to see the word accuracy associated with a box. It would also help to know which video or exercise prompted the question (e.g., is it related to YOLO?).