Hello, I am using ssd-resnet50_640x640 and I changed the dataset to some custom data (detecting boxes) for training and testing. I am using exactly the same structure. When I look at the results, the model is not capturing the entire box, only about half of it. Should I change some parameters? Or what could the reason be?
Hello, is this related to the Zombie assignment? If so, you mean the bounding box output captures only half of the object it is supposed to detect? In that case, I would suggest paying special attention to the annotation coordinates of the images used for training. It is also possible that the model is simply not learning well enough yet; it might need to train for longer or on a larger dataset.
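To illustrate the annotation check, here is a minimal sketch (not your actual pipeline) that validates normalized boxes in the `[ymin, xmin, ymax, xmax]` format the TF Object Detection API expects. Boxes left in raw pixel units or with swapped min/max coordinates are a common cause of detections that cover only part of the object:

```python
def validate_boxes(boxes):
    """Return (index, reason) for every malformed box.

    Assumes normalized [ymin, xmin, ymax, xmax] coordinates in [0, 1],
    as used by the TF Object Detection API.
    """
    problems = []
    for i, (ymin, xmin, ymax, xmax) in enumerate(boxes):
        if not all(0.0 <= v <= 1.0 for v in (ymin, xmin, ymax, xmax)):
            # A value > 1 usually means the box was left in pixel units.
            problems.append((i, "coordinate outside [0, 1] - maybe unnormalized pixels"))
        elif ymin >= ymax or xmin >= xmax:
            # Swapped corners also silently shrink or invert boxes.
            problems.append((i, "min >= max - coordinate order may be swapped"))
    return problems

if __name__ == "__main__":
    sample = [
        [0.1, 0.2, 0.5, 0.6],    # fine
        [0.1, 0.2, 0.5, 320.0],  # pixel value slipped in
        [0.5, 0.6, 0.1, 0.2],    # min/max swapped
    ]
    for idx, reason in validate_boxes(sample):
        print(f"box {idx}: {reason}")
```

Running something like this over your TFRecord inputs before training should quickly show whether the labels themselves are cut in half.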
Yes.
Could detecting only half of the object be related to this? The aspect_ratios setting?
I'm not familiar with this part, but I guess you might need to experiment with the settings there.
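For reference, the aspect_ratios live in the anchor_generator block of the pipeline.config. The fragment below is roughly what the stock ssd_resnet50_v1_fpn 640x640 config from the TF2 model zoo ships with (values shown as an assumption; check your own file). If your custom boxes are much wider or taller than these ratios, adding a matching ratio is one thing to try:

```
anchor_generator {
  multiscale_anchor_generator {
    min_level: 3
    max_level: 7
    anchor_scale: 4.0
    aspect_ratios: 1.0
    aspect_ratios: 2.0
    aspect_ratios: 0.5
    scales_per_octave: 2
  }
}
```

That said, badly mismatched anchors usually cause missed or low-confidence detections rather than boxes that consistently stop halfway, so I would still rule out the annotation coordinates first.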