Just to make sure I understand you correctly: by re-partitioning, do you mean some kind of:
- cross-validation, as described in "3.1. Cross-validation: evaluating estimator performance" in the scikit-learn 1.2.1 documentation? Here my question would be: which of the resulting models are you going to deploy, and why?
- or just random shuffling and a new train / dev / test split?
As long as you have solid evidence that your test set is realistic and representative of what the model will see in production, and it is genuinely new, i.e. you did not use it for training or for tuning features or hyperparameters, this should be fine.
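To illustrate the two options, here is a minimal sketch (using a synthetic dataset and logistic regression purely as placeholders): cross-validation happens only on the training portion, while the held-out test set is touched exactly once, at the very end.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Placeholder data; in practice this is your real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold out a test set that is never used for training or tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Option 1: cross-validation on the training portion only,
# e.g. for model selection or hyperparameter tuning.
cv_scores = cross_val_score(
    LogisticRegression(max_iter=1000), X_train, y_train, cv=5
)
print("mean CV accuracy:", cv_scores.mean())

# Final model is refit on the full training set and evaluated
# exactly once on the untouched test set.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

The key point is the same either way you re-partition: whatever ends up as the test set must play no role in training or tuning before that final evaluation.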
Best regards
Christian