Experiment tracking: how to track features when iterating over data?

I find it pretty hard to keep track of features when training large deep learning models on structured data, where there can be ~100 features. I’ve tried MLflow, which lets me save a list of the features I used, but it’s fairly hard to read and compare between models when the list is long. Is there a better way, or a package, to keep track of features when iterating over data?


Hello @hweicodes and welcome to the forum,

Here is a tip (source) where you may find some valuable information about feature management.
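In the meantime, a minimal sketch of one lightweight approach (the helper names here are my own, not from any library): log each run's feature list as a sorted JSON file, then compare runs as sets rather than eyeballing long lists. The sorted JSON file can also be attached to an MLflow run as an artifact.

```python
import json
from pathlib import Path

def save_feature_list(features, path):
    """Write the feature list as sorted JSON so run-to-run diffs are stable."""
    Path(path).write_text(json.dumps(sorted(features), indent=2))

def diff_feature_lists(features_a, features_b):
    """Return which features were added and removed between two runs."""
    a, b = set(features_a), set(features_b)
    return {"added": sorted(b - a), "removed": sorted(a - b)}

run1 = ["age", "income", "zip_code"]
run2 = ["age", "income", "tenure"]

save_feature_list(run1, "features_run1.json")
# With MLflow, you could attach the file to the active run, e.g.:
# mlflow.log_artifact("features_run1.json")

print(diff_feature_lists(run1, run2))
# → {'added': ['tenure'], 'removed': ['zip_code']}
```

Diffing sets this way surfaces only what changed between iterations, which scales much better than reading two ~100-item lists side by side.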
