How to properly set jensen_shannon_divergence / infinity_norm thresholds to leverage TFDV drift/skew check features

The courses in this specialization show how TensorFlow Data Validation (TFDV) offers the capability of checking for data skew and drift, and they mention that “Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation”.

How can one specify a reasonable initial jensen_shannon_divergence threshold (or infinity_norm threshold for categorical features)? Is there some Python package, utility, or piece of code that one can run on a given dataset feature to compute a reasonable threshold?
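
For the mechanics, TFDV lets you attach a drift or skew comparator to an individual feature in the schema and then validate statistics against that schema. Below is a minimal sketch following the pattern in the TFDV documentation; the feature names, file paths, and threshold values are placeholders (and field names can vary slightly between TFDV versions), so treat the numbers only as starting guesses:

```python
import tensorflow_data_validation as tfdv

# Statistics for the baseline (training) data, the serving data, and a new span of data.
train_stats = tfdv.generate_statistics_from_csv('train.csv')
serving_stats = tfdv.generate_statistics_from_csv('serving.csv')
new_span_stats = tfdv.generate_statistics_from_csv('new_span.csv')

schema = tfdv.infer_schema(train_stats)

# Numeric feature: drift is measured with approximate Jensen-Shannon divergence.
tfdv.get_feature(schema, 'trip_distance') \
    .drift_comparator.jensen_shannon_divergence.threshold = 0.03  # starting guess

# Categorical feature: training/serving skew is measured with the L-infinity norm.
tfdv.get_feature(schema, 'payment_type') \
    .skew_comparator.infinity_norm.threshold = 0.01  # starting guess

# Skew check: training statistics vs. serving statistics.
skew_anomalies = tfdv.validate_statistics(
    statistics=train_stats, schema=schema, serving_statistics=serving_stats)

# Drift check: the newest span of data vs. the previous span.
drift_anomalies = tfdv.validate_statistics(
    statistics=new_span_stats, schema=schema, previous_statistics=train_stats)

tfdv.display_anomalies(skew_anomalies)
tfdv.display_anomalies(drift_anomalies)
```

An anomaly is reported whenever the measured distance between the two distributions exceeds the configured threshold, so the question really becomes which value to configure.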

If not, what is the recommended way to conduct the experimentation and find the most appropriate threshold for a given feature in a dataset?
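
One data-driven way to get an initial value is to measure how much a feature's distribution already fluctuates between data slices you consider healthy (consecutive days, or train/eval splits), and set the starting threshold just above that background level. The sketch below uses scipy on synthetic data and is only a rough stand-in for TFDV's internal computation (TFDV bins numeric features into histograms, and scipy's `jensenshannon` returns the square root of the divergence, hence the squaring):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def empirical_jsd(baseline, candidate, bins=10):
    """Approximate Jensen-Shannon divergence between two numeric samples
    by binning both onto a common histogram grid."""
    lo = min(baseline.min(), candidate.min())
    hi = max(baseline.max(), candidate.max())
    p, _ = np.histogram(baseline, bins=bins, range=(lo, hi))
    q, _ = np.histogram(candidate, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    # scipy returns the JS distance (sqrt of the divergence), so square it.
    return jensenshannon(p, q, base=2) ** 2

# Hypothetical stand-in for 30 daily slices of one feature from "healthy" data.
rng = np.random.default_rng(0)
daily_slices = [rng.normal(loc=0.0, scale=1.0, size=5_000) for _ in range(30)]

baseline = daily_slices[0]
divergences = [empirical_jsd(baseline, day) for day in daily_slices[1:]]

# Start the threshold a little above normal day-to-day variation,
# e.g. the 95th percentile of the historical divergences, then tune from there.
print(f"suggested starting threshold ~ {np.percentile(divergences, 95):.4f}")
```

For a categorical feature the analogous measurement is the L-infinity norm: the maximum absolute difference between the normalized frequencies of any category in the two datasets.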

Why was jensen_shannon_divergence prioritized over other approaches to measure statistical distance?

A reasonable estimate depends on how much variance in model performance you are willing to accept.

Some features aren’t weighted as heavily as others, and model performance won’t change much even if a less important feature shifts a bit, so you can set the L-infinity norm threshold high for those features.

On the other hand, it’s safer to retrain the model when an important feature changes a lot, so set the infinity_norm threshold low for valuable features.

The goal is to avoid wasting compute resources, i.e., retraining a model when it isn’t necessary, while still catching distribution shifts that would hurt performance. This is why the process is iterative and requires experimentation to figure out acceptable thresholds.
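
One way to operationalize this is to keep a per-feature importance score (from model attributions, permutation importance, or domain knowledge) and map it onto a threshold band: important features get tight thresholds, minor features get loose ones. In the sketch below, the band (0.01–0.10), the feature names, the importance values, and the schema path are all illustrative assumptions:

```python
import tensorflow_data_validation as tfdv

# Hypothetical schema produced earlier in the pipeline.
schema = tfdv.load_schema_text('schema.pbtxt')

# Hypothetical normalized importances for numeric features (0 = irrelevant, 1 = critical).
feature_importance = {'trip_distance': 0.9, 'tip_amount': 0.4, 'pickup_hour': 0.05}

def drift_threshold(importance, tight=0.01, loose=0.10):
    """Linearly interpolate: higher importance -> lower (stricter) threshold."""
    return loose - (loose - tight) * importance

for name, importance in feature_importance.items():
    comparator = tfdv.get_feature(schema, name).drift_comparator
    comparator.jensen_shannon_divergence.threshold = drift_threshold(importance)

tfdv.write_schema_text(schema, 'schema_with_drift_thresholds.pbtxt')
```

After each new span of data, check which reported anomalies actually coincide with degraded model metrics, then loosen or tighten the per-feature thresholds accordingly; that feedback loop is the iterative experimentation the course refers to.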