I had an idea to build a model explainer that can potentially explain any model. It explains a given model by estimating the contributive weight of each feature to the final prediction value. The contributive weight can be estimated even for non-linear models.
The idea is inspired by the Riemann sum, a numerical approximation of a definite integral.
Explaining it in detail requires some math. I wonder whether this is the right platform to share it, and whether anyone is interested in this topic?
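For anyone unfamiliar with the inspiration: a Riemann sum approximates a definite integral by summing function values over many small sub-intervals. A minimal illustration (the midpoint rule, approximating the integral of x^2 over [0, 1], which is exactly 1/3):

```python
# Midpoint Riemann sum approximating the definite integral of x^2 on [0, 1]
n = 1000                 # number of sub-intervals
width = 1.0 / n          # width of each sub-interval
total = sum(((k + 0.5) * width) ** 2 * width for k in range(n))
# total is very close to 1/3
```

The finer the partition (larger n), the closer the sum gets to the true integral; the same limiting idea is what my explainer builds on.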
This is an interesting approach. Regarding XAI, this repo might be relevant for you. Feel free to take a look:
Best regards
Christian
PiML seems to support explainability for only 3 selected models.
I have written some notes of my idea here:
Gradient Tracking Explainer
With a demonstration of simple Python code at the end of the notes.
I may not have explained the idea clearly, so please let me know if any part of the notes is unclear or confusing.
I would like to hear comments and feedback from anyone who is interested in this topic.
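To give a concrete flavor of the core idea without requiring the full notes: here is a minimal sketch of how a Riemann-sum-style attribution can work. It accumulates numerical gradients at points along a straight path from a baseline input to the actual input, then weights each feature's displacement by its average gradient. This is my simplified illustration, not the exact code from the notes; the function name and the use of central-difference gradients are my own choices for the sketch.

```python
import numpy as np

def riemann_attributions(model, x, baseline, n_steps=50):
    """Estimate per-feature contributions by a midpoint Riemann sum of
    numerical gradients along the straight path from baseline to x."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    total_grad = np.zeros_like(x)
    eps = 1e-6
    for k in range(n_steps):
        # midpoint of the k-th sub-interval along the path
        point = baseline + (x - baseline) * (k + 0.5) / n_steps
        # central-difference gradient at this point, one feature at a time
        for i in range(len(x)):
            up, down = point.copy(), point.copy()
            up[i] += eps
            down[i] -= eps
            total_grad[i] += (model(up) - model(down)) / (2 * eps)
    # average gradient along the path times each feature's displacement
    return (x - baseline) * total_grad / n_steps

# Usage with a simple non-linear model f(x) = x0^2 + 3*x1:
f = lambda v: v[0] ** 2 + 3 * v[1]
attr = riemann_attributions(f, x=[2.0, 1.0], baseline=[0.0, 0.0])
# The attributions sum (approximately) to f(x) - f(baseline)
```

A nice property of this construction is completeness: the per-feature contributions add up to the difference between the prediction at the input and at the baseline, even for a non-linear model.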