SHAP Value Analysis on xLSTM German Wikipedia Model

I’m planning a comprehensive SHAP and explainability analysis of this xLSTM model (it is built from mLSTM blocks only): stefan-it/xlstm-german-wikipedia on Hugging Face.
Main goals:
• Understand how the model makes predictions through feature attributions
• Explore how the mLSTM memory mechanism works under the hood
• Visualize what the model “pays attention to” when processing text
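To make the first goal concrete: the quantity behind SHAP is the Shapley value, which averages a feature's marginal contribution over all subsets of the other features. Below is a tiny from-scratch sketch on a toy scoring function (the tokens, weights, and interaction term are invented purely for illustration; for the real model I'd let the `shap` library approximate this instead of enumerating subsets):

```python
from itertools import combinations
from math import factorial

def shapley_values(n, value_fn):
    """Exact Shapley values for n features under a set-valued scoring function.

    value_fn takes a set of feature indices (the "present" tokens) and
    returns a scalar score. Exponential in n, so toy-sized inputs only.
    """
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                s = set(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi += w * (value_fn(s | {i}) - value_fn(s))
        phis.append(phi)
    return phis

# Toy "model": score depends on which token indices are present.
# The interaction between tokens 0 and 2 is a made-up bigram effect.
def toy_score(present):
    score = 0.0
    if 0 in present:
        score += 1.0
    if 2 in present:
        score += 0.5
    if 0 in present and 2 in present:
        score += 0.25  # interaction term, shared between tokens 0 and 2
    return score

vals = shapley_values(3, toy_score)
# Efficiency property: attributions sum to v(full set) - v(empty set) = 1.75
```

For the actual model I'd wrap the forward pass (e.g. next-token or masked-token logits) as `value_fn` conceptually, but in practice use `shap`'s text masker and sampling-based explainers, since exact enumeration is infeasible for real sequence lengths.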
Any advice on the best approach to tackle this? I’d appreciate suggestions on tools, methods, or workflows that work well for this kind of analysis.
Thanks!