Explainability in Data-Centric AI is arguably more attainable (or at least easier to imagine) than in other deep learning fields such as Computer Vision, since the inputs are typically named, human-meaningful features rather than raw pixels.
How strongly should data-centric projects and upcoming startups be expected to adopt an explicit explainability requirement (and, by extension, interpretability)?
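To make the premise concrete, here is a minimal sketch, assuming scikit-learn is available, of why explanations come more directly in tabular, data-centric settings: each feature is already a named quantity, so a simple permutation-importance score is itself a readable explanation, with no need to attribute over raw pixels. The dataset and model choices below are purely illustrative, not prescribed by this discussion.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative tabular dataset: every column is a named, human-meaningful feature.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop
# in test accuracy. Because features are named columns, the output reads
# directly as "which measured quantity the model relies on".
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

The same attribution idea applied to an image classifier would yield per-pixel scores, which still need a further interpretive step before they mean anything to a stakeholder; that gap is one way to frame why explainability feels more within reach for data-centric work.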