The project aims to build an interactive framework in which complementary explainability methods are identified and implemented, so that their combined explanations deliver fine-grained analysis of the decisions and behavior of NP-based AI systems for various stakeholders.