The emerging field of Explainable AI (XAI) resulted from a growing recognition within the AI community of the importance of explainability. The importance of responsible and trustworthy AI has been asserted in several established national and international legal frameworks. In April 2021, the European Commission proposed the first-ever legal framework on AI. The proposal outlines new regulations to ensure that AI systems are developed and applied in a responsible and trustworthy manner, and the regulation is expected to become applicable in the second half of 2024. Explainable AI systems are an essential cornerstone of trustworthy and responsible AI. XAI for NLP-based AI systems can be defined as the set of methods and models that make the predictions and behavior of these systems understandable to humans. A model that explains its output facilitates debugging, auditing, and bias detection. As academic research on explainability continues to grow and new techniques emerge in the literature, questions remain about how best to leverage these methods to provide optimal explanations for various stakeholders. Many questions about the properties of state-of-the-art explainability methods remain unanswered, and addressing them is vital for their optimal adoption and use in real-world applications.
This project investigates the current state of Explainable AI for NLP-based AI systems in both academic literature and practical applications. Its goal is to help various stakeholders understand the decisions and predictions made by such systems according to their needs: for developers, this may mean debugging a system, while for end users it may mean explaining decisions so that they can trust the system. The project aims to build an interactive framework in which different explainability methods are identified and implemented, providing complementary explanations that together deliver fine-grained explanations and analysis of NLP-based AI systems' decisions and behavior for various stakeholders.
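To make the notion of an explainability method concrete, the following is a minimal sketch of one such technique, local feature attribution with LIME, applied to a simple text classifier. The scikit-learn pipeline, the 20-newsgroups categories, and all variable names are illustrative assumptions for this sketch, not the project's actual models or framework.

```python
# Illustrative sketch: word-level attribution for a text classifier using LIME.
# Assumes the `scikit-learn` and `lime` packages; dataset and model are placeholders.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)

# A simple TF-IDF + logistic-regression classifier stands in for an NLP system.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train.data, train.target)

# LIME perturbs the input text and fits a local surrogate model, attributing the
# prediction to individual words. This is one explanation style among several
# (e.g., gradient-based attribution, attention analysis) that a framework could
# expose to different stakeholders.
explainer = LimeTextExplainer(class_names=categories)
explanation = explainer.explain_instance(
    train.data[0], classifier.predict_proba, num_features=6
)
print(explanation.as_list())  # [(word, weight), ...] for the explained prediction
```

In a framework such as the one envisioned here, explanations of this kind could be combined with other methods so that, for example, a developer inspects attribution weights while debugging and an end user sees a simplified summary of the most influential words.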
The project is carried out in collaboration with Software AG as part of the Software Campus framework and is sponsored by the Federal Ministry of Education and Research (BMBF).