
Bachelor's Thesis Roksoliana Rabets


Investigating the Status Quo of Explainable NLP in Practice

Abstract:

With the ever-increasing adoption of black-box AI models, the lack of interpretability of their output remains a significant concern. The inability to interpret these models' output poses a risk to their adoption and raises trust issues among end-users, who prefer models that can explain their predictions. Another concern is the growing use of personal data in automated decision-making, which is regulated by the GDPR. This means that decisions made by AI systems should be explainable so that individuals can exercise their "right to explanation".

To address the concerns of adopting a black-box model that does not explain its predictions, various research papers propose approaches to explain the behavior and predictions of such models and systems and to evaluate the quality of the resulting explanations. While substantial effort has gone into presenting explainability approaches in academic papers, the level and state of adoption of such methods in NLP use cases remain to be studied.

Therefore, this thesis investigates the current state of applying Explainable AI approaches to NLP-based AI systems in practice. First, we will examine the state of research on explainable NLP in general and subsequently identify factors that could influence the adoption of explainable NLP in practice. Second, by conducting semi-structured interviews with practitioners and developers, we will assess the extent to which explainable NLP has been adopted in industry and identify success factors and challenges encountered during adoption.

Research Questions:

1. What are the challenges and misconceptions in the current XAI research that may influence the adoption of explainability techniques for NLP use cases in practice?

2. To what degree have explainability techniques been adopted in practice, and which methods are used in NLP use cases?

3. What are the challenges during the adoption of explainability techniques for NLP use cases in practice, and which success factors can be identified?