
Thesis on Investigating the Status Quo of Explainable NLP in Practice


This position has been filled

Abstract: 

With the ever-increasing adoption of black-box AI models, the lack of interpretability of their outputs remains a significant concern. This opacity poses a risk to adoption and raises trust issues among end users, who prefer models that can explain their predictions.

To address these concerns, a growing body of research proposes approaches for explaining the behavior and predictions of such models and systems, and for evaluating the quality of those explanations. Despite the substantial effort devoted to presenting explainability approaches in academic papers, the extent to which these methods are actually adopted in NLP use cases has yet to be studied.

Therefore, this thesis investigates the current state of applying Explainable AI approaches to NLP-based AI systems in practice. It examines the state of adoption, the challenges, and the success factors, and maps explainability techniques to NLP use cases. To this end, the student will combine quantitative and qualitative methods, conducting semi-structured interviews and surveys and then synthesizing the findings to characterize the state of practice of Explainable AI in NLP.

Application:

Students who are highly motivated and genuinely interested in the topic should send their CV and transcript of records to mahdi.dhaini@tum.de. Ideally, the student should have an NLP/ML background. Previous experience with, or courses related to, Explainable AI is a big plus.

 
