
Guided Research Smarth Bakshi

Last modified Nov 15, 2023

A Survey of The State of Explainable AI for Text Summarization

 

Abstract

In recent years there have been significant advancements in the quality of state-of-the-art black box models, whose internal logic and operations are opaque to the end user. This opacity makes such models hard to trust, difficult to interpret, and prone to unexamined bias, which raises both practical and ethical issues. This survey presents an overview of the current state of Explainable AI (XAI) for Text Summarization. We explore and categorize the explainability techniques available for Text Summarization, as well as the ways in which these techniques can be evaluated and visualised. We detail the operations and explainability techniques currently available for generating explanations of Text Summarization model predictions, to serve as a resource for model developers in the community and to build trust and transparency among users. Finally, we suggest directions for future work in this important research area.

 

Research Questions

  • What XAI techniques are used for Text Summarization?
  • How are the explanation techniques for Text Summarization visualised?
  • How are the explanation approaches for Text Summarization evaluated?
