
Abstractive Text Summarization for Domain-Specific Documents (ATESD)


Project Overview

 

With the exponential growth of digital textual data, organizations and researchers face the challenge of extracting valuable insights efficiently. Manual analysis by domain experts is costly and time-consuming, highlighting the need for automated solutions. Abstractive text summarization using cutting-edge NLP techniques offers a promising approach, enabling the automatic generation of concise summaries that capture the essence of complex documents like financial reports, medical research, and risk assessments.

This technology democratizes access to critical information, allowing non-experts to quickly grasp key points and make informed decisions. However, domain-specific summarization presents challenges: specialized vocabulary is often poorly represented in general-purpose models, requiring transfer learning or domain-specific training. Additionally, lengthy documents exceed the fixed input size of standard models, a limitation that recent Efficient Transformer architectures can address. Factual inaccuracies, or hallucinations, also pose a risk; incorporating fact-checking mechanisms helps improve reliability.
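
As a concrete illustration of the input-size challenge, the minimal sketch below summarizes a long document with an Efficient Transformer, the Longformer Encoder-Decoder (LED). It assumes the Hugging Face transformers library and the publicly available allenai/led-base-16384 checkpoint; the input file and generation settings are illustrative placeholders, not the pipeline developed in this project.

```python
# Minimal sketch: long-document summarization with an Efficient Transformer
# (Longformer Encoder-Decoder). Assumes the Hugging Face `transformers`
# library and the public allenai/led-base-16384 checkpoint; it illustrates
# the general approach, not this project's own pipeline.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "allenai/led-base-16384"  # accepts inputs of up to 16,384 tokens
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical input: a lengthy domain-specific document, e.g. a financial report.
long_document = open("financial_report.txt").read()

inputs = tokenizer(long_document, return_tensors="pt",
                   truncation=True, max_length=16384)

# LED combines sparse local attention with a few global tokens; marking the
# first token as global is the usual convention for summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(inputs["input_ids"],
                             attention_mask=inputs["attention_mask"],
                             global_attention_mask=global_attention_mask,
                             num_beams=4, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Replacing full self-attention with local windows plus a handful of global tokens is what lets such models scale roughly linearly with input length instead of quadratically.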

Our project focuses on adapting transformer-based models for domain-specific summarization, overcoming input size limitations and reducing factual errors. The goal is to deliver accurate, scalable summaries that streamline information retrieval, reduce manual effort, and support smarter decision-making across various fields.

Research Objective

Despite the strong performance of general language models on benchmark datasets for text summarization, generating meaningful summaries for domain-specific documents remains challenging because of the expert knowledge required in the field and other aspects such as model hallucination and input size limitations. The goal of this project is to improve on current text summarization models in ways that close the research gaps outlined above. In general, the project focuses on the generation of concise summaries using Natural Language Processing techniques while addressing the following research questions:

  • Would domain adaptation allow general-purpose language models to understand the underlying concepts of a new domain? (A fine-tuning sketch follows this list.)
  • How can existing language models be adapted to ensure factual correctness in the text they generate?
  • How can the input size limitation of traditional language models be overcome without discarding meaningful data?
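
To make the first question more concrete, the following minimal sketch shows the standard domain-adaptation recipe: fine-tuning a general-purpose summarization model on in-domain (document, summary) pairs. It assumes the Hugging Face transformers and datasets libraries, the public facebook/bart-base checkpoint, and a hypothetical domain_summaries.csv file with "document" and "summary" columns; it illustrates the general recipe rather than this project's exact training setup.

```python
# Minimal sketch of domain adaptation: fine-tune a general-purpose
# summarization model on in-domain (document, summary) pairs.
# Assumes Hugging Face `transformers` and `datasets`, the public
# facebook/bart-base checkpoint, and a hypothetical domain_summaries.csv.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical in-domain corpus with "document" and "summary" columns.
dataset = load_dataset("csv", data_files="domain_summaries.csv")["train"]

def preprocess(batch):
    # Truncate documents to BART's 1024-token limit; removing this limit
    # is exactly what the Efficient Transformer line of work targets.
    model_inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="bart-domain-adapted",
                                  per_device_train_batch_size=2,
                                  num_train_epochs=3,
                                  learning_rate=3e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```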

Research Questions

  • Can Efficient Transformer models encode text as effectively as the original Transformer models?
  • Are Efficient Transformer models able to remove the limitation on input size while ensuring linear time complexity?
  • Which improvements over existing model architectures would ensure factual correctness in the generated summaries? (An entailment-based sketch follows this list.)
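
One common way to approach the last question above is post-hoc fact-checking: verifying each generated summary sentence against the source document with a natural language inference (NLI) model. The minimal sketch below assumes the Hugging Face transformers library and the public roberta-large-mnli checkpoint; the example texts and the acceptance threshold are illustrative, and this is not the method proposed by the project.

```python
# Minimal sketch of an entailment-based factuality check: each summary
# sentence is scored against the source with an off-the-shelf NLI model.
# Assumes Hugging Face `transformers` and the public roberta-large-mnli
# checkpoint; illustrative only, not this project's architecture.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that the source (premise) entails the summary sentence."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()

# Hypothetical example: one supported sentence and one hallucinated claim.
source = "Revenue grew by 12 percent in 2022, driven mainly by digital subscriptions."
summary_sentences = [
    "Revenue grew by 12 percent in 2022.",
    "Revenue declined sharply in 2022.",
]

for sentence in summary_sentences:
    score = entailment_score(source, sentence)
    flag = "OK" if score > 0.5 else "CHECK"  # illustrative threshold
    print(f"{flag:5s} {score:.2f}  {sentence}")
```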

Research Partner:

This project is part of the Software Campus framework and fosters a research partnership with the Holtzbrinck Publishing Group.

Key Publications:

  • Afzal, A.; Vladika, J.; Braun, D.; Matthes, F. Challenges in Domain-Specific Abstractive Summarization and How to Overcome Them. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence (ICAART 2023), Lisbon, Portugal. SCITEPRESS - Science and Technology Publications.
  • Anum Afzal, Ribin Chalumattu, Florian Matthes, and Laura Mascarell. 2024. AdaptEval: Evaluating Large Language Models on Domain Adaptation for Text Summarization. In Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U), pages 76–85, Miami, Florida, USA. Association for Computational Linguistics.


Sponsored by:

Software Campus and the Bavarian Ministry of Economic Affairs, Regional Development and Energy