Abstract
Detecting and correcting hallucinations in text generated by large language models remains a challenging task. Even the most advanced language models struggle with this issue, and hundreds of approaches have been proposed to address it. This study focuses on post-hoc approaches, which aim to identify and correct hallucinations after the text has been generated.
While current post-hoc techniques demonstrate some success on short or single-sentence claims, they fall short on longer-form content. This research aims to analyze existing methods for reducing hallucinations in long-form claims and to develop a taxonomy of hallucinations. The strengths and weaknesses of current approaches will be evaluated, and enhancements will be made to improve their performance on long-form content.
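To make the post-hoc setting concrete, the sketch below shows one common pipeline shape: split generated text into atomic claims, verify each claim against evidence, and keep or flag claims accordingly. This is a minimal illustration, not any specific method from the literature; the sentence splitter and the exact-match verifier are deliberately naive stand-ins (real systems typically use an LLM or a retrieval-backed fact checker for both steps).

```python
from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    supported: bool  # whether the evidence backs this claim


def split_into_claims(text: str) -> list[str]:
    # Naive sentence split; real systems decompose text into atomic
    # claims with an LLM or a dedicated claim-extraction model.
    return [s.strip() for s in text.split(".") if s.strip()]


def verify(claim: str, evidence: set[str]) -> bool:
    # Toy verifier: a claim counts as supported only if it appears
    # verbatim in the evidence set. A real verifier would use
    # retrieval plus entailment scoring.
    return claim in evidence


def post_hoc_correct(text: str, evidence: set[str]) -> str:
    # Keep supported claims; flag unsupported ones for revision.
    checked = [Claim(c, verify(c, evidence)) for c in split_into_claims(text)]
    kept = [c.text for c in checked if c.supported]
    flagged = [c.text for c in checked if not c.supported]
    out = ". ".join(kept) + ("." if kept else "")
    if flagged:
        out += " [flagged: " + "; ".join(flagged) + "]"
    return out


evidence = {"Paris is the capital of France"}
generated = "Paris is the capital of France. Paris has 90 million residents"
print(post_hoc_correct(generated, evidence))
```

The key property of the post-hoc setting is visible here: detection and correction operate purely on the generated text, without access to the generator. The long-form challenge this research targets arises because claim decomposition and verification both degrade as documents grow and claims become interdependent.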
Goal
This research aims to analyze and improve post-hoc approaches for detecting and correcting hallucinations in long-form text generation. The following three research questions will be addressed: