The advent of Large Language Models (LLMs) has made a wide range of natural language processing tasks achievable with simple prompts. However, these models struggle with domain-specific knowledge. For instance, while they can suggest possible diseases based on symptoms, they cannot provide precise medical prescriptions. More generally, they often fail to deliver accurate results on tasks that require an in-depth understanding of a specific domain, sometimes producing hallucinations or inaccuracies.
To address these challenges, we apply parameter-efficient fine-tuning (PEFT) techniques to adapt an LLM, LLaMA3, to the task of summarization. Our approach fine-tunes the model across four distinct domains: medical, legal, scientific, and news, and we experiment with several PEFT methods to improve its performance. Our goal is to extend domain adaptation strategies to low-resource domains: by fine-tuning the LLM on multiple high-resource domains, we aim to capture intrinsic properties of language and thereby improve performance on a range of NLP tasks in low-resource settings. Specifically, this work focuses on summarization, aiming to generate high-quality, coherent summaries for low-resource domains.
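As a concrete illustration of this setup, the sketch below shows one representative PEFT method, LoRA, applied to a LLaMA3 checkpoint for summarization using the Hugging Face transformers, peft, and datasets libraries. The model identifier, the dataset file, the prompt format, and all hyperparameters are illustrative assumptions, not the exact configuration used in this work.

```python
# Minimal sketch: LoRA fine-tuning of a LLaMA3 checkpoint for summarization.
# All names and hyperparameters below are assumptions for illustration only.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model_id = "meta-llama/Meta-Llama-3-8B"    # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token       # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# LoRA injects small trainable low-rank matrices into the attention projections;
# the original weights stay frozen, so only a fraction of parameters are updated.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()              # typically well under 1% of the base model

# Hypothetical in-domain corpus with "document" and "summary" fields.
dataset = load_dataset("json", data_files="medical_summaries.jsonl", split="train")

def to_features(example):
    # Frame summarization as causal language modeling over a prompt-plus-target string.
    text = (f"Summarize the following document:\n{example['document']}\n"
            f"Summary:\n{example['summary']}")
    return tokenizer(text, truncation=True, max_length=2048)

dataset = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-lora-summarization",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama3-lora-summarization")  # stores only the adapter weights
```

Because only the adapter weights are saved, a separate adapter can be trained per domain (medical, legal, scientific, news) on top of the same frozen base model and swapped in at inference time.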