Large Language Models (LLMs) are playing an increasingly significant role in various sectors by assisting with complex language tasks, such as the automation of customer support interactions. With their advanced natural language processing capabilities, LLMs are able to understand and generate human-like responses, enabling companies to automate and streamline their customer support processes. By leveraging these models, businesses can scale their support operations, improve response times, and deliver consistent and personalized customer experiences.
The main goal of this project is to enhance the performance and functionality of an existing HR chatbot system for SAP using LLMs. The system aims to handle a large number of employee inquiries to the HR department by providing immediate responses based on internal HR policies. This will effectively reduce waiting times and alleviate the workload of HR experts.
Currently, the system uses custom fine-tuned language models (T5/LongT5) to generate responses for users. The objective is to replace this NLG module with more powerful LLMs that employ a more flexible inference approach based on prompting and in-context learning, eliminating the need for a fine-tuned custom model. The existing fine-tuned Dense Passage Retriever (DPR) module, which retrieves HR articles relevant to the question, will be replaced by richer LLM embeddings and a vector database, which is expected to improve retrieval accuracy and overall performance. Additionally, methods for enhancing the training data using LLMs are explored.
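The embedding-based retrieval described above can be illustrated with a minimal sketch. Note that `embed` here is a hypothetical stand-in for a real LLM embedding API, the in-memory `VectorStore` class stands in for an actual vector database, and the prompt template is an illustrative assumption, not the project's actual prompt:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for an LLM embedding call: a deterministic,
    hash-seeded random unit vector. A real system would query an
    embedding model instead."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

class VectorStore:
    """Minimal in-memory vector index; a production system would use
    a dedicated vector database."""
    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def query(self, question: str, k: int = 1) -> list[str]:
        q = embed(question)
        # Cosine similarity reduces to a dot product for unit vectors.
        sims = np.array([v @ q for v in self.vectors])
        top = np.argsort(-sims)[:k]
        return [self.texts[i] for i in top]

def build_prompt(question: str, articles: list[str]) -> str:
    """Assemble retrieved HR articles into an in-context prompt for the LLM
    (illustrative template)."""
    context = "\n".join(f"- {a}" for a in articles)
    return (
        "Answer the employee's question using only the HR articles below.\n"
        f"Articles:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )
```

A query would then retrieve the most similar stored articles and feed them, together with the question, into the prompt for the generating LLM, replacing both the fine-tuned DPR retriever and the fine-tuned NLG model in one pipeline.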
Name | Size | Last Modification
---|---|---
Kick-off Presentation - Guided Research - Alexander Kowsik.pdf | 2.50 MB | 20.11.2023