Minimizing Factual Inconsistency and Hallucination in Large Language Models

    November 2023 · ArXiv.org
    I Muneeswaran, Shreya Saxena, Siva Prasad, M. Prakash, Advaith Shankar, V Varun, Vishal Vaddina, Saisubramaniam Gopalakrishnan
    The paper presents a multi-stage framework for reducing factual inconsistency and hallucination in large language models (LLMs), improving both their transparency and their accuracy. The framework generates a rationale, verifies it against the source material, and refines it before producing a final answer. Applied to OpenAI's GPT-3.5-turbo, this process improves faithfulness by 14-25% and accuracy by 16-22%; fine-tuning smaller LLMs with the same method raises their accuracy by 33-42%, making them competitive with commercial models. The framework's effectiveness is demonstrated in the life sciences industry, particularly in pharmacovigilance, with potential applications in the legal, finance, and education sectors. The authors' RAG+FE framework achieves up to 96.85% accuracy on the AEQA dataset, outperforming baseline methods.
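    The generate-verify-refine loop described above can be pictured as a simple pipeline. The sketch below is a minimal illustration, not the paper's implementation: the `call_llm` stub, the function names, and the prompts are all assumptions standing in for any chat-completion API and for the paper's actual prompting scheme.

```python
# Minimal sketch of a generate-verify-refine rationale loop for grounded QA.
# All names, prompts, and the call_llm stub are illustrative assumptions,
# not the paper's actual implementation.

def call_llm(prompt: str) -> str:
    """Hypothetical stub for a chat-completion call to any LLM provider."""
    raise NotImplementedError("wire this to your LLM provider")

def answer_with_verified_rationale(question: str, context: str,
                                   max_refinements: int = 2) -> str:
    # Stage 1: generate a rationale grounded in the retrieved context.
    rationale = call_llm(
        f"Context:\n{context}\n\nQuestion: {question}\n"
        "Write a step-by-step rationale using only facts from the context."
    )
    # Stage 2: verify the rationale against the context; refine if unsupported.
    for _ in range(max_refinements):
        verdict = call_llm(
            f"Context:\n{context}\n\nRationale:\n{rationale}\n"
            "Is every claim in the rationale supported by the context? "
            "Answer SUPPORTED, or list the unsupported claims."
        )
        if verdict.strip().upper().startswith("SUPPORTED"):
            break
        rationale = call_llm(
            f"Context:\n{context}\n\nRationale:\n{rationale}\n"
            f"Reviewer feedback:\n{verdict}\n"
            "Rewrite the rationale so every claim is supported by the context."
        )
    # Stage 3: answer conditioned on the verified rationale, which constrains
    # the model to claims that survived verification.
    return call_llm(
        f"Context:\n{context}\n\nVerified rationale:\n{rationale}\n"
        f"Question: {question}\nGive a concise, factual answer."
    )
```

    Conditioning the final answer on a verified rationale, rather than on the raw context alone, is what ties the reported faithfulness gains to the intermediate verification step.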