
Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering

Authors

Listed:
  • Tobias Schimanski

    (University of Zurich)

  • Jingwei Ni

    (ETH Zurich)

  • Mathias Kraus

    (Friedrich-Alexander-Universität Erlangen-Nürnberg)

  • Elliott Ash

    (ETH Zurich)

  • Markus Leippold

    (University of Zurich; Swiss Finance Institute)

Abstract

Advances towards more faithful and traceable answers from Large Language Models (LLMs) are crucial for many research and practical endeavors. One avenue towards this goal is basing answers on reliable sources. However, this Evidence-Based QA has proven to work insufficiently with LLMs in terms of citing the correct sources (source quality) and truthfully representing the information within sources (answer attributability). In this work, we systematically investigate how to robustly fine-tune LLMs for better source quality and answer attributability. Specifically, we introduce a data generation pipeline with automated data quality filters, which can synthesize diversified, high-quality training and testing data at scale. We further introduce four test sets to benchmark the robustness of fine-tuned specialist models. Extensive evaluation shows that fine-tuning on synthetic data improves performance in both in- and out-of-distribution settings. Furthermore, we show that data quality, which can be drastically improved by the proposed quality filters, matters more than quantity in improving Evidence-Based QA.
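The abstract's two failure modes (citing wrong sources and misrepresenting source content) suggest what an automated data quality filter might check. The sketch below is an illustrative assumption, not the paper's actual pipeline: it keeps only synthetic QA samples whose citation markers point to real provided sources and whose answer text lexically overlaps with the cited passages. The sample schema, citation format `[1]`, and overlap heuristic are all hypothetical.

```python
import re

def filter_synthetic_samples(samples, min_overlap=0.3):
    """Keep synthetic QA samples that pass two illustrative quality checks.

    Each sample is a dict (hypothetical schema) with keys:
      - "sources": dict mapping citation IDs (e.g. "[1]") to passage text
      - "answer":  generated answer containing citation markers like "[1]"
    """
    kept = []
    for sample in samples:
        cited = set(re.findall(r"\[\d+\]", sample["answer"]))
        # Check 1 (source quality proxy): the answer must cite something,
        # and every citation must point to a source that was actually provided.
        if not cited or not cited <= set(sample["sources"]):
            continue
        # Check 2 (attributability proxy): a rough lexical-overlap test
        # between the answer and the union of its cited passages.
        answer_tokens = set(re.sub(r"\[\d+\]", "", sample["answer"]).lower().split())
        source_tokens = set(
            " ".join(sample["sources"][c] for c in cited).lower().split()
        )
        overlap = len(answer_tokens & source_tokens) / max(len(answer_tokens), 1)
        if overlap >= min_overlap:
            kept.append(sample)
    return kept
```

A real pipeline would likely replace the lexical-overlap heuristic with a model-based entailment or attribution check, but the structure (generate at scale, then discard samples failing source-validity and attributability filters) matches the quality-over-quantity finding described above.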

Suggested Citation

  • Tobias Schimanski & Jingwei Ni & Mathias Kraus & Elliott Ash & Markus Leippold, 2024. "Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering," Swiss Finance Institute Research Paper Series 24-94, Swiss Finance Institute.
  • Handle: RePEc:chf:rpseri:rp2494

    Download full text from publisher

    File URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4728973
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. When requesting a correction, please mention this item's handle: RePEc:chf:rpseri:rp2494.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Ridima Mittal. General contact details of provider: https://edirc.repec.org/data/fameech.html .
