
How to Choose a Threshold for an Evaluation Metric for Large Language Models

Author

Listed:
  • Bhaskarjit Sarmah
  • Mingshu Li
  • Jingrao Lyu
  • Sebastian Frank
  • Nathalia Castellanos
  • Stefano Pasquali
  • Dhagash Mehta

Abstract

To ensure the reliability of large language models (LLMs) and to monitor them in deployment, various evaluation metrics have been proposed in the literature. However, little research prescribes a methodology for identifying a robust threshold on these metrics, even though an incorrect choice of threshold can have serious consequences when LLMs are deployed. Translating traditional model risk management (MRM) guidelines from regulated industries such as the financial industry, we propose a step-by-step recipe for picking a threshold for a given LLM evaluation metric. We emphasize that such a methodology should begin by identifying the risks of the LLM application under consideration and the risk tolerance of its stakeholders. We then propose concrete, statistically rigorous procedures for determining a threshold for a given LLM evaluation metric using available ground-truth data. As a concrete demonstration of the proposed methodology at work, we apply it to the Faithfulness metric, as implemented in various publicly available libraries, using the publicly available HaluBench dataset. We also lay a foundation for systematic approaches to threshold selection, not only for LLMs but for any GenAI application.
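The abstract does not reproduce the paper's exact statistical procedure, so the sketch below is only an illustration of the general idea under stated assumptions: given ground-truth labels (faithful vs. hallucinated) and metric scores in [0, 1], sweep candidate thresholds, keep those whose false-accept rate (hallucinated answers that pass) stays within a stakeholder risk tolerance, and among those pick the one retaining the most faithful answers. The function name choose_threshold, the max_false_accept parameter, and the synthetic data are all hypothetical and are not taken from the paper or from HaluBench.

    import numpy as np

    def choose_threshold(scores, labels, max_false_accept=0.05):
        """Sweep candidate thresholds on a faithfulness-style metric.

        scores : metric values in [0, 1]; higher means more faithful.
        labels : ground truth; 1 = faithful, 0 = hallucinated.
        max_false_accept : assumed risk tolerance, i.e. the maximum
            acceptable fraction of hallucinated answers that still pass.
        """
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        best_t, best_recall = None, -1.0
        for t in np.unique(scores):
            accepted = scores >= t
            # Fraction of hallucinated answers wrongly accepted at threshold t.
            false_accept = accepted[labels == 0].mean()
            # Fraction of genuinely faithful answers retained at threshold t.
            recall = accepted[labels == 1].mean()
            if false_accept <= max_false_accept and recall > best_recall:
                best_t, best_recall = t, recall
        return best_t, best_recall

    # Hypothetical usage on synthetic scores (not HaluBench data):
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    scores = np.clip(rng.normal(0.35 + 0.4 * labels, 0.15), 0.0, 1.0)
    t, recall = choose_threshold(scores, labels, max_false_accept=0.05)
    print(f"chosen threshold = {t:.3f}, faithful-answer recall = {recall:.3f}")

Since the paper emphasizes statistically rigorous procedures, a fuller treatment would presumably also attach uncertainty to the chosen threshold, for example by repeating the sweep over bootstrap resamples of the labeled data; the point-estimate sweep above is only the simplest starting point.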

Suggested Citation

  • Bhaskarjit Sarmah & Mingshu Li & Jingrao Lyu & Sebastian Frank & Nathalia Castellanos & Stefano Pasquali & Dhagash Mehta, 2024. "How to Choose a Threshold for an Evaluation Metric for Large Language Models," Papers 2412.12148, arXiv.org.
  • Handle: RePEc:arx:papers:2412.12148

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2412.12148
    File Function: Latest version
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2412.12148. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.