
Understanding and Addressing AI Hallucinations in Healthcare and Life Sciences

Author

Listed:
  • Aditya Gadiko

Abstract

Purpose: This paper investigates the phenomenon of "AI hallucinations" in healthcare and life sciences, where large language models (LLMs) produce outputs that, while coherent, are factually incorrect, irrelevant, or misleading. Understanding and mitigating such errors is critical given the high stakes of accurate and reliable information in these fields. We classify hallucinations into three types (input-conflicting, context-conflicting, and fact-conflicting) and examine their implications through real-world cases.

Methodology: Our methodology combines the Fact Score, Med-HALT, and adversarial testing to evaluate the fidelity of AI outputs. We propose several mitigation strategies, including Retrieval-Augmented Generation (RAG), Chain-of-Verification (CoVe), and Human-in-the-Loop (HITL) systems, to enhance model reliability.

Findings: As artificial intelligence permeates ever more sectors of society, hallucinations in AI-generated text pose significant challenges, especially in contexts where precision and reliability are paramount. This paper delineates the types of hallucination commonly observed in AI systems (input-conflicting, context-conflicting, and fact-conflicting) and highlights their potential to undermine trust and efficacy in critical domains such as healthcare and legal proceedings.

Unique contribution to theory, policy and practice: This study's unique contribution lies in its comprehensive analysis of the types and impacts of AI hallucinations and in the development of robust controls that advance theoretical understanding, practical application, and policy formulation in AI deployment. These efforts aim to foster safer, more effective AI integration across the healthcare and life sciences sectors.
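
Illustrative sketches. The abstract names its mitigation strategies only at a high level; the two short Python sketches below show the general shape of the first two. Both are hedged illustrations, not the paper's implementations: the corpus, the lexical scoring, the prompt wording, and the `llm` callable are all assumptions introduced here.

A minimal Retrieval-Augmented Generation (RAG) sketch: the model is constrained to answer from retrieved evidence rather than from its parametric memory, which targets fact-conflicting hallucinations. A production system would replace the word-overlap scorer with dense embeddings and send the assembled prompt to a real model.

```python
# Minimal RAG sketch (illustrative assumption; not the paper's implementation).
# Grounding answers in retrieved passages, and instructing the model to abstain
# when the evidence is insufficient, targets fact-conflicting hallucinations.

import math
from collections import Counter

# Toy in-memory corpus (assumed; a real system would index clinical sources).
CORPUS = [
    "Metformin is a first-line oral medication for type 2 diabetes.",
    "Warfarin dosing requires regular INR monitoring.",
    "Amoxicillin is a penicillin-class antibiotic.",
]

def lexical_score(query: str, doc: str) -> float:
    """Crude word-overlap relevance score; stands in for embedding similarity."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / math.sqrt(len(doc.split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most relevant to the query."""
    return sorted(CORPUS, key=lambda doc: lexical_score(query, doc), reverse=True)[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved evidence."""
    evidence = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the evidence below. If the evidence is "
        "insufficient, reply exactly: I don't know.\n"
        f"Evidence:\n{evidence}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The assembled prompt would be sent to any text-generation model.
    print(build_grounded_prompt("What is the first-line drug for type 2 diabetes?"))
```

Chain-of-Verification (CoVe), by contrast, intervenes at generation time: the model drafts an answer, plans verification questions, answers them independently, and then revises the draft. A compact sketch, assuming only a generic prompt-in/text-out `llm` callable (hypothetical interface):

```python
from typing import Callable

def chain_of_verification(question: str, llm: Callable[[str], str]) -> str:
    """Sketch of the CoVe loop: draft -> plan checks -> verify -> revise.
    `llm` is any prompt-in/text-out model call (assumed interface)."""
    draft = llm(f"Answer concisely: {question}")
    checks = llm("List each factual claim in this answer as a standalone "
                 f"verification question:\n{draft}")
    verdicts = llm("Answer each question independently, without assuming "
                   f"the draft is correct:\n{checks}")
    return llm("Revise the draft so every claim agrees with the verified "
               "answers; drop anything unsupported.\n"
               f"Draft: {draft}\nVerification Q&A: {verdicts}\nRevised answer:")
```

A Human-in-the-Loop (HITL) layer would then route any answer the verification step flags, or any abstention, to a clinician for review before release.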

Suggested Citation

  • Aditya Gadiko, 2024. "Understanding and Addressing AI Hallucinations in Healthcare and Life Sciences," International Journal of Health Sciences, CARI Journals Limited, vol. 7(3), pages 1-11.
  • Handle: RePEc:bhx:ojijhs:v:7:y:2024:i:3:p:1-11:id:1862

    Download full text from publisher

    File URL: https://carijournals.org/journals/index.php/IJHS/article/view/1862/2238
    Download Restriction: no
