Printed from https://ideas.repec.org/p/ehl/lserod/126674.html

Benchmarking OpenAI's APIs and other Large Language Models for repeatable and efficient question answering across multiple documents

Authors
  • Filipovska, Elena
  • Mladenovska, Ana
  • Bajrami, Merxhan
  • Dobreva, Jovana
  • Hillman, Velislava
  • Lameski, Petre
  • Zdravevski, Eftim

Abstract

The rapid growth in the volume and complexity of documents across various domains necessitates advanced automated methods to enhance the efficiency and accuracy of information extraction and analysis. This paper evaluates the efficiency and repeatability of OpenAI's APIs and other Large Language Models (LLMs) in automating question-answering tasks across multiple documents, focusing on the Data Privacy Policy (DPP) documents of selected EdTech providers. We test how well these models perform on large-scale text-processing tasks using OpenAI's models (GPT-3.5 Turbo, GPT-4, GPT-4o) and APIs within several frameworks: direct API calls (i.e., one-shot learning), LangChain, and Retrieval-Augmented Generation (RAG) systems. We also evaluate a local deployment of quantized LLMs (Llama-2-13B-chat-GPTQ) combined with FAISS. Through systematic evaluation against predefined use cases and a range of metrics, including response format, execution time, and cost, our study aims to provide insights into optimal practices for document analysis. Our findings demonstrate that calling OpenAI's LLMs via the API is a workable approach for accelerating document analysis when local GPU-powered infrastructure is not viable, particularly for long texts, while local deployment remains valuable for keeping data within private infrastructure. The quantized models retain substantial relevance despite having fewer parameters than ChatGPT, and they impose no processing restrictions on the number of tokens. Beyond confirming the usefulness of LLMs in improving document-analysis procedures, this study offers insights into maximizing their use for better efficiency and data governance.
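The RAG setup described in the abstract follows a standard pattern: policy documents are split into chunks, each chunk is embedded into a vector index (FAISS in the paper), and the chunks most similar to a question are retrieved and passed to the LLM as context. The sketch below illustrates only the retrieval step, with a toy bag-of-words similarity standing in for real embeddings and the FAISS index; the `policy_chunks` texts and all function names are illustrative, not taken from the paper.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the paper's pipeline uses real embedding
    # vectors stored in a FAISS index instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(chunks, question, k=2):
    # Rank document chunks by similarity to the question and keep the top k;
    # in a full RAG system these chunks become the LLM's context window.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

policy_chunks = [
    "Personal data is retained for two years after account closure.",
    "The service uses cookies for analytics purposes.",
    "Data may be shared with third-party processors under contract.",
]
top = retrieve(policy_chunks, "How long is personal data retained?", k=1)
print(top[0])
```

Because retrieval narrows the prompt to the most relevant chunks, this design is what lets RAG pipelines sidestep per-request token limits that constrain one-shot API calls over long documents.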

Suggested Citation

  • Filipovska, Elena & Mladenovska, Ana & Bajrami, Merxhan & Dobreva, Jovana & Hillman, Velislava & Lameski, Petre & Zdravevski, Eftim, 2024. "Benchmarking OpenAI's APIs and other Large Language Models for repeatable and efficient question answering across multiple documents," LSE Research Online Documents on Economics 126674, London School of Economics and Political Science, LSE Library.
  • Handle: RePEc:ehl:lserod:126674

    Download full text from publisher

    File URL: http://eprints.lse.ac.uk/126674/
    File Function: Open access version.
    Download Restriction: no

    More about this item

    Keywords

    few-shot learning Q&A; GPT; LangChain; Large Language Models; Llama; LLM; multi-document; one-shot learning; OpenAI; QA; RAG

    JEL classification:

    • J50 - Labor and Demographic Economics - - Labor-Management Relations, Trade Unions, and Collective Bargaining - - - General
