
A Structured Narrative Prompt for Prompting Narratives from Large Language Models: Sentiment Assessment of ChatGPT-Generated Narratives and Real Tweets

Author

Listed:
  • Christopher J. Lynch

    (Virginia Modeling, Analysis, and Simulation Center, Old Dominion University, 1030 University Blvd., Suffolk, VA 23435, USA)

  • Erik J. Jensen

    (Computational Modeling and Simulation Engineering Department, Old Dominion University, Norfolk, VA 23508, USA)

  • Virginia Zamponi

    (Virginia Modeling, Analysis, and Simulation Center, Old Dominion University, 1030 University Blvd., Suffolk, VA 23435, USA)

  • Kevin O’Brien

    (Virginia Modeling, Analysis, and Simulation Center, Old Dominion University, 1030 University Blvd., Suffolk, VA 23435, USA)

  • Erika Frydenlund

    (Virginia Modeling, Analysis, and Simulation Center, Old Dominion University, 1030 University Blvd., Suffolk, VA 23435, USA)

  • Ross Gore

    (Virginia Modeling, Analysis, and Simulation Center, Old Dominion University, 1030 University Blvd., Suffolk, VA 23435, USA)

Abstract

Large language models (LLMs) excel at providing natural language responses that sound authoritative, reflect knowledge of the context area, and can present a range of varied perspectives. Agent-based models and simulations consist of simulated agents that interact within a simulated environment to explore societal, social, ethical, and other problems. Simulated agents generate large volumes of data, and discerning useful and relevant content is an onerous task. LLMs can help communicate agents’ perspectives on key life events by providing natural language narratives. However, these narratives should be factual, transparent, and reproducible. Therefore, we present a structured narrative prompt for sending queries to LLMs, experiment with the narrative generation process using OpenAI’s ChatGPT, and assess statistically significant differences across 11 Positive and Negative Affect Schedule (PANAS) sentiment levels between the generated narratives and real tweets using chi-squared tests and Fisher’s exact tests. The narrative prompt structure effectively yields narratives with the desired components from ChatGPT. In four out of forty-four categories, ChatGPT generated narratives with sentiment scores that were not discernibly different, in terms of statistical significance (α = 0.05), from the sentiment expressed in real tweets. Three outcomes are provided: (1) a list of benefits and challenges for LLMs in narrative generation; (2) a structured prompt for requesting narratives from an LLM chatbot based on simulated agents’ information; and (3) an assessment of statistical significance in the sentiment prevalence of the generated narratives compared to real tweets. These results indicate significant promise in using LLMs to help connect a simulated agent’s experiences with real people.
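The sentiment comparison described in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the authors’ implementation; it assumes hypothetical counts of ChatGPT-generated narratives and real tweets expressing a single PANAS sentiment level, builds a 2×2 contingency table, and applies SciPy’s chi-squared and Fisher’s exact tests at α = 0.05, roughly mirroring the kind of prevalence comparison reported in the paper.

```python
# Minimal sketch (hypothetical counts, not the authors' data or code):
# compare the prevalence of one PANAS sentiment level between
# ChatGPT-generated narratives and real tweets.
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 contingency table:
#                     expresses sentiment    does not
# generated narratives        120               380
# real tweets                 150               350
table = [[120, 380],
         [150, 350]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

alpha = 0.05
print(f"chi-squared p = {p_chi2:.4f}, Fisher's exact p = {p_fisher:.4f}")

# Fisher's exact test is preferred when expected cell counts are small.
p_value = p_fisher if expected.min() < 5 else p_chi2
if p_value < alpha:
    print("Sentiment prevalence differs significantly between the two sources.")
else:
    print("No statistically discernible difference in sentiment prevalence.")
```

In the paper, a test of this form would be repeated for each of the 11 PANAS sentiment levels across the experimental categories; the specific counts and category structure above are placeholders.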

Suggested Citation

  • Christopher J. Lynch & Erik J. Jensen & Virginia Zamponi & Kevin O’Brien & Erika Frydenlund & Ross Gore, 2023. "A Structured Narrative Prompt for Prompting Narratives from Large Language Models: Sentiment Assessment of ChatGPT-Generated Narratives and Real Tweets," Future Internet, MDPI, vol. 15(12), pages 1-36, November.
  • Handle: RePEc:gam:jftint:v:15:y:2023:i:12:p:375-:d:1286429

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/15/12/375/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/15/12/375/
    Download Restriction: no

    References listed on IDEAS

    1. Keiki Takadama & Tetsuro Kawai & Yuhsuke Koyama, 2008. "Micro- and Macro-Level Validation in Agent-Based Simulation: Reproduction of Human-Like Behaviors and Thinking in a Sequential Bargaining Game," Journal of Artificial Societies and Social Simulation, Journal of Artificial Societies and Social Simulation, vol. 11(2), pages 1-9.
    2. Edisa Lozić & Benjamin Štular, 2023. "Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities," Future Internet, MDPI, vol. 15(10), pages 1-26, October.
    3. Zoltán Szabó & Vilmos Bilicki, 2023. "A New Approach to Web Application Security: Utilizing GPT Language Models for Source Code Inspection," Future Internet, MDPI, vol. 15(10), pages 1-27, September.
    4. Robert Axelrod, 1997. "Advancing the Art of Simulation in the Social Sciences," Working Papers 97-05-048, Santa Fe Institute.
    5. Daniel Kornhauser & Uri Wilensky & William Rand, 2009. "Design Guidelines for Agent Based Model Visualization," Journal of Artificial Societies and Social Simulation, Journal of Artificial Societies and Social Simulation, vol. 12(2), pages 1-1.
    6. Spyros Makridakis & Fotios Petropoulos & Yanfei Kang, 2023. "Large Language Models: Their Success and Impact," Forecasting, MDPI, vol. 5(3), pages 1-14, August.
    7. Eva A. M. van Dis & Johan Bollen & Willem Zuidema & Robert van Rooij & Claudi L. Bockting, 2023. "ChatGPT: five priorities for research," Nature, Nature, vol. 614(7947), pages 224-226, February.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Chiarello, Filippo & Giordano, Vito & Spada, Irene & Barandoni, Simone & Fantoni, Gualtiero, 2024. "Future applications of generative large language models: A data-driven case study on ChatGPT," Technovation, Elsevier, vol. 133(C).
    2. Matteo Richiardi, 2003. "The Promises and Perils of Agent-Based Computational Economics," LABORatorio R. Revelli Working Papers Series 29, LABORatorio R. Revelli, Centre for Employment Studies.
    3. Ghio, Alessandro, 2024. "Democratizing academic research with Artificial Intelligence: The misleading case of language," Critical Perspectives on Accounting, Elsevier, vol. 98(C).
    4. Johannes Dahlke & Kristina Bogner & Matthias Mueller & Thomas Berger & Andreas Pyka & Bernd Ebersberger, 2020. "Is the Juice Worth the Squeeze? Machine Learning (ML) In and For Agent-Based Modelling (ABM)," Papers 2003.11985, arXiv.org.
    5. Ching-Nam Hang & Pei-Duo Yu & Roberto Morabito & Chee-Wei Tan, 2024. "Large Language Models Meet Next-Generation Networking Technologies: A Review," Future Internet, MDPI, vol. 16(10), pages 1-29, October.
    6. Evangelos Katsamakas & Oleg V. Pavlov & Ryan Saklad, 2024. "Artificial intelligence and the transformation of higher education institutions," Papers 2402.08143, arXiv.org.
    7. Günter Küppers & Johannes Lenhard, 2005. "Validation of Simulation: Patterns in the Social and Natural Sciences," Journal of Artificial Societies and Social Simulation, Journal of Artificial Societies and Social Simulation, vol. 8(4), pages 1-3.
    8. Andrew W. Bausch, 2014. "Evolving intergroup cooperation," Computational and Mathematical Organization Theory, Springer, vol. 20(4), pages 369-393, December.
    9. Ulfia A. Lenfers & Julius Weyl & Thomas Clemen, 2018. "Firewood Collection in South Africa: Adaptive Behavior in Social-Ecological Models," Land, MDPI, vol. 7(3), pages 1-17, August.
    10. Hedström, Peter & Wennberg, Karl, 2016. "Causal Mechanisms in Organization and Innovation Studies," Ratio Working Papers 284, The Ratio Institute.
    11. Giannoccaro, Ilaria, 2015. "Adaptive supply chains in industrial districts: A complexity science approach focused on learning," International Journal of Production Economics, Elsevier, vol. 170(PB), pages 576-589.
    12. Flaminio Squazzoni, 2010. "The impact of agent-based models in the social sciences after 15 years of incursions," History of Economic Ideas, Fabrizio Serra Editore, Pisa - Roma, vol. 18(2), pages 197-234.
    13. Pietro Terna, 2000. "Sum: A Surprising (Un)Realistic Market - Building A Simple Stock Market Structure With Swarm," Computing in Economics and Finance 2000 173, Society for Computational Economics.
    14. Sakhi Aggrawal & Alejandra J. Magana, 2024. "Teamwork Conflict Management Training and Conflict Resolution Practice via Large Language Models," Future Internet, MDPI, vol. 16(5), pages 1-25, May.
    15. Emilio Ferrara, 2024. "GenAI against humanity: nefarious applications of generative artificial intelligence and large language models," Journal of Computational Social Science, Springer, vol. 7(1), pages 549-569, April.
    16. Alin Zamfiroiu & Denisa Vasile & Daniel Savu, 2023. "ChatGPT – A Systematic Review of Published Research Papers," Informatica Economica, Academy of Economic Studies - Bucharest, Romania, vol. 27(1), pages 5-16.
    17. Newbery, Robert & Lean, Jonathan & Moizer, Jonathan & Haddoud, Mohamed, 2018. "Entrepreneurial identity formation during the initial entrepreneurial experience: The influence of simulation feedback and existing identity," Journal of Business Research, Elsevier, vol. 85(C), pages 51-59.
    18. Thitithep Sitthiyot, 2021. "Macroeconomic and financial management in an uncertain world: What can we learn from complexity science?," Papers 2112.15294, arXiv.org.
    19. Lilian N. Alessa & Melinda Laituri & C. Michael Barton, 2006. "An "All Hands" Call to the Social Science Community: Establishing a Community Framework for Complexity Modeling Using Agent Based Models and Cyberinfrastructure," Journal of Artificial Societies and Social Simulation, Journal of Artificial Societies and Social Simulation, vol. 9(4), pages 1-6.
    20. Shahad Al-Khalifa & Fatima Alhumaidhi & Hind Alotaibi & Hend S. Al-Khalifa, 2023. "ChatGPT across Arabic Twitter: A Study of Topics, Sentiments, and Sarcasm," Data, MDPI, vol. 8(11), pages 1-19, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:15:y:2023:i:12:p:375-:d:1286429. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.