
GenAI against humanity: nefarious applications of generative artificial intelligence and large language models

Author

Listed:
  • Emilio Ferrara

    (University of Southern California)

Abstract

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are marvels of technology; celebrated for their prowess in natural language processing and multimodal content generation, they promise a transformative future. But as with all powerful tools, they come with their shadows. Picture living in a world where deepfakes are indistinguishable from reality, where synthetic identities orchestrate malicious campaigns, and where targeted misinformation or scams are crafted with unparalleled precision. Welcome to the darker side of GenAI applications. This article is not just a journey through the meanders of potential misuse of GenAI and LLMs, but also a call to recognize the urgency of the challenges ahead. As we navigate the seas of misinformation campaigns, malicious content generation, and the eerie creation of sophisticated malware, we’ll uncover the societal implications that ripple through the GenAI revolution we are witnessing. From AI-powered botnets on social media platforms to the unnerving potential of AI to generate fabricated identities, or alibis made of synthetic realities, the stakes have never been higher. The lines between the virtual and the real worlds are blurring, and the consequences of potential GenAI’s nefarious applications impact us all. This article serves both as a synthesis of rigorous research presented on the risks of GenAI and misuse of LLMs and as a thought-provoking vision of the different types of harmful GenAI applications we might encounter in the near future, and some ways we can prepare for them.

Suggested Citation

  • Emilio Ferrara, 2024. "GenAI against humanity: nefarious applications of generative artificial intelligence and large language models," Journal of Computational Social Science, Springer, vol. 7(1), pages 549-569, April.
  • Handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00250-1
    DOI: 10.1007/s42001-024-00250-1

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s42001-024-00250-1
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s42001-024-00250-1?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Nils Köbis & Jean-François Bonnefon & Iyad Rahwan, 2021. "Bad machines corrupt good morals," Nature Human Behaviour, Nature, vol. 5(6), pages 679-685, June.
    2. María Agustina Ricci Lara & Rodrigo Echeveste & Enzo Ferrante, 2022. "Addressing fairness in artificial intelligence for medical imaging," Nature Communications, Nature, vol. 13(1), pages 1-6, December.
    3. Eva A. M. van Dis & Johan Bollen & Willem Zuidema & Robert van Rooij & Claudi L. Bockting, 2023. "ChatGPT: five priorities for research," Nature, Nature, vol. 614(7947), pages 224-226, February.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Evangelos Katsamakas & Oleg V. Pavlov & Ryan Saklad, 2024. "Artificial intelligence and the transformation of higher education institutions," Papers 2402.08143, arXiv.org.
    2. Elias Fernández Domingos & Inês Terrucha & Rémi Suchon & Jelena Grujić & Juan Burguillo & Francisco Santos & Tom Lenaerts, 2022. "Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma," Post-Print hal-04296038, HAL.
    3. Shahad Al-Khalifa & Fatima Alhumaidhi & Hind Alotaibi & Hend S. Al-Khalifa, 2023. "ChatGPT across Arabic Twitter: A Study of Topics, Sentiments, and Sarcasm," Data, MDPI, vol. 8(11), pages 1-19, November.
    4. Nur Ashfaraliana Abd Hadi & Faizah Mohamad & Elia Md Johar & Zaemah Abdul Kadir, 2024. "Exploring the Acceptance of ChatGPT as an Assisting Tool in Academic Writing among ESL Undergraduate Students," International Journal of Research and Innovation in Social Science, International Journal of Research and Innovation in Social Science (IJRISS), vol. 8(10), pages 2886-2901, October.
    5. Lukas Lanz & Roman Briker & Fabiola H. Gerpott, 2024. "Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning," Journal of Business Ethics, Springer, vol. 189(3), pages 625-646, January.
    6. Peres, Renana & Schreier, Martin & Schweidel, David & Sorescu, Alina, 2023. "On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice," International Journal of Research in Marketing, Elsevier, vol. 40(2), pages 269-275.
    7. Fabio Motoki & Valdemar Pinho Neto & Victor Rodrigues, 2024. "More human than human: measuring ChatGPT political bias," Public Choice, Springer, vol. 198(1), pages 3-23, January.
    8. von Schenk, Alicia & Klockmann, Victor & Bonnefon, Jean-François & Rahwan, Iyad & Köbis, Nils, 2023. "Lie-detection algorithms attract few users but vastly increase accusation rates," TSE Working Papers 23-1448, Toulouse School of Economics (TSE).
    9. Feng, Jianghong & Ning, Yu & Wang, Zhaohua & Li, Guo & Xiu Xu, Su, 2024. "ChatGPT-enabled two-stage auctions for electric vehicle battery recycling," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 183(C).
    10. Lechardoy, Lucie & López Forés, Laura & Codagnone, Cristiano, 2023. "Artificial intelligence at the workplace and the impacts on work organisation, working conditions and ethics," 32nd European Regional ITS Conference, Madrid 2023: Realising the digital decade in the European Union – Easier said than done? 277997, International Telecommunications Society (ITS).
    11. Yanwei You & Yuquan Chen & Yujun You & Qi Zhang & Qiang Cao, 2023. "Evolutionary Game Analysis of Artificial Intelligence Such as the Generative Pre-Trained Transformer in Future Education," Sustainability, MDPI, vol. 15(12), pages 1-12, June.
    12. Christopher J. Lynch & Erik J. Jensen & Virginia Zamponi & Kevin O’Brien & Erika Frydenlund & Ross Gore, 2023. "A Structured Narrative Prompt for Prompting Narratives from Large Language Models: Sentiment Assessment of ChatGPT-Generated Narratives and Real Tweets," Future Internet, MDPI, vol. 15(12), pages 1-36, November.
    13. Ma, Xiaoyue & Huo, Yudi, 2023. "Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework," Technology in Society, Elsevier, vol. 75(C).
    14. Giordano, Vito & Spada, Irene & Chiarello, Filippo & Fantoni, Gualtiero, 2024. "The impact of ChatGPT on human skills: A quantitative study on twitter data," Technological Forecasting and Social Change, Elsevier, vol. 203(C).
    15. Ghio, Alessandro, 2024. "Democratizing academic research with Artificial Intelligence: The misleading case of language," CRITICAL PERSPECTIVES ON ACCOUNTING, Elsevier, vol. 98(C).
    16. Werner, Tobias, 2021. "Algorithmic and human collusion," DICE Discussion Papers 372, Heinrich Heine University Düsseldorf, Düsseldorf Institute for Competition Economics (DICE).
    17. Alin ZAMFIROIU & Denisa VASILE & Daniel SAVU, 2023. "ChatGPT – A Systematic Review of Published Research Papers," Informatica Economica, Academy of Economic Studies - Bucharest, Romania, vol. 27(1), pages 5-16.
    18. Alicia von Schenk & Victor Klockmann & Jean-Franc{c}ois Bonnefon & Iyad Rahwan & Nils Kobis, 2022. "Lie detection algorithms attract few users but vastly increase accusation rates," Papers 2212.04277, arXiv.org.
    19. Roberto Araya, 2023. "Connecting Classrooms with Online Interclass Tournaments: A Strategy to Imitate, Recombine and Innovate Teaching Practices," Sustainability, MDPI, vol. 15(10), pages 1-25, May.
    20. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00250-1. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.