
GenAI against humanity: nefarious applications of generative artificial intelligence and large language models

Author

  • Emilio Ferrara (University of Southern California)

Abstract

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are marvels of technology; celebrated for their prowess in natural language processing and multimodal content generation, they promise a transformative future. But as with all powerful tools, they come with their shadows. Picture living in a world where deepfakes are indistinguishable from reality, where synthetic identities orchestrate malicious campaigns, and where targeted misinformation or scams are crafted with unparalleled precision. Welcome to the darker side of GenAI applications. This article is not just a journey through the meanders of potential misuse of GenAI and LLMs, but also a call to recognize the urgency of the challenges ahead. As we navigate the seas of misinformation campaigns, malicious content generation, and the eerie creation of sophisticated malware, we’ll uncover the societal implications that ripple through the GenAI revolution we are witnessing. From AI-powered botnets on social media platforms to the unnerving potential of AI to generate fabricated identities, or alibis made of synthetic realities, the stakes have never been higher. The lines between the virtual and the real worlds are blurring, and the consequences of GenAI’s potential nefarious applications affect us all. This article serves both as a synthesis of rigorous research on the risks of GenAI and the misuse of LLMs, and as a thought-provoking vision of the different types of harmful GenAI applications we might encounter in the near future, along with some ways we can prepare for them.

Suggested Citation

  • Emilio Ferrara, 2024. "GenAI against humanity: nefarious applications of generative artificial intelligence and large language models," Journal of Computational Social Science, Springer, vol. 7(1), pages 549-569, April.
  • Handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00250-1
    DOI: 10.1007/s42001-024-00250-1

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s42001-024-00250-1
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s42001-024-00250-1?utm_source=ideas
LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
1. Ghio, Alessandro, 2024. "Democratizing academic research with Artificial Intelligence: The misleading case of language," Critical Perspectives on Accounting, Elsevier, vol. 98(C).
    2. Evangelos Katsamakas & Oleg V. Pavlov & Ryan Saklad, 2024. "Artificial intelligence and the transformation of higher education institutions," Papers 2402.08143, arXiv.org.
    3. Werner, Tobias, 2021. "Algorithmic and human collusion," DICE Discussion Papers 372, Heinrich Heine University Düsseldorf, Düsseldorf Institute for Competition Economics (DICE).
4. Alin Zamfiroiu & Denisa Vasile & Daniel Savu, 2023. "ChatGPT – A Systematic Review of Published Research Papers," Informatica Economica, Academy of Economic Studies - Bucharest, Romania, vol. 27(1), pages 5-16.
    5. Elias Fernández Domingos & Inês Terrucha & Rémi Suchon & Jelena Grujić & Juan Burguillo & Francisco Santos & Tom Lenaerts, 2022. "Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma," Post-Print hal-04296038, HAL.
6. Alicia von Schenk & Victor Klockmann & Jean-François Bonnefon & Iyad Rahwan & Nils Köbis, 2022. "Lie detection algorithms attract few users but vastly increase accusation rates," Papers 2212.04277, arXiv.org.
    7. Shahad Al-Khalifa & Fatima Alhumaidhi & Hind Alotaibi & Hend S. Al-Khalifa, 2023. "ChatGPT across Arabic Twitter: A Study of Topics, Sentiments, and Sarcasm," Data, MDPI, vol. 8(11), pages 1-19, November.
    8. Roberto Araya, 2023. "Connecting Classrooms with Online Interclass Tournaments: A Strategy to Imitate, Recombine and Innovate Teaching Practices," Sustainability, MDPI, vol. 15(10), pages 1-25, May.
    9. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    10. Ching-Sheng Lin & Chung-Nan Tsai & Shao-Tang Su & Jung-Sing Jwo & Cheng-Hsiung Lee & Xin Wang, 2023. "Predictive Prompts with Joint Training of Large Language Models for Explainable Recommendation," Mathematics, MDPI, vol. 11(20), pages 1-12, October.
    11. Nuortimo, Kalle & Harkonen, Janne & Breznik, Kristijan, 2024. "Global, regional, and local acceptance of solar power," Renewable and Sustainable Energy Reviews, Elsevier, vol. 193(C).
    12. Lukas Lanz & Roman Briker & Fabiola H. Gerpott, 2024. "Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning," Journal of Business Ethics, Springer, vol. 189(3), pages 625-646, January.
    13. Merve Tunali & Hyunjoo Hong & Luis Mauricio Ortiz-Galvez & Jimeng Wu & Yiwen Zhang & David Mennekes & Barbora Pinlova & Danyang Jiang & Claudia Som & Bernd Nowack, 2023. "Conversational AI Tools for Environmental Topics: A Comparative Analysis of Different Tools and Languages for Microplastics, Tire Wear Particles, Engineered Nanoparticles and Advanced Materials," Sustainability, MDPI, vol. 15(19), pages 1-16, October.
    14. Ali, Omar & Murray, Peter A. & Momin, Mujtaba & Dwivedi, Yogesh K. & Malik, Tegwen, 2024. "The effects of artificial intelligence applications in educational settings: Challenges and strategies," Technological Forecasting and Social Change, Elsevier, vol. 199(C).
    15. Peres, Renana & Schreier, Martin & Schweidel, David & Sorescu, Alina, 2023. "On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice," International Journal of Research in Marketing, Elsevier, vol. 40(2), pages 269-275.
    16. Evangelos Katsamakas & Oleg V. Pavlov & Ryan Saklad, 2024. "Artificial Intelligence and the Transformation of Higher Education Institutions: A Systems Approach," Sustainability, MDPI, vol. 16(14), pages 1-22, July.
    17. Fabio Motoki & Valdemar Pinho Neto & Victor Rodrigues, 2024. "More human than human: measuring ChatGPT political bias," Public Choice, Springer, vol. 198(1), pages 3-23, January.
    18. Herhausen, Dennis & Bernritter, Stefan F. & Ngai, Eric W.T. & Kumar, Ajay & Delen, Dursun, 2024. "Machine learning in marketing: Recent progress and future research directions," Journal of Business Research, Elsevier, vol. 170(C).
    19. Köbis, Nils & Rahwan, Zoe & Bersch, Clara & Ajaj, Tamer & Bonnefon, Jean-François & Rahwan, Iyad, 2024. "Experimental evidence that delegating to intelligent machines can increase dishonest behaviour," OSF Preprints dnjgz, Center for Open Science.
    20. von Schenk, Alicia & Klockmann, Victor & Bonnefon, Jean-François & Rahwan, Iyad & Köbis, Nils, 2023. "Lie-detection algorithms attract few users but vastly increase accusation rates," IAST Working Papers 23-155, Institute for Advanced Study in Toulouse (IAST).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00250-1. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing. General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.