
A survey of explainable AI techniques for detection of fake news and hate speech on social media platforms

Authors

  • Vaishali U. Gongane

    (SCTR’s Pune Institute of Computer Technology, SPPU
    Pune Vidyarthi Griha’s College of Engineering and Technology & G K Pate (Wani) Institute of Management, SPPU)

  • Mousami V. Munot

    (SCTR’s Pune Institute of Computer Technology, SPPU)

  • Alwin D. Anuse

    (Dr Vishwanath Karad MIT-WPU)

Abstract

Artificial intelligence (AI) is a computing field that has played a pivotal role in delivering technological revolutions across sectors such as business, healthcare, finance, social networking, entertainment, and news. With its ability to process and analyze any form of data (image, text, audio, and video) on high-performance computing machines, AI is considered an integral part of Industry 4.0. Social media and the internet are another advance in digital communication that has had a tremendous impact on society. Social networking sites like Facebook, Twitter, YouTube, and Instagram give people a platform to freely express their thoughts and views. The past decade has witnessed an ugly side of social media: the dissemination of online fake news and hate speech. Social networking sites use AI tools to tackle the growing volume of hate speech and fake news content. Natural language processing (NLP), a field of AI, provides techniques for processing vast amounts of online content, combined with machine learning (ML) and deep learning (DL) algorithms that learn representations of data for detection, classification, and prediction tasks. AI algorithms are often considered "black boxes": the decisions they make are sometimes biased and lack transparency. Many state-of-the-art AI algorithms show low recall and low F1-scores on diverse forms of hate speech and fake news. The inadequacy of explanations for the decisions AI makes in classification and prediction tasks is a crucial challenge that needs to be addressed. Explainable AI (XAI) is an emerging research field that has added a new dimension to AI: explainability. XAI techniques interpret and explain the decisions made by ML models, a capability already deployed in applications such as autonomous vehicles and medical diagnostics. In the context of social media content, XAI plays an important role in interpreting the diverse forms of hate speech and fake news. The literature reports various XAI models, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), for detecting hate speech and fake news content. This paper explores XAI models for the detection and classification of hate speech and fake news on social media platforms as reported in the research literature, and reviews the evaluation metrics used to quantify XAI techniques in hate speech and fake news detection. It also examines the technical and ethical challenges that arise when XAI models are used to handle the nuances of online text published on social media platforms.
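
To make the kind of technique the survey covers concrete, the following is a minimal sketch, not taken from the paper, of LIME-style token attribution for a toy hate-speech classifier. It assumes the open-source lime and scikit-learn Python packages; the training snippets, labels, and model choice are invented here for illustration.

    # Toy demonstration of LIME on a text classifier (illustrative only;
    # the data, labels, and baseline model are assumptions, not the paper's).
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled snippets; a real study would use an annotated corpus.
    texts = [
        "I hate those people, they should leave",
        "what a lovely day at the park",
        "they are vermin and deserve nothing",
        "great match last night, well played",
    ]
    labels = [1, 0, 1, 0]  # 1 = hateful, 0 = benign

    # TF-IDF features + logistic regression: a common interpretable baseline.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # LIME perturbs the input text (dropping words) and fits a local linear
    # surrogate, yielding a per-token contribution to the predicted class.
    explainer = LimeTextExplainer(class_names=["benign", "hateful"])
    explanation = explainer.explain_instance(
        "those people are vermin", model.predict_proba, num_features=5
    )
    for token, weight in explanation.as_list():
        print(f"{token:>10s} {weight:+.3f}")  # positive pushes toward 'hateful'

The printed weights approximate each token's contribution to the "hateful" score for this one input; SHAP would produce analogous per-token attributions via Shapley values rather than a local linear surrogate.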

Suggested Citation

  • Vaishali U. Gongane & Mousami V. Munot & Alwin D. Anuse, 2024. "A survey of explainable AI techniques for detection of fake news and hate speech on social media platforms," Journal of Computational Social Science, Springer, vol. 7(1), pages 587-623, April.
  • Handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00248-9
    DOI: 10.1007/s42001-024-00248-9

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s42001-024-00248-9
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s42001-024-00248-9?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Stefan Feuerriegel & Mateusz Dolata & Gerhard Schwabe, 2020. "Fair AI," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 62(4), pages 379-384, August.
    2. Feras A. Batarseh & Munisamy Gopinath & Anderson Monken, 2020. "Artificial Intelligence Methods for Evaluating Global Trade Flows," International Finance Discussion Papers 1296, Board of Governors of the Federal Reserve System (U.S.).
    3. Michael A. Flynn & Pietra Check & Andrea L. Steege & Jacqueline M. Sivén & Laura N. Syron, 2021. "Health Equity and a Paradigm Shift in Occupational Safety and Health," IJERPH, MDPI, vol. 19(1), pages 1-13, December.
    4. Kevin Fauvel & Tao Lin & Véronique Masson & Élisa Fromont & Alexandre Termier, 2021. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification," Mathematics, MDPI, vol. 9(23), pages 1-19, December.
    5. Neukam, Marion & Bollinger, Sophie, 2022. "Encouraging creative teams to integrate a sustainable approach to technology," Journal of Business Research, Elsevier, vol. 150(C), pages 354-364.
    6. Damiano Brigo & Xiaoshan Huang & Andrea Pallavicini & Haitz Saez de Ocariz Borde, 2021. "Interpretability in deep learning for finance: a case study for the Heston model," Papers 2104.09476, arXiv.org.
    7. Charlene H. Chu & Simon Donato-Woodger & Shehroz S. Khan & Rune Nyrup & Kathleen Leslie & Alexandra Lyn & Tianyu Shi & Andria Bianchi & Samira Abbasgholizadeh Rahimi & Amanda Grenier, 2023. "Age-related bias and artificial intelligence: a scoping review," Palgrave Communications, Palgrave Macmillan, vol. 10(1), pages 1-17, December.
    8. Parmar, Janak & Das, Pritikana & Dave, Sanjaykumar M., 2021. "A machine learning approach for modelling parking duration in urban land-use," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 572(C).
    9. Chien-Wei Chuang & Ariana Chang & Mingchih Chen & Maria John P. Selvamani & Ben-Chang Shia, 2022. "A Worldwide Bibliometric Analysis of Publications on Artificial Intelligence and Ethics in the Past Seven Decades," Sustainability, MDPI, vol. 14(18), pages 1-13, September.
    10. Pelin Ayranci & Phung Lai & Nhathai Phan & Han Hu & Alexander Kolinowski & David Newman & Deijing Dou, 2022. "OnML: an ontology-based approach for interpretable machine learning," Journal of Combinatorial Optimization, Springer, vol. 44(1), pages 770-793, August.
    11. Sherwan Mohammed Najm & Imre Paniti, 2023. "Investigation and machine learning-based prediction of parametric effects of single point incremental forming on pillow effect and wall profile of AlMn1Mg1 aluminum alloy sheets," Journal of Intelligent Manufacturing, Springer, vol. 34(1), pages 331-367, January.
    12. Haraguchi, Masahiko & Funahashi, Tomomi & Biljecki, Filip, 2024. "Assessing governance implications of city digital twin technology: A maturity model approach," Technological Forecasting and Social Change, Elsevier, vol. 204(C).
    13. Davazdahemami, Behrooz & Kalgotra, Pankush & Zolbanin, Hamed M. & Delen, Dursun, 2023. "A developer-oriented recommender model for the app store: A predictive network analytics approach," Journal of Business Research, Elsevier, vol. 158(C).
    14. S. Van Cranenburgh & S. Wang & A. Vij & F. Pereira & J. Walker, 2021. "Choice modelling in the age of machine learning -- discussion paper," Papers 2101.11948, arXiv.org, revised Nov 2021.
    15. Andrew Sudmant & Vincent Viguié & Quentin Lepetit & Lucy Oates & Abhijit Datey & Andy Gouldson & David Watling, 2021. "Fair weather forecasting? The shortcomings of big data for sustainable development, a case study from Hubballi‐Dharwad, India," Sustainable Development, John Wiley & Sons, Ltd., vol. 29(6), pages 1237-1248, November.
    16. Kunal Pattanayak & Vikram Krishnamurthy, 2021. "Rationally Inattentive Utility Maximization for Interpretable Deep Image Classification," Papers 2102.04594, arXiv.org, revised Jul 2021.
    17. Gabriel Ferrettini & Elodie Escriva & Julien Aligon & Jean-Baptiste Excoffier & Chantal Soulé-Dupuy, 2022. "Coalitional Strategies for Efficient Individual Prediction Explanation," Information Systems Frontiers, Springer, vol. 24(1), pages 49-75, February.
    18. Minyoung Lee & Joohyoung Jeon & Hongchul Lee, 2022. "Explainable AI for domain experts: a post Hoc analysis of deep learning for defect classification of TFT–LCD panels," Journal of Intelligent Manufacturing, Springer, vol. 33(6), pages 1747-1759, August.
    19. Mark Gromowski & Michael Siebers & Ute Schmid, 2020. "A process framework for inducing and explaining Datalog theories," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 14(4), pages 821-835, December.
    20. Chuhan Wu & Fangzhao Wu & Tao Qi & Wei-Qiang Zhang & Xing Xie & Yongfeng Huang, 2022. "Removing AI’s sentiment manipulation of personalized news delivery," Palgrave Communications, Palgrave Macmillan, vol. 9(1), pages 1-9, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00248-9. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.