
Feature importance in the age of explainable AI: Case study of detecting fake news & misinformation via a multi-modal framework

Author

Listed:
  • Kumar, Ajay
  • Taylor, James W.

Abstract

In recent years, fake news has become a global phenomenon due to its explosive growth and its ability to leverage multimedia content to manipulate user opinions. Fake news is created by manipulating images, text, audio, and video, particularly on social media, and the proliferation of such disinformation can trigger detrimental societal effects. False forwarded messages can have a devastating impact on society, spreading propaganda, inciting violence, manipulating public opinion, and even influencing elections. A major shortcoming of existing fake news detection methods is their inability to simultaneously learn and extract features from two modalities and to train models on shared representations of multimodal (textual and visual) information. Feature engineering is a critical task in the machine learning (ML) development process for a fake news detection model. For ML models to be explainable and trusted, feature engineering should describe how the features used in the ML models contribute to making more accurate predictions. Feature engineering plays an important role in the development of an explainable AI system: by shaping the features used in the ML models, it directly affects the model's interpretability. In this research, we develop a fake news detection model in which we (1) identify several textual and visual features associated with fake or credible news; specifically, we extract features from article titles, contents, and top images; (2) investigate the role of all multimodal features (content-, emotion-, and manipulation-based) and combine their cumulative effects through feature engineering that represents the behavior of fake news propagators; and (3) develop a model to detect disinformation on benchmark multimodal datasets consisting of text and images. We conduct experiments on several real-world multimodal fake news datasets, and the results show that, on average, our model outperforms, by large margins, existing single-modality methods that do not use any feature optimization techniques.
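To make the multimodal idea in the abstract concrete, the sketch below shows one generic way to fuse textual features (from titles and contents) with image-derived features into a single shared representation before training one classifier. It is a minimal illustration only, assuming scikit-learn, toy data, and invented field names and visual cues; it is not the authors' implementation or feature set.

```python
# Minimal sketch of a multimodal (text + image) fake-news feature pipeline.
# All dataset fields, visual features, and the classifier are illustrative
# assumptions, not the paper's method.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy articles: (title, body text, [visual cues], label 1 = fake / 0 = credible).
# The three numbers stand in for image-derived cues, e.g. resolution, sharpness,
# and a manipulation score from error-level analysis (assumed features).
articles = [
    ("Shocking cure found",    "Doctors hate this trick ...", [0.20, 0.15, 0.90], 1),
    ("Budget bill passes",     "The senate approved ...",     [0.85, 0.80, 0.05], 0),
    ("Aliens land in city",    "Eyewitnesses claim ...",      [0.10, 0.25, 0.95], 1),
    ("Rainfall above average", "Meteorologists report ...",   [0.90, 0.75, 0.10], 0),
]

texts  = [f"{title} {body}" for title, body, _, _ in articles]
visual = np.array([v for _, _, v, _ in articles])
labels = np.array([y for _, _, _, y in articles])

# Textual modality: TF-IDF over title + content.
text_features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)

# Early fusion: concatenate textual and visual blocks into one shared
# representation and train a single classifier on it.
fused = hstack([text_features, csr_matrix(visual)], format="csr")

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.5, random_state=0, stratify=labels
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With a linear model such as the one above, the learned coefficients give a simple, per-feature notion of importance across both modalities, which is the kind of explainability question the abstract raises; the paper's own feature-engineering and importance analysis may differ.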

Suggested Citation

  • Kumar, Ajay & Taylor, James W., 2024. "Feature importance in the age of explainable AI: Case study of detecting fake news & misinformation via a multi-modal framework," European Journal of Operational Research, Elsevier, vol. 317(2), pages 401-413.
  • Handle: RePEc:eee:ejores:v:317:y:2024:i:2:p:401-413
    DOI: 10.1016/j.ejor.2023.10.003

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221723007609
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ejor.2023.10.003?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Agarwal, Puneet & Aziz, Ridwan Al & Zhuang, Jun, 2022. "Interplay of rumor propagation and clarification on social media during crisis events - A game-theoretic approach," European Journal of Operational Research, Elsevier, vol. 298(2), pages 714-733.
    2. Philipp Borchert & Kristof Coussement & Arno de Caigny & Jochen de Weerdt, 2023. "Extending business failure prediction models with textual website content using deep learning," Post-Print hal-03976762, HAL.
    3. Ajay Kumar & Ram D. Gopal & Ravi Shankar & Kim Hua Tan, 2022. "Fraudulent review detection model focusing on emotional expressions and explicit aspects : investigating the potential of feature engineering," Post-Print hal-03630420, HAL.
    4. Hunt Allcott & Matthew Gentzkow, 2017. "Social Media and Fake News in the 2016 Election," NBER Working Papers 23089, National Bureau of Economic Research, Inc.
    5. Zhang, Chaowei & Gupta, Ashish & Kauten, Christian & Deokar, Amit V. & Qin, Xiao, 2019. "Detecting fake news for reducing misinformation risks using analytics approaches," European Journal of Operational Research, Elsevier, vol. 279(3), pages 1036-1052.
    6. Hunt Allcott & Matthew Gentzkow, 2017. "Social Media and Fake News in the 2016 Election," Journal of Economic Perspectives, American Economic Association, vol. 31(2), pages 211-236, Spring.
    7. Yfanti, Stavroula & Karanasos, Menelaos & Zopounidis, Constantin & Christopoulos, Apostolos, 2023. "Corporate credit risk counter-cyclical interdependence: A systematic analysis of cross-border and cross-sector correlation dynamics," European Journal of Operational Research, Elsevier, vol. 304(2), pages 813-831.
    8. Tanınmış, Kübra & Aras, Necati & Altınel, İ. Kuban, 2022. "Improved x-space algorithm for min-max bilevel problems with an application to misinformation spread in social networks," European Journal of Operational Research, Elsevier, vol. 297(1), pages 40-52.
    9. Yiangos Papanastasiou, 2020. "Fake News Propagation and Detection: A Sequential Model," Management Science, INFORMS, vol. 66(5), pages 1826-1846, May.
    10. Reisach, Ulrike, 2021. "The responsibility of social media in times of societal and political manipulation," European Journal of Operational Research, Elsevier, vol. 291(3), pages 906-917.
    11. Yue Han & Theodoros Lappas & Gaurav Sabnis, 2020. "The Importance of Interactions Between Content Characteristics and Creator Characteristics for Studying Virality in Social Media," Information Systems Research, INFORMS, vol. 31(2), pages 576-588, June.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Pervaiz Akhtar & Arsalan Mujahid Ghouri & Haseeb Ur Rehman Khan & Mirza Amin ul Haq & Usama Awan & Nadia Zahoor & Zaheer Khan & Aniqa Ashraf, 2023. "Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions," Annals of Operations Research, Springer, vol. 327(2), pages 633-657, August.
    2. Denter, Philipp & Ginzburg, Boris, 2021. "Troll Farms and Voter Disinformation," MPRA Paper 109634, University Library of Munich, Germany.
    3. Tuval Danenberg & Drew Fudenberg, 2024. "Endogenous Attention and the Spread of False News," Papers 2406.11024, arXiv.org.
    4. Samuel S. Santos & Marcelo C. Griebeler, 2022. "Can fact-checkers discipline the government?," Economics Bulletin, AccessEcon, vol. 42(3), pages 1498-1509.
    5. Gonzalo Cisternas & Jorge Vásquez, 2022. "Misinformation in Social Media: The Role of Verification Incentives," Staff Reports 1028, Federal Reserve Bank of New York.
    6. Ozan Candogan & Kimon Drakopoulos, 2020. "Optimal Signaling of Content Accuracy: Engagement vs. Misinformation," Operations Research, INFORMS, vol. 68(2), pages 497-515, March.
    7. Laura Studen & Victor Tiberius, 2020. "Social Media, Quo Vadis? Prospective Development and Implications," Future Internet, MDPI, vol. 12(9), pages 1-22, August.
    8. Mohamed Mostagir & James Siderius, 2022. "Learning in a Post-Truth World," Management Science, INFORMS, vol. 68(4), pages 2860-2868, April.
    9. Petratos, Pythagoras N., 2021. "Misinformation, disinformation, and fake news: Cyber risks to business," Business Horizons, Elsevier, vol. 64(6), pages 763-774.
    10. Kris Hartley & Minh Khuong Vu, 2020. "Fighting fake news in the COVID-19 era: policy insights from an equilibrium model," Policy Sciences, Springer;Society of Policy Sciences, vol. 53(4), pages 735-758, December.
    11. Charlson, G., 2022. "In platforms we trust: misinformation on social networks in the presence of social mistrust," Janeway Institute Working Papers 2202, Faculty of Economics, University of Cambridge.
    12. Ka Chung Ng & Ping Fan Ke & Mike K. P. So & Kar Yan Tam, 2023. "Augmenting fake content detection in online platforms: A domain adaptive transfer learning via adversarial training approach," Production and Operations Management, Production and Operations Management Society, vol. 32(7), pages 2101-2122, July.
    13. Divinus Oppong-Tawiah & Jane Webster, 2023. "Corporate Sustainability Communication as ‘Fake News’: Firms’ Greenwashing on Twitter," Sustainability, MDPI, vol. 15(8), pages 1-26, April.
    14. Julia Cage & Nicolas Hervé & Marie-Luce Viaud, 2017. "The Production of Information in an Online World: Is Copy Right?," Working Papers hal-03393171, HAL.
    15. Leopoldo Fergusson & Carlos Molina, 2020. "Facebook Causes Protests," HiCN Working Papers 323, Households in Conflict Network.
    16. Jyoti Prakash Singh & Abhinav Kumar & Nripendra P. Rana & Yogesh K. Dwivedi, 2022. "Attention-Based LSTM Network for Rumor Veracity Estimation of Tweets," Information Systems Frontiers, Springer, vol. 24(2), pages 459-474, April.
    17. Tetsuro Kobayashi & Fumiaki Taka & Takahisa Suzuki, 2021. "Can “Googling” correct misbelief? Cognitive and affective consequences of online search," PLOS ONE, Public Library of Science, vol. 16(9), pages 1-16, September.
    18. Manuel Hensmans, 2021. "Exploring the dark and bright sides of Internet democracy: Ethos-reversing and ethos-renewing digital transformation," ULB Institutional Repository 2013/321232, ULB -- Universite Libre de Bruxelles.
    19. Dean Neu & Gregory D. Saxton & Abu S. Rahaman, 2022. "Social Accountability, Ethics, and the Occupy Wall Street Protests," Journal of Business Ethics, Springer, vol. 180(1), pages 17-31, September.
    20. Robbett, Andrea & Matthews, Peter Hans, 2018. "Partisan bias and expressive voting," Journal of Public Economics, Elsevier, vol. 157(C), pages 107-120.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:ejores:v:317:y:2024:i:2:p:401-413. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/eor.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.