Author
Listed:
- Qiudan Li
(State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China)
- David Jingjun Xu
(Department of Information Systems, College of Business, City University of Hong Kong, Hong Kong, China)
- Haoda Qian
(State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China)
- Linzi Wang
(State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China)
- Minjie Yuan
(State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China)
- Daniel Dajun Zeng
(State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China)
Abstract
Sarcastic remarks often appear on social media and e-commerce platforms to express almost exclusively negative emotions and opinions, such as dissatisfaction with a purchased product or service. Detecting sarcasm therefore allows merchants to resolve users’ complaints in a timely manner. However, sarcastic remarks are difficult to detect because they commonly take the form of counterfactual statements. The few studies dedicated to sarcasm detection largely ignore what sparks these sarcastic remarks, which could be, for example, an empty promise in a merchant’s product description. This study formulates a novel problem of sarcasm cause detection that leverages domain information, dialogue context information, and sarcasm sentences by proposing a pretrained language model-based approach equipped with a novel hybrid multihead fusion-attention mechanism that combines self-attention, target-attention, and a feed-forward neural network. The domain information and the dialogue context information are interactively fused to obtain a domain-specific dialogue context representation, and bidirectionally enhanced sarcasm-cause pair representations are then generated for detecting the sarcasm spark. Experimental results on real-world data sets demonstrate the efficacy of the proposed model. The findings contribute to the literature on sarcasm cause detection and provide business value to relevant stakeholders and consumers.
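The abstract describes the fusion architecture only at a high level. The following is a minimal sketch of what a hybrid multihead fusion-attention block combining self-attention, target-attention, and a feed-forward network might look like in PyTorch; the module names, dimensions, normalization placement, and fusion order are illustrative assumptions, not the authors' released implementation.

```python
# Minimal PyTorch sketch of a hybrid multihead fusion-attention block.
# All module names, dimensions, and the fusion order are assumptions made
# for illustration; they are not taken from the paper.
import torch
import torch.nn as nn


class HybridFusionAttention(nn.Module):
    """Fuses an encoded domain-specific dialogue context with an encoded
    sarcasm sentence via self-attention, target-attention (cross-attention),
    and a feed-forward network."""

    def __init__(self, hidden: int = 768, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.target_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden)
        )
        self.norm1 = nn.LayerNorm(hidden)
        self.norm2 = nn.LayerNorm(hidden)
        self.norm3 = nn.LayerNorm(hidden)

    def forward(self, context: torch.Tensor, sarcasm: torch.Tensor) -> torch.Tensor:
        # context: (batch, ctx_len, hidden)  encoded domain + dialogue context
        # sarcasm: (batch, sar_len, hidden)  encoded sarcasm sentence
        ctx, _ = self.self_attn(context, context, context)   # self-attention
        ctx = self.norm1(context + ctx)
        fused, _ = self.target_attn(sarcasm, ctx, ctx)       # target-attention
        fused = self.norm2(sarcasm + fused)
        return self.norm3(fused + self.ffn(fused))           # feed-forward

if __name__ == "__main__":
    block = HybridFusionAttention()
    ctx = torch.randn(2, 40, 768)   # e.g., pretrained-LM-encoded context
    sar = torch.randn(2, 12, 768)   # e.g., pretrained-LM-encoded sarcasm sentence
    print(block(ctx, sar).shape)    # torch.Size([2, 12, 768])
```

In practice, the fused sarcasm-cause pair representations would be pooled and passed to a classifier that scores candidate cause sentences; those downstream details are omitted here.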
Suggested Citation
Qiudan Li & David Jingjun Xu & Haoda Qian & Linzi Wang & Minjie Yuan & Daniel Dajun Zeng, 2025.
"A Fusion Pretrained Approach for Identifying the Cause of Sarcasm Remarks,"
INFORMS Journal on Computing, INFORMS, vol. 37(2), pages 465-479, March.
Handle:
RePEc:inm:orijoc:v:37:y:2025:i:2:p:465-479
DOI: 10.1287/ijoc.2022.0285