Author
Listed:
- Jifeng Dong
(College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan 030024, China
Shan Xi Energy Internet Research Institute, Taiyuan 030000, China)
- Yu Zhou
(College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan 030024, China
Shan Xi Energy Internet Research Institute, Taiyuan 030000, China)
- Shufeng Hao
(College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan 030024, China
Shan Xi Energy Internet Research Institute, Taiyuan 030000, China)
- Ding Feng
(College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan 030024, China
Shan Xi Energy Internet Research Institute, Taiyuan 030000, China
College of Computer Science and Technology, Taiyuan Normal University, Taiyuan 030619, China)
- Haixia Zheng
(College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan 030024, China)
- Zhenhuan Xu
(College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan 030024, China)
Abstract
Graph contrastive learning has demonstrated significant superiority in collaborative filtering. These methods typically use augmentation techniques to generate contrastive views and then train graph neural networks with contrastive learning as an auxiliary task. Although effective, they do not apply contrastive learning from the perspective of user–item interactions, and therefore do not fully exploit its potential. Contrastive learning maximizes the agreement of positive pairs and minimizes the agreement of negative pairs, while collaborative filtering expects high consistency between users and the items they like and low consistency between users and the items they dislike. If the items a user likes are treated as positive examples and the items they dislike as negative examples, contrastive learning aligns naturally with the goal of collaborative filtering. Based on this observation, we propose a new objective function called DCL loss, which improves graph collaborative filtering from the perspective of user–item interactions by Directly using Contrastive Learning. Extensive experiments show that when a model adopts DCL loss as its objective function, both its recommendation performance and its training efficiency improve significantly.
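The DCL loss itself is defined in the full paper and is not reproduced here. As a rough illustration of the idea the abstract describes — treating a user's liked items as positives and sampled disliked/unobserved items as negatives inside a contrastive objective — a generic InfoNCE-style user–item loss might be sketched as follows. All function names, tensor shapes, and the temperature value are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def user_item_info_nce(user_emb, pos_item_emb, neg_item_embs, temperature=0.2):
    """InfoNCE-style loss over user-item pairs (illustrative sketch).

    user_emb:      (batch, d)    user embeddings
    pos_item_emb:  (batch, d)    one liked item per user (positive)
    neg_item_embs: (batch, k, d) sampled disliked/unobserved items (negatives)
    """
    # Normalize all embeddings so dot products become cosine similarities.
    u = user_emb / np.linalg.norm(user_emb, axis=-1, keepdims=True)
    p = pos_item_emb / np.linalg.norm(pos_item_emb, axis=-1, keepdims=True)
    n = neg_item_embs / np.linalg.norm(neg_item_embs, axis=-1, keepdims=True)

    pos_sim = np.sum(u * p, axis=-1) / temperature            # (batch,)
    neg_sim = np.einsum('bd,bkd->bk', u, n) / temperature     # (batch, k)

    # -log softmax of the positive against {positive} + {negatives},
    # with a max-shift for numerical stability.
    logits = np.concatenate([pos_sim[:, None], neg_sim], axis=1)
    m = logits.max(axis=1, keepdims=True)
    log_denom = (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).squeeze(1)
    return float(np.mean(log_denom - pos_sim))
```

Minimizing this loss pulls `pos_sim` up and `neg_sim` down, which is the "high consistency with liked items, low consistency with disliked items" behaviour the abstract ascribes to collaborative filtering; the actual DCL loss may differ in normalization, negative sampling, and weighting.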
Suggested Citation
Jifeng Dong & Yu Zhou & Shufeng Hao & Ding Feng & Haixia Zheng & Zhenhuan Xu, 2024.
"Improving Graph Collaborative Filtering from the Perspective of User–Item Interaction Directly Using Contrastive Learning,"
Mathematics, MDPI, vol. 12(13), pages 1-22, June.
Handle:
RePEc:gam:jmathe:v:12:y:2024:i:13:p:2057-:d:1426605