
Graph Information Vanishing Phenomenon in Implicit Graph Neural Networks

Authors
  • Silu He

    (School of Geosciences and Info-Physics, Central South University, Changsha 410083, China)

  • Jun Cao

    (School of Geosciences and Info-Physics, Central South University, Changsha 410083, China)

  • Hongyuan Yuan

    (School of Geosciences and Info-Physics, Central South University, Changsha 410083, China)

  • Zhe Chen

    (School of Geosciences and Info-Physics, Central South University, Changsha 410083, China)

  • Shijuan Gao

    (School of Geosciences and Info-Physics, Central South University, Changsha 410083, China; Information & Network Center, Central South University, Changsha 410083, China)

  • Haifeng Li

    (School of Geosciences and Info-Physics, Central South University, Changsha 410083, China)

Abstract

Graph neural networks (GNNs) have been highly successful in graph representation learning. The goal of GNNs is to enrich node representations by aggregating information from neighboring nodes. Much work has attempted to improve the quality of aggregation by introducing a variety of graph information with representational capabilities. GNNs that improve the quality of aggregation by encoding such graph information into the weights of neighboring nodes through different learnable transformation structures (LTSs) are referred to as implicit GNNs. However, we argue that LTSs only transform graph information into the weights of neighboring nodes in the direction that minimizes the loss function during training and do not actually exploit the effective properties of the graph information, a phenomenon that we refer to as graph information vanishing (GIV). To validate this point, we perform thousands of experiments on seven node classification benchmark datasets. We first replace the graph information used by five implicit GNNs with random values and, surprisingly, observe that the accuracies vary by less than ±0.3%. We then quantify the similarity between the weights generated from graph information and those generated from random values using cosine similarity, and the cosine similarities exceed 0.99. These empirical results show that the graph information is effectively equivalent to an initialization of the LTS input. We believe that using graph information as an additional supervision signal to constrain the training of GNNs can effectively resolve GIV. We therefore propose GinfoNN, which uses both labels and discrete graph curvature as supervision signals to jointly constrain model training. Experimental results show that the classification accuracy of GinfoNN improves by two percentage points over baselines on large and dense datasets.
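
To make the GIV diagnostic described in the abstract concrete, the following minimal sketch (not the authors' code) illustrates the cosine-similarity check: the same learnable transformation structure (LTS) is fed real graph information and a random replacement, and the neighbor weights it produces are compared. The toy MLP used as the LTS, the synthetic "curvature" features, and all variable names are illustrative assumptions; in the paper the comparison is made between fully trained models, whereas this snippet only demonstrates the measurement itself.

    # Minimal, illustrative sketch of the GIV check (PyTorch); not the authors' code.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    n_edges, d_info = 1000, 4  # toy problem size: number of edges and info dimensions

    # A toy learnable transformation structure (LTS): maps per-edge graph
    # information to a scalar aggregation weight for each neighboring node.
    lts = nn.Sequential(nn.Linear(d_info, 16), nn.ReLU(), nn.Linear(16, 1))

    curvature = torch.randn(n_edges, d_info)    # stand-in for discrete graph curvature features
    random_info = torch.randn(n_edges, d_info)  # random replacement of the graph information

    w_graph = lts(curvature).squeeze(-1)        # weights generated from "graph information"
    w_random = lts(random_info).squeeze(-1)     # weights generated from random values

    # The paper reports cosine similarities above 0.99 between the two sets of
    # weights after training, which it interprets as graph information vanishing.
    cos = torch.cosine_similarity(w_graph, w_random, dim=0)
    print(f"cosine similarity of generated weights: {cos.item():.4f}")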

Suggested Citation

  • Silu He & Jun Cao & Hongyuan Yuan & Zhe Chen & Shijuan Gao & Haifeng Li, 2024. "Graph Information Vanishing Phenomenon in Implicit Graph Neural Networks," Mathematics, MDPI, vol. 12(17), pages 1-19, August.
  • Handle: RePEc:gam:jmathe:v:12:y:2024:i:17:p:2659-:d:1465173

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/12/17/2659/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/12/17/2659/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:12:y:2024:i:17:p:2659-:d:1465173. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.
