Printed from https://ideas.repec.org/a/gam/jmathe/v13y2025i6p935-d1610187.html

Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation

Authors
  • Elias Lemuye Jimale

    (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; College of Electrical Engineering and Computing, Adama Science and Technology University, Adama 1888, Ethiopia)

  • Wenyu Chen

    (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

  • Mugahed A. Al-antari

    (Department of Artificial Intelligence and Data Science, College of AI Convergence, Sejong University, Seoul 05006, Republic of Korea)

  • Yeong Hyeon Gu

    (Department of Artificial Intelligence and Data Science, College of AI Convergence, Sejong University, Seoul 05006, Republic of Korea)

  • Victor Kwaku Agbesi

    (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

  • Wasif Feroze

    (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

  • Feidu Akmel

    (School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

  • Juhar Mohammed Assefa

    (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

  • Ali Shahzad

    (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

Abstract

Graph-to-text generation (G2T) involves converting structured graph data into natural language text, a task made challenging by the need for encoders to effectively capture the entities and the relationships among them within the graph. While transformer-based encoders have advanced natural language processing, their reliance on linearized input often obscures the complex interrelationships in graph structures, leading to structural loss. Conversely, graph attention networks excel at capturing graph structure but lack the pre-training advantages of transformers. To leverage the strengths of both modalities and bridge this gap, we propose a novel bidirectional dual cross-attention and concatenation (BDCC) mechanism that integrates the outputs of a transformer-based encoder and a graph attention encoder. The bidirectional dual cross-attention computes attention scores in both directions, allowing graph features to attend to transformer features and vice versa, effectively capturing inter-modal relationships. Concatenation is then applied to fuse the attended outputs, enabling robust feature fusion across modalities. We empirically validate BDCC on the PathQuestions and WebNLG benchmark datasets, achieving BLEU scores of 67.41% and 66.58% and METEOR scores of 49.63% and 47.44%, respectively. BDCC outperforms the baseline models, demonstrating that it significantly improves G2T by leveraging the synergistic benefits of graph attention and transformer encoders, addressing the limitations of existing approaches, and pointing to promising directions for future research in this area.
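The fusion step the abstract describes can be illustrated with a minimal NumPy sketch: graph features attend to transformer features, transformer features attend to graph features, and the two attended outputs are concatenated. This is not the authors' implementation; it omits learned query/key/value projections and multi-head structure, assumes the two encoders emit equal-length sequences so feature-wise concatenation is well-defined, and the names (`bdcc_fuse`, `cross_attention`) are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context):
    # Scaled dot-product attention: each query row attends over context rows.
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)   # (n_q, n_ctx)
    return softmax(scores, axis=-1) @ context   # (n_q, d)

def bdcc_fuse(graph_feats, transformer_feats):
    # Bidirectional dual cross-attention, then concatenation of both outputs.
    g_att = cross_attention(graph_feats, transformer_feats)  # graph -> transformer
    t_att = cross_attention(transformer_feats, graph_feats)  # transformer -> graph
    return np.concatenate([g_att, t_att], axis=-1)

# Toy inputs: 6 positions, hidden size 16 for each encoder output.
rng = np.random.default_rng(0)
g = rng.standard_normal((6, 16))  # graph attention encoder output
t = rng.standard_normal((6, 16))  # transformer encoder output
fused = bdcc_fuse(g, t)
print(fused.shape)  # (6, 32)
```

The fused representation doubles the feature dimension, so a downstream decoder would typically apply a linear projection back to the model width before generation.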

Suggested Citation

  • Elias Lemuye Jimale & Wenyu Chen & Mugahed A. Al-antari & Yeong Hyeon Gu & Victor Kwaku Agbesi & Wasif Feroze & Feidu Akmel & Juhar Mohammed Assefa & Ali Shahzad, 2025. "Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation," Mathematics, MDPI, vol. 13(6), pages 1-21, March.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:6:p:935-:d:1610187

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/6/935/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/6/935/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:6:p:935-:d:1610187. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.