
CIST: Differentiating Concepts and Instances Based on Spatial Transformation for Knowledge Graph Embedding

Authors
  • Pengfei Zhang

    (Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China)

  • Dong Chen

    (Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China)

  • Yang Fang

    (Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China)

  • Xiang Zhao

    (Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China)

  • Weidong Xiao

    (Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China)

Abstract

Knowledge representation learning represents the entities and relations of a knowledge graph as dense, low-dimensional vectors in a continuous space, capturing the features and properties of the graph. Such a technique facilitates computation and reasoning over knowledge graphs, which benefits many downstream tasks. To alleviate the insufficient entity representation learning caused by sparse knowledge graphs, some researchers have proposed knowledge graph embedding models based on instances and concepts, which exploit the latent semantic connections between the concepts and instances contained in a knowledge graph to enhance the embedding. However, these models either represent instances and concepts in the same space or ignore the transitivity of isA relations, leading to inaccurate embeddings of concepts and instances. To address these shortcomings, we propose CIST, a knowledge graph embedding model that differentiates concepts and instances based on spatial transformation. The model alleviates the crowding of similar instances or concepts in the semantic space by modeling them in different embedding spaces, and it adds a learnable parameter that adjusts the neighboring range of each concept embedding to distinguish the hierarchical information of different concepts, thereby modeling the transitivity of isA relations. These features of instances and concepts serve as auxiliary information, so modeling them thoroughly alleviates the insufficient entity representation learning issue. For the experiments, we chose two tasks, link prediction and triple classification, and two real-life datasets, YAGO26K-906 and DB111K-174. Compared with the state of the art, CIST achieves the best performance in most cases. Specifically, CIST outperforms the SOTA model JOIE by 51.1% on Hits@1 in link prediction and by 15.2% on F1 score in triple classification.
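The spatial-transformation idea described in the abstract can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the paper's actual formulation: it assumes instances live in their own embedding space and are projected into a separate concept space by a learned linear map, that each concept is a sphere whose learnable radius plays the role of the adjustable "neighboring range", and that instanceOf and subClassOf triples are scored by sphere containment so that isA transitivity follows from nested spheres. All class, function, and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class CISTSketch(nn.Module):
    """Hypothetical sketch: concepts and instances in separate spaces,
    linked by a learned spatial transformation (not the paper's exact model)."""

    def __init__(self, n_instances, n_concepts, dim):
        super().__init__()
        self.instance_emb = nn.Embedding(n_instances, dim)      # instance space
        self.concept_emb = nn.Embedding(n_concepts, dim)        # sphere centers in concept space
        self.concept_radius = nn.Parameter(torch.full((n_concepts,), 0.5))  # learnable neighboring ranges
        self.transform = nn.Linear(dim, dim, bias=False)         # instance space -> concept space

    def instance_of_penalty(self, inst_idx, concept_idx):
        """Penalty for an instanceOf triple: distance by which the projected
        instance falls outside the concept sphere (zero if it lies inside)."""
        projected = self.transform(self.instance_emb(inst_idx))
        center = self.concept_emb(concept_idx)
        radius = torch.abs(self.concept_radius[concept_idx])
        dist = torch.norm(projected - center, dim=-1)
        return torch.relu(dist - radius)

    def sub_class_of_penalty(self, sub_idx, sup_idx):
        """Penalty for a subClassOf triple: how far the sub-concept sphere
        protrudes from the super-concept sphere. Nesting spheres makes isA
        transitive: if A lies inside B and B inside C, then A lies inside C."""
        c_sub, c_sup = self.concept_emb(sub_idx), self.concept_emb(sup_idx)
        r_sub = torch.abs(self.concept_radius[sub_idx])
        r_sup = torch.abs(self.concept_radius[sup_idx])
        dist = torch.norm(c_sub - c_sup, dim=-1)
        return torch.relu(dist + r_sub - r_sup)
```

In a margin-based training loop, penalties of this kind would be minimized for observed triples and pushed above a margin for corrupted ones; how CIST actually combines the instance-level triple loss with the concept-level terms is specified in the paper itself.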

Suggested Citation

  • Pengfei Zhang & Dong Chen & Yang Fang & Xiang Zhao & Weidong Xiao, 2022. "CIST: Differentiating Concepts and Instances Based on Spatial Transformation for Knowledge Graph Embedding," Mathematics, MDPI, vol. 10(17), pages 1-16, September.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:17:p:3161-:d:905245

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/17/3161/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/17/3161/
    Download Restriction: no

    Citations

    Cited by:

    1. Xuechen Zhao & Jinfeng Miao & Fuqiang Yang & Shengnan Pang, 2024. "Geometry Interaction Embeddings for Interpolation Temporal Knowledge Graph Completion," Mathematics, MDPI, vol. 12(13), pages 1-15, June.
    2. Pengfei Zhang & Xiaoxue Zhang & Yang Fang & Jinzhi Liao & Wubin Ma & Zhen Tan & Weidong Xiao, 2024. "Knowledge Graph Embedding for Hierarchical Entities Based on Auto-Embedding Size," Mathematics, MDPI, vol. 12(20), pages 1-19, October.
