Author
Listed:
- Xiao Xie
- Xiran Zhou
- Jingzhong Li
- Weijiang Dai
Abstract
Although previous works have proposed sophisticated probabilistic models with strong capability for extracting features from remote sensing data (e.g., convolutional neural networks, CNNs), efforts that explore human semantics of the objects to be recognized still require further study. Moreover, the limited interpretability of feature extraction is a major disadvantage of state-of-the-art CNNs. For complex urban objects in particular, which vary in geometric shape, functional structure, environmental context, etc., the heterogeneity between low-level data features and high-level semantics means that features derived from remote sensing data alone are insufficient for accurate recognition. In this paper, we present an ontology-based methodological framework that enables object recognition through rules extracted from high-level semantics rather than through unexplainable features extracted by a CNN. First, we semantically organize the descriptions and definitions of an object as semantics (RDF-triple rules) through our developed domain ontology. Second, we exploit the Semantic Web Rule Language to propose an encoder model that decomposes the RDF-triple rules based on a multilayer strategy. We then map the low-level data features, defined from optical satellite imagery and LiDAR-derived height, to the decomposed parts of the RDF-triple rules. Finally, we apply a probabilistic belief network (PBN) to probabilistically represent the relationships between low-level data features and high-level semantics, and a modified TanH function is used to optimize the recognition result. Experimental results obtained without a training process based on data samples show that the proposed approach achieves accurate recognition with high-level semantics. This work contributes to the development of complex urban object recognition toward fields including multilayer learning algorithms and knowledge graph-based relational reinforcement learning.
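The abstract outlines a pipeline of ontology-encoded rules, feature-to-rule mapping, and probabilistic fusion. The following Python sketch is an illustration only, not the paper's implementation: the rule vocabulary, feature names, thresholds, evidence values, and the exact form of the modified TanH are all assumptions introduced here to make the flow concrete (decompose RDF-triple rules for a hypothetical class, match them against optical-image and LiDAR-derived features, and squash the combined evidence into a belief score).

```python
# Illustrative sketch only: the rule predicates, feature names, thresholds,
# and the "modified TanH" below are assumptions, not the paper's definitions.
import math
from dataclasses import dataclass

@dataclass
class TripleRule:
    """A simplified RDF-style triple: (subject, predicate, object)."""
    subject: str
    predicate: str
    obj: str

# High-level semantics for a hypothetical class "Stadium",
# written as decomposed RDF-triple rules.
stadium_rules = [
    TripleRule("Stadium", "hasShape", "elliptical"),
    TripleRule("Stadium", "hasHeightAbove", "10m"),
    TripleRule("Stadium", "adjacentTo", "openGround"),
]

# Low-level data features assumed to be extracted beforehand, e.g. a shape
# descriptor from the optical image and a height value from LiDAR.
features = {
    "shape": "elliptical",
    "lidar_height_m": 24.0,
    "adjacent_open_ground": True,
}

def rule_evidence(rule: TripleRule, feats: dict) -> float:
    """Map one decomposed rule to a [0, 1] evidence value from the features."""
    if rule.predicate == "hasShape":
        return 1.0 if feats.get("shape") == rule.obj else 0.1
    if rule.predicate == "hasHeightAbove":
        threshold = float(rule.obj.rstrip("m"))
        return 1.0 if feats.get("lidar_height_m", 0.0) > threshold else 0.1
    if rule.predicate == "adjacentTo":
        return 1.0 if feats.get("adjacent_open_ground") else 0.1
    return 0.5  # unknown predicate: uninformative evidence

def modified_tanh(x: float, gain: float = 2.0) -> float:
    """A hypothetical tanh variant rescaled to [0, 1] for a belief score."""
    return 0.5 * (math.tanh(gain * (x - 0.5)) + 1.0)

# Average the per-rule evidence (a crude stand-in for belief-network fusion)
# and squash it into a final recognition score.
combined = sum(rule_evidence(r, features) for r in stadium_rules) / len(stadium_rules)
score = modified_tanh(combined)
print(f"Belief that the object is a Stadium: {score:.3f}")
```

Because the rules are explicit triples and the evidence mapping is inspectable, each step of the score can be traced back to a semantic condition, which is the interpretability argument the abstract makes against end-to-end CNN features.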
Suggested Citation
Xiao Xie & Xiran Zhou & Jingzhong Li & Weijiang Dai, 2020.
"An Ontology-Based Framework for Complex Urban Object Recognition through Integrating Visual Features and Interpretable Semantics,"
Complexity, Hindawi, vol. 2020, pages 1-15, September.
Handle:
RePEc:hin:complx:5125891
DOI: 10.1155/2020/5125891