Authors
Listed:
- Li-Xin Liang
(College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China)
- Lin Lin
(College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China)
- E Lin
(School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China)
- Wu-Shao Wen
(School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China)
- Guo-Yan Huang
(School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China)
Abstract
Extracting structured information from massive, heterogeneous text is an active research topic in natural language processing. It involves two key technologies: named entity recognition (NER) and relation extraction (RE). However, previous NER models give little consideration to how mutual attention between the words in a text influences the prediction of entity labels, and there is little research on how to extract sentence information more fully for relation classification. In addition, previous work treats NER and RE as a pipeline of two separate tasks, neglecting the connection between them, and focuses mainly on English corpora. In this paper, building on the self-attention mechanism, the bidirectional long short-term memory (BiLSTM) neural network, and the conditional random field (CRF) model, we propose a Chinese NER method based on BiLSTM-Self-Attention-CRF and an RE method based on BiLSTM-Multilevel-Attention for the field of Chinese literature. In particular, considering the relationship between the two tasks in terms of word-vector and context-feature representation in the neural network model, we propose a joint learning method for the NER and RE tasks based on a shared underlying module, whose parameters are jointly updated during the training of both tasks. For performance evaluation, we use the largest Chinese data set covering both tasks. Experimental results show that the proposed independently trained NER and RE models achieve better performance than all previous methods, and that our joint NER-RE training model outperforms the independently trained NER and RE models.
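The abstract only sketches the joint architecture, so the following is a minimal, hypothetical PyTorch sketch of the shared-encoder idea it describes: a BiLSTM whose parameters receive gradients from both the NER loss (self-attention over token states; the paper's CRF decoding layer is omitted here for brevity) and the RE loss (attention-pooled sentence classification). All module names, dimensions, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a shared BiLSTM encoder feeding an NER head and an RE head,
# trained jointly so both task losses update the shared parameters.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Word embeddings + BiLSTM shared by the NER and RE tasks."""
    def __init__(self, vocab_size, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):                  # (B, T)
        h, _ = self.bilstm(self.emb(token_ids))    # (B, T, 2*hidden)
        return h

class NERHead(nn.Module):
    """Token-token self-attention, then per-token tag scores.
    (The paper decodes tags with a CRF layer; omitted in this sketch.)"""
    def __init__(self, dim, num_tags):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.tag = nn.Linear(dim, num_tags)

    def forward(self, h):
        a, _ = self.attn(h, h, h)                  # mutual attention between words
        return self.tag(a)                         # (B, T, num_tags)

class REHead(nn.Module):
    """Attention-pooled sentence vector, then relation classification."""
    def __init__(self, dim, num_rels):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.cls = nn.Linear(dim, num_rels)

    def forward(self, h):
        w = torch.softmax(self.score(h), dim=1)    # (B, T, 1) attention weights
        sent = (w * h).sum(dim=1)                  # weighted sum -> (B, dim)
        return self.cls(sent)                      # (B, num_rels)

# Joint training step: both losses backpropagate into the shared encoder.
enc = SharedEncoder(vocab_size=5000)
ner, re_ = NERHead(256, num_tags=9), REHead(256, num_rels=10)
opt = torch.optim.Adam([*enc.parameters(), *ner.parameters(),
                        *re_.parameters()], lr=1e-3)

x = torch.randint(0, 5000, (2, 12))                # toy batch of token ids
tags = torch.randint(0, 9, (2, 12))                # toy NER labels
rels = torch.randint(0, 10, (2,))                  # toy relation labels

opt.zero_grad()
h = enc(x)
loss = (nn.functional.cross_entropy(ner(h).flatten(0, 1), tags.flatten())
        + nn.functional.cross_entropy(re_(h), rels))
loss.backward()                                    # shared encoder gets both gradients
opt.step()
```

Summing the two task losses before a single backward pass is one simple way to realize the "jointly updates the parameters of the shared module" idea; alternating per-task updates would be another reasonable reading of the abstract.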
Suggested Citation
Li-Xin Liang & Lin Lin & E Lin & Wu-Shao Wen & Guo-Yan Huang, 2022.
"A Joint Learning Model to Extract Entities and Relations for Chinese Literature Based on Self-Attention,"
Mathematics, MDPI, vol. 10(13), pages 1-20, June.
Handle:
RePEc:gam:jmathe:v:10:y:2022:i:13:p:2216-:d:847156