Author
Listed:
- Haihua Chen (University of North Texas)
- Huyen Nguyen (University of North Texas)
- Asmaa Alghamdi (University of North Texas)
Abstract
Research contributions, which indicate how a research paper contributes new knowledge or new understanding in contrast to prior research on the topic, are the most valuable type of information for researchers seeking to understand the main content of a paper. However, there is little research that uses research contributions to identify and recommend valuable knowledge in academic literature for users. Instead, most existing studies focus on analyzing other elements of academic literature, such as keywords, citations, rhetorical structure, and discourse. This paper first introduces a fine-grained annotation scheme with six categories for research contributions in academic literature. To evaluate the reliability of the annotation scheme, we annotate 5024 sentences collected from the Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL Anthology) and the academic journal Information Processing & Management (IP&M). We reach an inter-annotator agreement of Cohen’s kappa = 0.91 and Fleiss’ kappa = 0.91, demonstrating the high quality of the dataset. We then build two types of classifiers for automated research contribution identification on the dataset: classic feature-based machine learning (ML) and transformer-based deep learning (DL). Our experimental results show that SCI-BERT, a pretrained language model for scientific text, achieves the best performance with an F1 score of 0.58, improving on the best classic ML model (nouns + verbs + tf-idf + random forest) by 2%. This also indicates that classic feature-based ML models remain competitive with DL-based models such as SCI-BERT on this dataset. The fine-grained annotation scheme can be applied to large-scale analysis of research contributions in academic literature, and the automated classifiers built in this paper provide a basis for automatic research contribution extraction and knowledge fragment recommendation. The high-quality research contribution dataset developed in this research is publicly available on Zenodo at https://zenodo.org/record/6284137#.YhkZ7-iZO4Q. The code for the data analysis and experiments will be released at https://github.com/HuyenNguyenHelen/Contribution-Sentence-Classification.
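As a rough illustration of the classic feature-based pipeline named in the abstract (nouns + verbs + tf-idf + random forest), the sketch below shows one plausible implementation. It is not the authors' released code: the part-of-speech filtering step, the label names, the toy sentences, and the hyperparameters are illustrative assumptions, and NLTK resource names may differ across library versions.

# Minimal sketch (assumptions, not the authors' released code) of the classic
# feature-based pipeline described above: keep only nouns and verbs, vectorize
# the filtered sentences with tf-idf, and classify with a random forest.
import nltk
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

nltk.download("punkt", quiet=True)                        # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)   # POS tagger model

def keep_nouns_and_verbs(sentence: str) -> str:
    """Reduce a sentence to its noun and verb tokens (assumed preprocessing step)."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    return " ".join(tok for tok, tag in tagged if tag.startswith(("NN", "VB")))

# Toy stand-ins for the annotated contribution sentences and their labels.
sentences = [
    "We introduce a fine-grained annotation scheme for research contributions.",
    "Prior studies mainly analyzed keywords, citations, and rhetorical structure.",
]
labels = ["contribution", "non-contribution"]  # hypothetical label names

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=42)),
])
pipeline.fit([keep_nouns_and_verbs(s) for s in sentences], labels)

print(pipeline.predict([keep_nouns_and_verbs(
    "This paper proposes a new dataset for contribution sentence classification.")]))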
Suggested Citation
Haihua Chen & Huyen Nguyen & Asmaa Alghamdi, 2022.
"Constructing a high-quality dataset for automated creation of summaries of fundamental contributions of research articles,"
Scientometrics, Springer; Akadémiai Kiadó, vol. 127(12), pages 7061-7075, December.
Handle:
RePEc:spr:scient:v:127:y:2022:i:12:d:10.1007_s11192-022-04380-z
DOI: 10.1007/s11192-022-04380-z
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:scient:v:127:y:2022:i:12:d:10.1007_s11192-022-04380-z. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.