
Automated recognition of innovative sentences in academic articles: semi-automatic annotation for cost reduction and SAO reconstruction for enhanced data

Authors

Listed:
  • Biao Zhang

    (National Science Library (Chengdu), Chinese Academy of Sciences; University of Chinese Academy of Sciences)

  • Yunwei Chen

    (National Science Library (Chengdu), Chinese Academy of Sciences; University of Chinese Academy of Sciences)

Abstract

Research on innovative content within academic articles plays a vital role in exploring the frontiers of scientific and technological innovation while facilitating the integration of scientific and technological evaluation into academic discourse. Efficiently gathering the latest innovative concepts requires accurately recognizing innovative sentences (IS) within academic articles. Although several supervised methods exist for classifying article sentences, such as citation function sentences, future work sentences, and formal citation sentences, most rely on manual annotation or rule-based matching to construct datasets and rarely explore how to enhance model performance. To address these limitations, this study introduces a semi-automatic annotation method for IS that draws on expert comment information, and proposes a data augmentation method based on SAO (Subject-Action-Object) reconstruction to enlarge the training dataset. We also compare and analyze the effectiveness of multiple algorithms for recognizing IS in academic articles. Using the full text of academic articles as the research subject, the study applied the semi-automatic method to annotate IS and build the training dataset, validated the annotation method through manual inspection, compared it with rule-based annotation methods, and explored how different augmentation ratios affect model performance. The empirical results reveal the following: (1) The proposed semi-automatic annotation method achieves an accuracy of 0.87239, ensuring the validity of the annotated data while reducing manual annotation cost. (2) The SAO-reconstruction data augmentation method significantly improves the accuracy of both machine learning and deep learning algorithms in recognizing IS. (3) With the augmentation ratio in the training set at 50%, the trained GPT-2 model outperforms the other algorithms, achieving an accuracy (ACC) of 0.97883 on the test set and an F1 score of 0.95505 in practical application.
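
The abstract's central technical idea, SAO (Subject-Action-Object) reconstruction for data augmentation, can be illustrated with a short Python sketch. The paper's exact extraction and recombination rules are not reproduced on this page, so the spaCy-based pipeline, the function names, and the toy labelled example below are illustrative assumptions, not the authors' implementation:

    # Minimal sketch: SAO-based data augmentation for sentence classification.
    # Assumes spaCy and its small English model are installed:
    #   pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def subtree_text(token):
        """Surface text of a token's full dependency subtree, in document order."""
        return " ".join(t.text for t in token.subtree)

    def extract_sao(sentence):
        """Extract (Subject, Action, Object) triples via dependency parsing."""
        doc = nlp(sentence)
        triples = []
        for verb in doc:
            if verb.pos_ != "VERB":
                continue
            subjects = [c for c in verb.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in verb.children if c.dep_ in ("dobj", "obj", "attr")]
            for s in subjects:
                for o in objects:
                    # Keep the surface verb form so subject-verb agreement holds.
                    triples.append((subtree_text(s), verb.text, subtree_text(o)))
        return triples

    def reconstruct(triples):
        """Rebuild simple declarative sentences from SAO triples; adjuncts are
        dropped, yielding shorter, label-preserving variants."""
        return [f"{s[0].upper()}{s[1:]} {a} {o}." for s, a, o in triples]

    # Toy labelled example (1 = innovative sentence), purely for illustration.
    train = [("In this study, we propose a novel annotation method.", 1)]
    augmented = [(text, label)
                 for sentence, label in train
                 for text in reconstruct(extract_sao(sentence))]
    print(augmented)
    # Expected (parse-dependent): [('We propose a novel annotation method.', 1)]

Under this reading, the 50% augmentation ratio the abstract reports as optimal for GPT-2 would mean adding reconstructed variants amounting to half the size of the original training set before training the classifier.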

Suggested Citation

  • Biao Zhang & Yunwei Chen, 2024. "Automated recognition of innovative sentences in academic articles: semi-automatic annotation for cost reduction and SAO reconstruction for enhanced data," Scientometrics, Springer;Akadémiai Kiadó, vol. 129(9), pages 5403-5432, September.
  • Handle: RePEc:spr:scient:v:129:y:2024:i:9:d:10.1007_s11192-024-05114-z
    DOI: 10.1007/s11192-024-05114-z

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11192-024-05114-z
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11192-024-05114-z?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Thelwall, Mike & Fairclough, Ruth, 2015. "The influence of time and discipline on the magnitude of correlations between citation counts and quality scores," Journal of Informetrics, Elsevier, vol. 9(3), pages 529-541.
    2. Lutz Bornmann, 2015. "Interrater reliability and convergent validity of F1000Prime peer review," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(12), pages 2415-2426, December.
    3. Bornmann, Lutz, 2014. "Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics," Journal of Informetrics, Elsevier, vol. 8(4), pages 895-903.
    4. Bornmann, Lutz & Tekles, Alexander & Zhang, Helena H. & Ye, Fred Y., 2019. "Do we measure novelty when we analyze unusual combinations of cited references? A validation study of bibliometric novelty indicators based on F1000Prime data," Journal of Informetrics, Elsevier, vol. 13(4).
    5. Weixi Xie & Pengfei Jia & Guangyao Zhang & Xianwen Wang, 2024. "Are reviewer scores consistent with citations?," Scientometrics, Springer;Akadémiai Kiadó, vol. 129(8), pages 4721-4740, August.
    6. Mojisola Erdt & Aarthy Nagarajan & Sei-Ching Joanna Sin & Yin-Leng Theng, 2016. "Altmetrics: an analysis of the state-of-the-art in measuring research impact on social media," Scientometrics, Springer;Akadémiai Kiadó, vol. 109(2), pages 1117-1166, November.
    7. Bornmann, Lutz, 2014. "Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime," Journal of Informetrics, Elsevier, vol. 8(4), pages 935-950.
    8. Ehsan Mohammadi & Mike Thelwall & Stefanie Haustein & Vincent Larivière, 2015. "Who reads research articles? An altmetrics analysis of Mendeley user categories," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(9), pages 1832-1846, September.
    9. Wang, Shiyun & Mao, Jin & Lu, Kun & Cao, Yujie & Li, Gang, 2021. "Understanding interdisciplinary knowledge integration through citance analysis: A case study on eHealth," Journal of Informetrics, Elsevier, vol. 15(4).
    10. Wang, Peiling & Su, Jing, 2021. "Post-publication expert recommendations in faculty opinions (F1000Prime): Recommended articles and citations," Journal of Informetrics, Elsevier, vol. 15(3).
    11. Iqra Safder & Saeed-Ul Hassan, 2019. "Bibliometric-enhanced information retrieval: a novel deep feature engineering approach for algorithm searching from full-text publications," Scientometrics, Springer;Akadémiai Kiadó, vol. 119(1), pages 257-277, April.
    12. Guillaume Cabanac & Ingo Frommholz & Philipp Mayr, 2018. "Bibliometric-enhanced information retrieval: preface," Scientometrics, Springer;Akadémiai Kiadó, vol. 116(2), pages 1225-1227, August.
    13. Peiling Wang & Joshua Williams & Nan Zhang & Qiang Wu, 2020. "F1000Prime recommended articles and their citations: an exploratory study of four journals," Scientometrics, Springer;Akadémiai Kiadó, vol. 122(2), pages 933-955, February.
    14. Bryce, Cormac & Dowling, Michael & Lucey, Brian, 2020. "The journal quality perception gap," Research Policy, Elsevier, vol. 49(5).
    15. Domingo Docampo & Lawrence Cram, 2019. "Highly cited researchers: a moving target," Scientometrics, Springer;Akadémiai Kiadó, vol. 118(3), pages 1011-1025, March.
    16. Michaela Strinzel & Josh Brown & Wolfgang Kaltenbrunner & Sarah de Rijcke & Michael Hill, 2021. "Ten ways to improve academic CVs for fairer research assessment," Palgrave Communications, Palgrave Macmillan, vol. 8(1), pages 1-4, December.
    17. Sten F Odenwald, 2020. "A citation study of earth science projects in citizen science," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-26, July.
    18. Alexander Kalgin & Olga Kalgina & Anna Lebedeva, 2019. "Publication Metrics as a Tool for Measuring Research Productivity and Their Relation to Motivation," Voprosy obrazovaniya / Educational Studies Moscow, National Research University Higher School of Economics, issue 1, pages 44-86.
    19. Gregorio González-Alcaide, 2021. "Bibliometric studies outside the information science and library science field: uncontainable or uncontrollable?," Scientometrics, Springer;Akadémiai Kiadó, vol. 126(8), pages 6837-6870, August.
    20. Ramón A. Feenstra & Emilio Delgado López-Cózar, 2022. "Philosophers’ appraisals of bibliometric indicators and their use in evaluation: from recognition to knee-jerk rejection," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(4), pages 2085-2103, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:scient:v:129:y:2024:i:9:d:10.1007_s11192-024-05114-z. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact Sonal Shukla or Springer Nature Abstracting and Indexing. General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.