Printed from https://ideas.repec.org/a/eee/infome/v13y2019i1p299-313.html

Globalised vs averaged: Bias and ranking performance on the author level

Authors

Listed:
  • Dunaiski, Marcel
  • Geldenhuys, Jaco
  • Visser, Willem

Abstract

We analyse the difference between the averaged (average of ratios) and globalised (ratio of averages) author-level aggregation approaches based on various paper-level metrics. We evaluate the aggregation variants in terms of (1) their field bias at the author level and (2) their ranking performance based on test data comprising researchers who have received fellowship status or won prestigious awards for their long-lasting and high-impact research contributions to their fields. We consider various direct and indirect paper-level metrics with different normalisation approaches (mean-based, percentile-based, co-citation-based) and focus on the bias and performance differences between the two aggregation variants of each metric. We execute all experiments on two publication databases that use different field categorisation schemes. The first uses author-chosen concept categories and covers the computer science literature. The second covers all disciplines and categorises papers by keywords based on their contents. In terms of bias, we find relatively little difference between the averaged and globalised variants. For mean-normalised citation counts we find no significant difference between the two approaches. However, the percentile-based metric shows less bias with the globalised approach, except for citation windows smaller than four years. On the multi-disciplinary database, PageRank has the least overall bias but shows no significant difference between the two aggregation variants. The averaged variants of most metrics have less bias for small citation windows. For larger citation windows the differences are smaller and mostly insignificant.
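The averaged (average of ratios) and globalised (ratio of averages) aggregation variants contrasted in the abstract can be sketched in a few lines. The citation counts and field baselines below are hypothetical, purely for illustration; a "baseline" stands for the expected citation count of papers in the same field and year:

```python
# One author's papers: hypothetical citation counts and field/year baselines.
citations = [10, 2, 30, 4]
baselines = [5.0, 4.0, 20.0, 8.0]

# Averaged variant (average of ratios): normalise each paper first, then average.
averaged = sum(c / b for c, b in zip(citations, baselines)) / len(citations)

# Globalised variant (ratio of averages): average citation count divided by
# the average baseline across the author's papers.
globalised = (sum(citations) / len(citations)) / (sum(baselines) / len(baselines))

print(round(averaged, 3))    # 1.125
print(round(globalised, 3))  # 1.243
```

The two variants generally disagree because the averaged form weights every paper equally, while the globalised form gives more weight to papers in fields with larger baselines.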

Suggested Citation

  • Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2019. "Globalised vs averaged: Bias and ranking performance on the author level," Journal of Informetrics, Elsevier, vol. 13(1), pages 299-313.
  • Handle: RePEc:eee:infome:v:13:y:2019:i:1:p:299-313
    DOI: 10.1016/j.joi.2019.01.006

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1751157718304498
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.joi.2019.01.006?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Loet Leydesdorff & Lutz Bornmann & Rüdiger Mutz & Tobias Opthof, 2011. "Turning the tables on citation analysis one more time: Principles for comparing sets of documents," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 62(7), pages 1370-1381, July.
    2. Cristian Colliander, 2015. "A novel approach to citation normalization: A similarity-based method for creating reference sets," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(3), pages 489-500, March.
    3. A Cecile J W Janssens & Michael Goodman & Kimberly R Powell & Marta Gwinn, 2017. "A critical evaluation of the algorithm behind the Relative Citation Ratio (RCR)," PLOS Biology, Public Library of Science, vol. 15(10), pages 1-5, October.
    4. Ludo Waltman & Nees Jan van Eck, 2012. "A new methodology for constructing a publication‐level classification system of science," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 63(12), pages 2378-2392, December.
    5. Fiala, Dalibor & Tutoky, Gabriel, 2017. "PageRank-based prediction of award-winning researchers and the impact of citations," Journal of Informetrics, Elsevier, vol. 11(4), pages 1044-1068.
    6. Juan A. Crespo & Neus Herranz & Yunrong Li & Javier Ruiz-Castillo, 2014. "The effect on citation inequality of differences in citation practices at the web of science subject category level," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 65(6), pages 1244-1256, June.
    7. Jevin D. West & Michael C. Jensen & Ralph J. Dandrea & Gregory J. Gordon & Carl T. Bergstrom, 2013. "Author‐level Eigenfactor metrics: Evaluating the influence of authors, institutions, and countries within the social science research network community," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 64(4), pages 787-801, April.
    8. Fiala, Dalibor & Šubelj, Lovro & Žitnik, Slavko & Bajec, Marko, 2015. "Do PageRank-based author rankings outperform simple citation counts?," Journal of Informetrics, Elsevier, vol. 9(2), pages 334-348.
    9. Mariani, Manuel Sebastian & Medo, Matúš & Zhang, Yi-Cheng, 2016. "Identification of milestone papers through time-balanced network centrality," Journal of Informetrics, Elsevier, vol. 10(4), pages 1207-1223.
    10. Thed N. van Leeuwen & Clara Calero Medina, 2012. "Redefining the field of economics: Improving field normalization for the application of bibliometric techniques in the field of economics," Research Evaluation, Oxford University Press, vol. 21(1), pages 61-70, February.
    11. Bornmann, Lutz & Marx, Werner, 2015. "Methods for the generation of normalized citation impact scores in bibliometrics: Which method best reflects the judgements of experts?," Journal of Informetrics, Elsevier, vol. 9(2), pages 408-418.
    12. Bornmann, Lutz & Leydesdorff, Loet & Mutz, Rüdiger, 2013. "The use of percentiles and percentile rank classes in the analysis of bibliometric data: Opportunities and limits," Journal of Informetrics, Elsevier, vol. 7(1), pages 158-165.
    13. Ludo Waltman & Nees Jan van Eck & Thed N. van Leeuwen & Martijn S. Visser & Anthony F. J. van Raan, 2011. "Towards a new crown indicator: an empirical analysis," Scientometrics, Springer;Akadémiai Kiadó, vol. 87(3), pages 467-481, June.
    14. Jonathan Adams & Karen Gurney & Louise Jackson, 2008. "Calibrating the zoom — a test of Zitt’s hypothesis," Scientometrics, Springer;Akadémiai Kiadó, vol. 75(1), pages 81-95, April.
    15. Ludo Waltman & Nees Jan van Eck, 2012. "A new methodology for constructing a publication-level classification system of science," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 63(12), pages 2378-2392, December.
    16. Bornmann, Lutz & Marx, Werner, 2018. "Critical rationalism and the search for standard (field-normalized) indicators in bibliometrics," Journal of Informetrics, Elsevier, vol. 12(3), pages 598-604.
    17. Vinkler, Péter, 2012. "The case of scientometricians with the “absolute relative” impact indicator," Journal of Informetrics, Elsevier, vol. 6(2), pages 254-264.
    18. Michel Zitt & Suzy Ramanana-Rahary & Elise Bassecoulard, 2005. "Relativity of citation performance and excellence measures: From cross-field to cross-scale effects of field-normalisation," Scientometrics, Springer;Akadémiai Kiadó, vol. 63(2), pages 373-401, April.
    19. Larivière, Vincent & Gingras, Yves, 2011. "Averages of ratios vs. ratios of averages: An empirical analysis of four levels of aggregation," Journal of Informetrics, Elsevier, vol. 5(3), pages 392-399.
    20. Smolinsky, Lawrence, 2016. "Expected number of citations and the crown indicator," Journal of Informetrics, Elsevier, vol. 10(1), pages 43-47.
    21. Silva, F.N. & Rodrigues, F.A. & Oliveira, O.N. & da F. Costa, L., 2013. "Quantifying the interdisciplinarity of scientific journals and fields," Journal of Informetrics, Elsevier, vol. 7(2), pages 469-477.
    22. Dunaiski, Marcel & Visser, Willem & Geldenhuys, Jaco, 2016. "Evaluating paper and author ranking algorithms using impact and contribution awards," Journal of Informetrics, Elsevier, vol. 10(2), pages 392-407.
    23. Nykl, Michal & Ježek, Karel & Fiala, Dalibor & Dostal, Martin, 2014. "PageRank variants in the evaluation of citation networks," Journal of Informetrics, Elsevier, vol. 8(3), pages 683-692.
    24. Dalibor Fiala & François Rousselot & Karel Ježek, 2008. "PageRank for bibliographic networks," Scientometrics, Springer;Akadémiai Kiadó, vol. 76(1), pages 135-158, July.
    25. Ruiz-Castillo, Javier & Waltman, Ludo, 2015. "Field-normalized citation impact indicators using algorithmically constructed classification systems of science," Journal of Informetrics, Elsevier, vol. 9(1), pages 102-117.
    26. Lundberg, Jonas, 2007. "Lifting the crown—citation z-score," Journal of Informetrics, Elsevier, vol. 1(2), pages 145-154.
    27. Ying Ding & Erjia Yan & Arthur Frazho & James Caverlee, 2009. "PageRank for ranking authors in co‐citation networks," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 60(11), pages 2229-2243, November.
    28. Radicchi, Filippo & Castellano, Claudio, 2012. "Testing the fairness of citation indicators for comparison across scientific domains: The case of fractional citation counts," Journal of Informetrics, Elsevier, vol. 6(1), pages 121-130.
    29. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2018. "How to evaluate rankings of academic entities using test data," Journal of Informetrics, Elsevier, vol. 12(3), pages 631-655.
    30. Waltman, Ludo & van Eck, Nees Jan & van Leeuwen, Thed N. & Visser, Martijn S. & van Raan, Anthony F.J., 2011. "Towards a new crown indicator: Some theoretical considerations," Journal of Informetrics, Elsevier, vol. 5(1), pages 37-47.
    31. Opthof, Tobias & Leydesdorff, Loet, 2010. "Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance," Journal of Informetrics, Elsevier, vol. 4(3), pages 423-430.
    32. Chao Gao & Zhen Wang & Xianghua Li & Zili Zhang & Wei Zeng, 2016. "PR-Index: Using the h-Index and PageRank for Determining True Impact," PLOS ONE, Public Library of Science, vol. 11(9), pages 1-13, September.
    33. Fiala, Dalibor, 2012. "Time-aware PageRank for bibliographic networks," Journal of Informetrics, Elsevier, vol. 6(3), pages 370-388.
    34. Ludo Waltman & Michael Schreiber, 2013. "On the calculation of percentile-based bibliometric indicators," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 64(2), pages 372-379, February.
    35. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2018. "Author ranking evaluation at scale," Journal of Informetrics, Elsevier, vol. 12(3), pages 679-702.
    36. Nykl, Michal & Campr, Michal & Ježek, Karel, 2015. "Author ranking based on personalized PageRank," Journal of Informetrics, Elsevier, vol. 9(4), pages 777-799.
    37. Ludo Waltman & Erjia Yan & Nees Jan van Eck, 2011. "A recursive field-normalized bibliometric performance indicator: an application to the field of library and information science," Scientometrics, Springer;Akadémiai Kiadó, vol. 89(1), pages 301-314, October.
    38. Chen, P. & Xie, H. & Maslov, S. & Redner, S., 2007. "Finding scientific gems with Google’s PageRank algorithm," Journal of Informetrics, Elsevier, vol. 1(1), pages 8-15.
    39. Ludo Waltman & Michael Schreiber, 2013. "On the calculation of percentile‐based bibliometric indicators," Journal of the American Society for Information Science and Technology, Association for Information Science & Technology, vol. 64(2), pages 372-379, February.
    40. Sven E. Hug & Michael Ochsner & Martin P. Brändle, 2017. "Citation analysis with microsoft academic," Scientometrics, Springer;Akadémiai Kiadó, vol. 111(1), pages 371-378, April.
    41. B Ian Hutchins & Xin Yuan & James M Anderson & George M Santangelo, 2016. "Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level," PLOS Biology, Public Library of Science, vol. 14(9), pages 1-25, September.
    42. Vaccario, Giacomo & Medo, Matúš & Wider, Nicolas & Mariani, Manuel Sebastian, 2017. "Quantifying and suppressing ranking bias in a large citation network," Journal of Informetrics, Elsevier, vol. 11(3), pages 766-782.
    43. Colliander, Cristian & Ahlgren, Per, 2011. "The effects and their stability of field normalization baseline on relative performance with respect to citation impact: A case study of 20 natural science departments," Journal of Informetrics, Elsevier, vol. 5(1), pages 101-113.
    44. Waltman, Ludo, 2016. "A review of the literature on citation impact indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 365-391.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project, subscribe to its RSS feed for this item.


    Cited by:

    1. Parul Khurana & Kiran Sharma, 2022. "Impact of h-index on author’s rankings: an improvement to the h-index for lower-ranked authors," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(8), pages 4483-4498, August.
    2. Xu, Shuqi & Mariani, Manuel Sebastian & Lü, Linyuan & Medo, Matúš, 2020. "Unbiased evaluation of ranking metrics reveals consistent performance in science and technology citation data," Journal of Informetrics, Elsevier, vol. 14(1).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2019. "On the interplay between normalisation, bias, and performance of paper impact metrics," Journal of Informetrics, Elsevier, vol. 13(1), pages 270-290.
    2. Waltman, Ludo, 2016. "A review of the literature on citation impact indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 365-391.
    3. Bouyssou, Denis & Marchant, Thierry, 2016. "Ranking authors using fractional counting of citations: An axiomatic approach," Journal of Informetrics, Elsevier, vol. 10(1), pages 183-199.
    4. Liwei Cai & Jiahao Tian & Jiaying Liu & Xiaomei Bai & Ivan Lee & Xiangjie Kong & Feng Xia, 2019. "Scholarly impact assessment: a survey of citation weighting solutions," Scientometrics, Springer;Akadémiai Kiadó, vol. 118(2), pages 453-478, February.
    5. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2018. "Author ranking evaluation at scale," Journal of Informetrics, Elsevier, vol. 12(3), pages 679-702.
    6. Vaccario, Giacomo & Medo, Matúš & Wider, Nicolas & Mariani, Manuel Sebastian, 2017. "Quantifying and suppressing ranking bias in a large citation network," Journal of Informetrics, Elsevier, vol. 11(3), pages 766-782.
    7. Dunaiski, Marcel & Geldenhuys, Jaco & Visser, Willem, 2018. "How to evaluate rankings of academic entities using test data," Journal of Informetrics, Elsevier, vol. 12(3), pages 631-655.
    8. Lutz Bornmann & Alexander Tekles & Loet Leydesdorff, 2019. "How well does I3 perform for impact measurement compared to other bibliometric indicators? The convergent validity of several (field-normalized) indicators," Scientometrics, Springer;Akadémiai Kiadó, vol. 119(2), pages 1187-1205, May.
    9. Xu, Shuqi & Mariani, Manuel Sebastian & Lü, Linyuan & Medo, Matúš, 2020. "Unbiased evaluation of ranking metrics reveals consistent performance in science and technology citation data," Journal of Informetrics, Elsevier, vol. 14(1).
    10. Bornmann, Lutz & Haunschild, Robin, 2016. "Citation score normalized by cited references (CSNCR): The introduction of a new citation impact indicator," Journal of Informetrics, Elsevier, vol. 10(3), pages 875-887.
    11. Waltman, Ludo & van Eck, Nees Jan, 2013. "A systematic empirical comparison of different approaches for normalizing citation impact indicators," Journal of Informetrics, Elsevier, vol. 7(4), pages 833-849.
    12. Mingers, John & Leydesdorff, Loet, 2015. "A review of theory and practice in scientometrics," European Journal of Operational Research, Elsevier, vol. 246(1), pages 1-19.
    13. Bornmann, Lutz & Marx, Werner, 2018. "Critical rationalism and the search for standard (field-normalized) indicators in bibliometrics," Journal of Informetrics, Elsevier, vol. 12(3), pages 598-604.
    14. Ruiz-Castillo, Javier & Waltman, Ludo, 2015. "Field-normalized citation impact indicators using algorithmically constructed classification systems of science," Journal of Informetrics, Elsevier, vol. 9(1), pages 102-117.
    15. Hu, Zhigang & Tian, Wencan & Xu, Shenmeng & Zhang, Chunbo & Wang, Xianwen, 2018. "Four pitfalls in normalizing citation indicators: An investigation of ESI’s selection of highly cited papers," Journal of Informetrics, Elsevier, vol. 12(4), pages 1133-1145.
    16. Bornmann, Lutz & Haunschild, Robin & Mutz, Rüdiger, 2020. "Should citations be field-normalized in evaluative bibliometrics? An empirical analysis based on propensity score matching," Journal of Informetrics, Elsevier, vol. 14(4).
    17. Schneider, Jesper W., 2013. "Caveats for using statistical significance tests in research assessments," Journal of Informetrics, Elsevier, vol. 7(1), pages 50-62.
    18. Dunaiski, Marcel & Visser, Willem & Geldenhuys, Jaco, 2016. "Evaluating paper and author ranking algorithms using impact and contribution awards," Journal of Informetrics, Elsevier, vol. 10(2), pages 392-407.
    19. Pislyakov, Vladimir, 2022. "On some properties of medians, percentiles, baselines, and thresholds in empirical bibliometric analysis," Journal of Informetrics, Elsevier, vol. 16(4).
    20. Abramo, Giovanni & D’Angelo, Ciriaco Andrea, 2016. "A farewell to the MNCS and like size-independent indicators," Journal of Informetrics, Elsevier, vol. 10(2), pages 646-651.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:infome:v:13:y:2019:i:1:p:299-313. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu. General contact details of provider: http://www.elsevier.com/locate/joi.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.