Pedagogical Merit Review of Animal Use for Education in Canada

Authors

  • Marc T Avey
  • Gilly Griffin

Abstract

There are two components to the review of animal-based protocols in Canada: review of the merit of the study itself, and review of the ethical acceptability of the work. Despite the perceived importance of the quality assurance these reviews provide, there are few studies of the peer-based merit review system for animal-based protocols for research and education. Institutional animal care committees (ACCs) generally rely on external peer review of scientific merit for animal-based research. In contrast, peer review of animal-based teaching/training depends on a review of pedagogical merit carried out by the ACC itself or by another committee within the institution. The objective of this study was to evaluate ACC members’ views on current practices and policies, as well as alternate policies, for the review of animal-based teaching/training. We conducted a national web-based survey of ACC members with both quantitative and qualitative response options. Responses from 167 ACC members indicated broad concerns about administrative burden despite strong support for both the current and alternate policies. Participants’ comments focused mostly on the merit review process (54%), relative to the efficiency (21%), impact (13%), and other (12%) aspects of evaluation. Approximately half (49%) of the comments fell into emergent themes focused on some type of burden: burden from additional pedagogical merit review (16%), a limited need for the review (12%), and a lack of resources (expertise, 11%; people/money, 10%). Participants indicated that the current system for pedagogical merit review is effective (60%), but most also reported at least some challenge (86%) with the current peer review process. There was broad support for additional guidance on the justification, criteria, types of animal use, and objectives of pedagogical merit review. Participants also supported ethical review and application of the Three Rs in the review process. A clear priority for survey participants was updating guidance to better facilitate the merit review of animal-based protocols for education. Doing so successfully will require balancing the need for improved guidance against the reality of limited resources at local institutions, a familiar dilemma for scientists and policy makers alike.

Suggested Citation

  • Marc T Avey & Gilly Griffin, 2016. "Pedagogical Merit Review of Animal Use for Education in Canada," PLOS ONE, Public Library of Science, vol. 11(6), pages 1-15, June.
  • Handle: RePEc:plo:pone00:0158002
    DOI: 10.1371/journal.pone.0158002

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0158002
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0158002&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0158002?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Lutz Bornmann & Gerlind Wallon & Anna Ledin, 2008. "Does the Committee Peer Review Select the Best Applicants for Funding? An Investigation of the Selection Process for Two European Molecular Biology Organization Programmes," PLOS ONE, Public Library of Science, vol. 3(10), pages 1-11, October.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bornmann, Lutz & Mutz, Rüdiger & Daniel, Hans-Dieter, 2010. "The h index research output measurement: Two approaches to enhance its accuracy," Journal of Informetrics, Elsevier, vol. 4(3), pages 407-414.
    2. Annita Nugent & Ho Fai Chan & Uwe Dulleck, 2022. "Government funding of university-industry collaboration: exploring the impact of targeted funding on university patent activity," Scientometrics, Springer;Akadémiai Kiadó, vol. 127(1), pages 29-73, January.
    3. van den Besselaar, Peter & Sandström, Ulf, 2015. "Early career grants, performance, and careers: A study on predictive validity of grant decisions," Journal of Informetrics, Elsevier, vol. 9(4), pages 826-838.
    4. Marcin Kozak & Lutz Bornmann, 2012. "A New Family of Cumulative Indexes for Measuring Scientific Performance," PLOS ONE, Public Library of Science, vol. 7(10), pages 1-4, October.
    5. Bornmann, Lutz & Daniel, Hans-Dieter, 2009. "Extent of type I and type II errors in editorial decisions: A case study on Angewandte Chemie International Edition," Journal of Informetrics, Elsevier, vol. 3(4), pages 348-352.
    6. Hendy Abdoul & Christophe Perrey & Philippe Amiel & Florence Tubach & Serge Gottot & Isabelle Durand-Zaleski & Corinne Alberti, 2012. "Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices," PLOS ONE, Public Library of Science, vol. 7(9), pages 1-15, September.
    7. Eric Libby & Leon Glass, 2010. "The Calculus of Committee Composition," PLOS ONE, Public Library of Science, vol. 5(9), pages 1-8, September.
    8. Yu-Wei Chang & Dar-Zen Chen & Mu-Hsuan Huang, 2021. "Do extraordinary science and technology scientists balance their publishing and patenting activities?," PLOS ONE, Public Library of Science, vol. 16(11), pages 1-20, November.
    9. Madsen, Emil Bargmann & Andersen, Jens Peter, 2024. "Funding priorities and health outcomes in Danish medical research," Social Science & Medicine, Elsevier, vol. 360(C).
    10. Kok, Holmer & Faems, Dries & de Faria, Pedro, 2022. "Pork Barrel or Barrel of Gold? Examining the performance implications of earmarking in public R&D grants," Research Policy, Elsevier, vol. 51(7).
    11. Kevin W. Boyack & Caleb Smith & Richard Klavans, 2018. "Toward predicting research proposal success," Scientometrics, Springer;Akadémiai Kiadó, vol. 114(2), pages 449-461, February.
    12. Bornmann, Lutz & Leydesdorff, Loet & Van den Besselaar, Peter, 2010. "A meta-evaluation of scientific research proposals: Different ways of comparing rejected to awarded applications," Journal of Informetrics, Elsevier, vol. 4(3), pages 211-220.
    13. Filippo Radicchi & Claudio Castellano, 2013. "Analysis of bibliometric indicators for individual scholars in a large data set," Scientometrics, Springer;Akadémiai Kiadó, vol. 97(3), pages 627-637, December.
    14. Gemma Elizabeth Derrick & Alessandra Zimmermann & Helen Greaves & Jonathan Best & Richard Klavans, 2024. "Targeted, actionable and fair: Reviewer reports as feedback and its effect on ECR career choices," Research Evaluation, Oxford University Press, vol. 32(4), pages 648-657.
    15. Benda, Wim G.G. & Engels, Tim C.E., 2011. "The predictive validity of peer review: A selective review of the judgmental forecasting qualities of peers, and implications for innovation in science," International Journal of Forecasting, Elsevier, vol. 27(1), pages 166-182.
    16. Dennis L Murray & Douglas Morris & Claude Lavoie & Peter R Leavitt & Hugh MacIsaac & Michael E J Masson & Marc-Andre Villard, 2016. "Bias in Research Grant Evaluation Has Dire Consequences for Small Universities," PLOS ONE, Public Library of Science, vol. 11(6), pages 1-19, June.
    17. Radicchi, Filippo & Castellano, Claudio, 2012. "Testing the fairness of citation indicators for comparison across scientific domains: The case of fractional citation counts," Journal of Informetrics, Elsevier, vol. 6(1), pages 121-130.
    18. Richard R Snell, 2015. "Menage a Quoi? Optimal Number of Peer Reviewers," PLOS ONE, Public Library of Science, vol. 10(4), pages 1-14, April.
    19. T. S. Evans & N. Hopkins & B. S. Kaube, 2012. "Universality of performance indicators based on citation and reference counts," Scientometrics, Springer;Akadémiai Kiadó, vol. 93(2), pages 473-495, November.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.