
An experimental comparison of cross-validation techniques for estimating the area under the ROC curve

Author

Listed:
  • Airola, Antti
  • Pahikkala, Tapio
  • Waegeman, Willem
  • De Baets, Bernard
  • Salakoski, Tapio

Abstract

Reliable estimation of the classification performance of inferred predictive models is difficult when working with small data sets. Cross-validation is in this case a typical strategy for estimating the performance. However, many standard approaches to cross-validation suffer from extensive bias or variance when the area under the ROC curve (AUC) is used as the performance measure. This issue is explored through an extensive simulation study. Leave-pair-out cross-validation is proposed for conditional AUC-estimation, as it is almost unbiased, and its deviation variance is as low as that of the best alternative approaches. When using regularized least-squares based learners, efficient algorithms exist for calculating the leave-pair-out cross-validation estimate.
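
As a rough illustration of the leave-pair-out scheme described in the abstract, the following Python sketch estimates the AUC by refitting a learner once for every positive-negative pair and checking whether the held-out positive example receives the higher score (ties count as one half, as in the usual AUC definition). The function name, the RidgeClassifier estimator, and the synthetic data are illustrative assumptions, not the authors' code; the naive pairwise refitting loop is O(n_pos * n_neg) fits, whereas the abstract notes that much faster algorithms exist for regularized least-squares based learners.

```python
# Minimal sketch of leave-pair-out cross-validation (LPOCV) for AUC estimation.
# Assumes a binary-labelled data set (X, y) with labels in {0, 1} and a
# scikit-learn style estimator exposing fit() and decision_function().
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import RidgeClassifier


def leave_pair_out_auc(estimator, X, y):
    """Estimate AUC by holding out each positive-negative pair in turn."""
    X, y = np.asarray(X), np.asarray(y)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    wins = 0.0
    for i in pos:
        for j in neg:
            # Train on everything except the held-out pair (i, j).
            train = np.setdiff1d(np.arange(len(y)), [i, j])
            model = clone(estimator).fit(X[train], y[train])
            s_pos = model.decision_function(X[[i]])[0]
            s_neg = model.decision_function(X[[j]])[0]
            if s_pos > s_neg:
                wins += 1.0
            elif s_pos == s_neg:
                wins += 0.5  # ties contribute one half
    return wins / (len(pos) * len(neg))


if __name__ == "__main__":
    # Small synthetic example (hypothetical data, for illustration only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=30) > 0).astype(int)
    print(leave_pair_out_auc(RidgeClassifier(alpha=1.0), X, y))
```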

Suggested Citation

  • Airola, Antti & Pahikkala, Tapio & Waegeman, Willem & De Baets, Bernard & Salakoski, Tapio, 2011. "An experimental comparison of cross-validation techniques for estimating the area under the ROC curve," Computational Statistics & Data Analysis, Elsevier, vol. 55(4), pages 1828-1844, April.
  • Handle: RePEc:eee:csdana:v:55:y:2011:i:4:p:1828-1844

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167-9473(10)00446-9
    Download Restriction: Full text for ScienceDirect subscribers only.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Rosa A. Schiavo & David J. Hand, 2000. "Ten More Years of Error Rate Research," International Statistical Review, International Statistical Institute, vol. 68(3), pages 295-310, December.
    2. Kim, Ji-Hyun, 2009. "Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap," Computational Statistics & Data Analysis, Elsevier, vol. 53(11), pages 3735-3745, September.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Riikka Numminen & Ileana Montoya Perez & Ivan Jambor & Tapio Pahikkala & Antti Airola, 2023. "Quicksort leave-pair-out cross-validation for ROC curve analysis," Computational Statistics, Springer, vol. 38(3), pages 1579-1595, September.
    2. Zatonatska Tetiana & Artyukh Tatiana & Tymchenko Kateryna & Dluhopolskyi Oleksandr, 2022. "Forecasting the Behavior of Target Segments to Activate Advertising Tools: Case of Mobile Operator Vodafone Ukraine," Economics, Sciendo, vol. 10(1), pages 87-104, June.
    3. Coolen-Maturi, Tahani & Elkhafifi, Faiza F. & Coolen, Frank P.A., 2014. "Three-group ROC analysis: A nonparametric predictive approach," Computational Statistics & Data Analysis, Elsevier, vol. 78(C), pages 69-81.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Conde David & Salvador Bonifacio & Rueda Cristina & Fernández Miguel A., 2013. "Performance and estimation of the true error rate of classification rules built with additional information. An application to a cancer trial," Statistical Applications in Genetics and Molecular Biology, De Gruyter, vol. 12(5), pages 583-602, October.
    2. John J Nay & Yevgeniy Vorobeychik, 2016. "Predicting Human Cooperation," PLOS ONE, Public Library of Science, vol. 11(5), pages 1-19, May.
    3. Matthew Tuson & Berwin Turlach & Kevin Murray & Mei Ruu Kok & Alistair Vickery & David Whyatt, 2021. "Predicting Future Geographic Hotspots of Potentially Preventable Hospitalisations Using All Subset Model Selection and Repeated K-Fold Cross-Validation," IJERPH, MDPI, vol. 18(19), pages 1-21, September.
    4. Gonzalo Perez-de-la-Cruz & Guillermina Eslava-Gomez, 2019. "Discriminant analysis for discrete variables derived from a tree-structured graphical model," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 13(4), pages 855-876, December.
    5. I. Charvet & A. Suppasri & H. Kimura & D. Sugawara & F. Imamura, 2015. "A multivariate generalized linear tsunami fragility model for Kesennuma City based on maximum flow depths, velocities and debris impact, with evaluation of predictive accuracy," Natural Hazards: Journal of the International Society for the Prevention and Mitigation of Natural Hazards, Springer;International Society for the Prevention and Mitigation of Natural Hazards, vol. 79(3), pages 2073-2099, December.
    6. Khan, Jafar A. & Van Aelst, Stefan & Zamar, Ruben H., 2010. "Fast robust estimation of prediction error based on resampling," Computational Statistics & Data Analysis, Elsevier, vol. 54(12), pages 3121-3130, December.
    7. Mark Lown & Michael Brown & Chloë Brown & Arthur M Yue & Benoy N Shah & Simon J Corbett & George Lewith & Beth Stuart & Michael Moore & Paul Little, 2020. "Machine learning detection of Atrial Fibrillation using wearable technology," PLOS ONE, Public Library of Science, vol. 15(1), pages 1-9, January.
    8. Piccarreta, Raffaella, 2010. "Binary trees for dissimilarity data," Computational Statistics & Data Analysis, Elsevier, vol. 54(6), pages 1516-1524, June.
    9. Ha, Tran Vinh & Asada, Takumi & Arimura, Mikiharu, 2019. "Determination of the influence factors on household vehicle ownership patterns in Phnom Penh using statistical and machine learning methods," Journal of Transport Geography, Elsevier, vol. 78(C), pages 70-86.
    10. Zhengnan Huang & Hongjiu Zhang & Jonathan Boss & Stephen A Goutman & Bhramar Mukherjee & Ivo D Dinov & Yuanfang Guan & for the Pooled Resource Open-Access ALS Clinical Trials Consortium, 2017. "Complete hazard ranking to analyze right-censored data: An ALS survival study," PLOS Computational Biology, Public Library of Science, vol. 13(12), pages 1-21, December.
    11. Xue, Jing-Hao & Titterington, D. Michael, 2010. "On the generative-discriminative tradeoff approach: Interpretation, asymptotic efficiency and classification performance," Computational Statistics & Data Analysis, Elsevier, vol. 54(2), pages 438-451, February.
    12. Gianluca Gazzola & Myong K. Jeong, 2021. "Support vector regression for polyhedral and missing data," Annals of Operations Research, Springer, vol. 303(1), pages 483-506, August.
    13. Ayed Alwadain & Rao Faizan Ali & Amgad Muneer, 2023. "Estimating Financial Fraud through Transaction-Level Features and Machine Learning," Mathematics, MDPI, vol. 11(5), pages 1-15, February.
    14. John J. Nay & Yevgeniy Vorobeychik, 2016. "Predicting Human Cooperation," Papers 1601.07792, arXiv.org, revised Apr 2016.
    15. Shusaku Tsumoto & Tomohiro Kimura & Shoji Hirano, 2022. "Expectation–Maximization (EM) Clustering as a Preprocessing Method for Clinical Pathway Mining," The Review of Socionetwork Strategies, Springer, vol. 16(1), pages 25-52, April.
    16. Zachary K. Collier & Haobai Zhang & Bridgette Johnson, 2021. "Finite Mixture Modeling for Program Evaluation: Resampling and Pre-processing Approaches," Evaluation Review, , vol. 45(6), pages 309-333, December.
    17. Dean Fantazzini & Yufeng Xiao, 2023. "Detecting Pump-and-Dumps with Crypto-Assets: Dealing with Imbalanced Datasets and Insiders’ Anticipated Purchases," Econometrics, MDPI, vol. 11(3), pages 1-73, August.
    18. Zhijian Wang & Likang Zheng & Junyuan Wang & Wenhua Du, 2019. "Research on Novel Bearing Fault Diagnosis Method Based on Improved Krill Herd Algorithm and Kernel Extreme Learning Machine," Complexity, Hindawi, vol. 2019, pages 1-19, November.
    19. Pyzhov, Vladislav & Pyzhov, Stanislav, 2017. "Comparison of methods of data mining techniques for the predictive accuracy," MPRA Paper 79326, University Library of Munich, Germany.
    20. Mark G E White & Neil E Bezodis & Jonathon Neville & Huw Summers & Paul Rees, 2022. "Determining jumping performance from a single body-worn accelerometer using machine learning," PLOS ONE, Public Library of Science, vol. 17(2), pages 1-25, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:55:y:2011:i:4:p:1828-1844. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/csda.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.