Printed from https://ideas.repec.org/a/eee/csdana/v181y2023ics0167947322002699.html

Efficient permutation testing of variable importance measures by the example of random forests

Author

Listed:
  • Hapfelmeier, Alexander
  • Hornung, Roman
  • Haller, Bernhard

Abstract

Hypothesis testing of variable importance measures (VIMPs) is still the subject of ongoing research. This applies in particular to random forests (RF), for which VIMPs are a popular feature. Among recent developments, heuristic approaches to parametric testing have been proposed whose distributional assumptions rest on empirical evidence. Other formal tests have been derived analytically under regularity conditions. However, these approaches can be computationally expensive or even practically infeasible. The same problem affects non-parametric permutation tests, which are, on the other hand, distribution-free and can generically be applied to any kind of prediction model and VIMP. Embracing this advantage, it is proposed to use sequential permutation tests and sequential p-value estimation to reduce the computational costs of conventional permutation tests. These costs can be particularly high for complex prediction models. Therefore, RF's popular and widely used permutation VIMP (pVIMP) serves as a practical and relevant application example. The results of simulation studies confirm the theoretical properties of the sequential tests: the type-I error probability is controlled at the nominal level, and high power is maintained with considerably fewer permutations than conventional permutation testing requires. The numerical stability of the methods is investigated in two additional application studies. In summary, theoretically sound sequential permutation testing of VIMPs is possible at greatly reduced computational cost. Recommendations for application are given. An implementation for RF's pVIMP is provided through the accompanying R package rfvimptest.
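The core idea of the abstract, stopping a permutation test early once the outcome is settled, can be illustrated in a few lines. The following is a minimal, generic sketch in the spirit of sequential Monte Carlo testing with a Besag and Clifford style stopping rule; it is not the rfvimptest implementation, and the function names and the toy "importance" statistic are illustrative assumptions. Permutation statistics are drawn one at a time, and the test stops as soon as a fixed number h of them reach the observed statistic, since the p-value estimate can then no longer fall below h divided by the maximum number of permutations.

```python
import random

def sequential_perm_test(observed_stat, draw_perm_stat, h=10, max_perm=1000):
    """Sequential permutation test with an early-stopping rule (sketch).

    Draws permutation statistics one at a time. If h of them reach or
    exceed the observed statistic, the test stops early (the variable is
    judged unimportant) and returns the estimate h / b, where b is the
    number of permutations used. Otherwise all max_perm permutations are
    drawn and the conventional Monte Carlo p-value is returned.
    Returns (p_value, n_permutations_used).
    """
    exceed = 0
    for b in range(1, max_perm + 1):
        if draw_perm_stat() >= observed_stat:
            exceed += 1
            if exceed == h:
                # Early stop: far fewer permutations than max_perm needed.
                return exceed / b, b
    # Ran the full budget: conventional Monte Carlo p-value.
    return (exceed + 1) / (max_perm + 1), max_perm

# Toy example: the "importance" of a null variable is pure noise, so the
# permuted statistics frequently exceed the observed one and the test
# stops long before the permutation budget is exhausted.
rng = random.Random(0)
p, used = sequential_perm_test(0.0, lambda: rng.gauss(0.0, 1.0),
                               h=10, max_perm=1000)
```

For a truly important variable, the permuted statistics rarely exceed the observed one, so the loop tends to run to its budget and returns a small p-value; the computational saving comes from the many unimportant variables that are dismissed after only a handful of permutations.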

Suggested Citation

  • Hapfelmeier, Alexander & Hornung, Roman & Haller, Bernhard, 2023. "Efficient permutation testing of variable importance measures by the example of random forests," Computational Statistics & Data Analysis, Elsevier, vol. 181(C).
  • Handle: RePEc:eee:csdana:v:181:y:2023:i:c:s0167947322002699
    DOI: 10.1016/j.csda.2022.107689

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167947322002699
    Download Restriction: Full text for ScienceDirect subscribers only.




    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hapfelmeier, A. & Ulm, K., 2014. "Variable selection by Random Forests using data with missing values," Computational Statistics & Data Analysis, Elsevier, vol. 80(C), pages 129-139.
    2. Fellinghauer, Bernd & Bühlmann, Peter & Ryffel, Martin & von Rhein, Michael & Reinhardt, Jan D., 2013. "Stable graphical model estimation with Random Forests for discrete, continuous, and mixed variables," Computational Statistics & Data Analysis, Elsevier, vol. 64(C), pages 132-152.
    3. Florian Pargent & Florian Pfisterer & Janek Thomas & Bernd Bischl, 2022. "Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features," Computational Statistics, Springer, vol. 37(5), pages 2671-2692, November.
    4. Massimiliano Fessina & Giambattista Albora & Andrea Tacchella & Andrea Zaccaria, 2022. "Which products activate a product? An explainable machine learning approach," Papers 2212.03094, arXiv.org.
    5. Hornung, Roman & Boulesteix, Anne-Laure, 2022. "Interaction forests: Identifying and exploiting interpretable quantitative and qualitative interaction effects," Computational Statistics & Data Analysis, Elsevier, vol. 171(C).
    6. Saiedeh Haji-Maghsoudi & Azam Rastegari & Behshid Garrusi & Mohammad Reza Baneshi, 2018. "Addressing the problem of missing data in decision tree modeling," Journal of Applied Statistics, Taylor & Francis Journals, vol. 45(3), pages 547-557, February.
    7. Weijun Wang & Dan Zhao & Liguo Fan & Yulong Jia, 2019. "Study on Icing Prediction of Power Transmission Lines Based on Ensemble Empirical Mode Decomposition and Feature Selection Optimized Extreme Learning Machine," Energies, MDPI, vol. 12(11), pages 1-21, June.
    8. Jörg Kalbfuß & Reto Odermatt & Alois Stutzer, 2018. "Medical marijuana laws and mental health in the United States," CEP Discussion Papers dp1546, Centre for Economic Performance, LSE.
    9. Albert Stuart Reece & Gary Kenneth Hulse, 2022. "European Epidemiological Patterns of Cannabis- and Substance-Related Congenital Neurological Anomalies: Geospatiotemporal and Causal Inferential Study," IJERPH, MDPI, vol. 20(1), pages 1-35, December.
    10. Kandula, Shanthan & Krishnamoorthy, Srikumar & Roy, Debjit, 2020. "A Predictive and Prescriptive Analytics Framework for Efficient E-Commerce Order Delivery," IIMA Working Papers WP 2020-11-01, Indian Institute of Management Ahmedabad, Research and Publication Department.
    11. Van Belle, Jente & Guns, Tias & Verbeke, Wouter, 2021. "Using shared sell-through data to forecast wholesaler demand in multi-echelon supply chains," European Journal of Operational Research, Elsevier, vol. 288(2), pages 466-479.
    12. Philipp Bach & Victor Chernozhukov & Malte S. Kurz & Martin Spindler & Sven Klaassen, 2021. "DoubleML -- An Object-Oriented Implementation of Double Machine Learning in R," Papers 2103.09603, arXiv.org, revised Feb 2024.
    13. Marchetto, Elisa & Da Re, Daniele & Tordoni, Enrico & Bazzichetto, Manuele & Zannini, Piero & Celebrin, Simone & Chieffallo, Ludovico & Malavasi, Marco & Rocchini, Duccio, 2023. "Testing the effect of sample prevalence and sampling methods on probability- and favourability-based SDMs," Ecological Modelling, Elsevier, vol. 477(C).
    14. Eeva-Katri Kumpula & Pauline Norris & Adam C Pomerleau, 2020. "Stocks of paracetamol products stored in urban New Zealand households: A cross-sectional study," PLOS ONE, Public Library of Science, vol. 15(6), pages 1-11, June.
    15. Michael Bücker & Gero Szepannek & Alicja Gosiewska & Przemysław Biecek, 2020. "Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring," Papers 2009.13384, arXiv.org.
    16. Jian Lu & Raheel Ahmad & Thomas Nguyen & Jeffrey Cifello & Humza Hemani & Jiangyuan Li & Jinguo Chen & Siyi Li & Jing Wang & Achouak Achour & Joseph Chen & Meagan Colie & Ana Lustig & Christopher Dunn, 2022. "Heterogeneity and transcriptome changes of human CD8+ T cells across nine decades of life," Nature Communications, Nature, vol. 13(1), pages 1-13, December.
    17. Timo Schulte & Tillmann Wurz & Oliver Groene & Sabine Bohnet-Joschko, 2023. "Big Data Analytics to Reduce Preventable Hospitalizations—Using Real-World Data to Predict Ambulatory Care-Sensitive Conditions," IJERPH, MDPI, vol. 20(6), pages 1-16, March.
    18. Qingrong Tan & Yan Cai & Fen Luo & Dongbo Tu, 2023. "Development of a High-Accuracy and Effective Online Calibration Method in CD-CAT Based on Gini Index," Journal of Educational and Behavioral Statistics, vol. 48(1), pages 103-141, February.
    19. Lkhagvadorj Munkhdalai & Tsendsuren Munkhdalai & Oyun-Erdene Namsrai & Jong Yun Lee & Keun Ho Ryu, 2019. "An Empirical Comparison of Machine-Learning Methods on Bank Client Credit Assessments," Sustainability, MDPI, vol. 11(3), pages 1-23, January.
    20. Fogliato Riccardo & Oliveira Natalia L. & Yurko Ronald, 2021. "TRAP: a predictive framework for the Assessment of Performance in Trail Running," Journal of Quantitative Analysis in Sports, De Gruyter, vol. 17(2), pages 129-143, June.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.