
Opening the Black Box: Bootstrapping Sensitivity Measures in Neural Networks for Interpretable Machine Learning

Authors

Listed:
  • Michele La Rocca

    (Department of Economics and Statistics, University of Salerno, 84084 Fisciano, Italy)

  • Cira Perna

    (Department of Economics and Statistics, University of Salerno, 84084 Fisciano, Italy)

Abstract

Artificial neural networks are powerful tools for data analysis, particularly in the context of highly nonlinear regression models. However, their usefulness is critically limited by their black-box nature, which leaves the fitted model without a clear interpretation. To partially address this problem, the paper focuses on the important task of feature selection. It proposes and discusses a statistical test procedure for selecting the set of input variables that are relevant to the model while taking into account the multiple-testing nature of the problem. The approach falls within the general framework of sensitivity analysis and uses, as a sensitivity measure, the conditional expectation of functions of the partial derivatives of the output with respect to the inputs. The proposed procedure makes extensive use of the bootstrap to approximate the distribution of the test statistic under the null while controlling the familywise error rate, thereby correcting for the data snooping that arises from multiple testing. In particular, a pair bootstrap scheme is implemented to obtain consistent results when the statistical model is misspecified, a typical characteristic of neural networks. Numerical examples and a Monte Carlo simulation were carried out to verify the ability of the proposed test procedure to correctly identify the set of relevant features.
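The recipe described in the abstract (a derivative-based sensitivity measure for each input, a pair bootstrap to approximate its sampling distribution, and a max-type rule to keep the familywise error rate under control) can be sketched in a few lines. The Python code below is only an illustrative sketch, not the authors' implementation: the choice of scikit-learn's MLPRegressor as the network, the finite-difference approximation of the partial derivatives, the number of bootstrap replicates, and the single-step max-statistic decision rule are all assumptions made here for concreteness.

```python
# Illustrative sketch (not the authors' code): derivative-based sensitivity
# measures for a fitted neural network, with a pair bootstrap used to gauge
# their sampling variability and a max-type statistic, in the spirit of
# Romano and Wolf (2005), to control the familywise error rate.
import numpy as np
from sklearn.neural_network import MLPRegressor

def sensitivities(model, X, eps=1e-4):
    """Sample mean of the squared partial derivative of the fitted output
    with respect to each input, approximated by central finite differences."""
    n, p = X.shape
    s = np.empty(p)
    for j in range(p):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        s[j] = np.mean(((model.predict(Xp) - model.predict(Xm)) / (2 * eps)) ** 2)
    return s

def fit_net(X, y):
    # Small single-hidden-layer network; architecture chosen only for the example.
    return MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, y)

rng = np.random.default_rng(0)
n, p = 300, 4
X = rng.normal(size=(n, p))
# The last two inputs are irrelevant by construction.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

s_hat = sensitivities(fit_net(X, y), X)

# Pair bootstrap: resample the pairs (x_i, y_i) jointly, refit the network,
# recompute the sensitivity measures. B is kept small only to keep this fast.
B = 100
s_boot = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    s_boot[b] = sensitivities(fit_net(X[idx], y[idx]), X[idx])

# Max-type critical value from the centred bootstrap statistics: an input is
# flagged as relevant when its estimated sensitivity exceeds the 95% quantile
# of max_j |s*_j - s_hat_j| (a single-step, conservative simplification).
crit = np.quantile(np.max(np.abs(s_boot - s_hat), axis=1), 0.95)
print("estimated sensitivities:", np.round(s_hat, 3))
print("declared relevant:      ", s_hat > crit)
```

Resampling pairs rather than residuals keeps the bootstrap valid when the network is only an approximation of the true regression function, which is the misspecification point made in the abstract; the stepdown refinements of Romano and Wolf would sharpen the single-step rule used in this sketch.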

Suggested Citation

  • Michele La Rocca & Cira Perna, 2022. "Opening the Black Box: Bootstrapping Sensitivity Measures in Neural Networks for Interpretable Machine Learning," Stats, MDPI, vol. 5(2), pages 1-18, April.
  • Handle: RePEc:gam:jstats:v:5:y:2022:i:2:p:26-457:d:801953

    Download full text from publisher

    File URL: https://www.mdpi.com/2571-905X/5/2/26/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2571-905X/5/2/26/
    Download Restriction: no

    References listed on IDEAS

    1. Joseph P. Romano & Michael Wolf, 2005. "Stepwise Multiple Testing as Formalized Data Snooping," Econometrica, Econometric Society, vol. 73(4), pages 1237-1282, July.
    2. La Rocca, Michele & Perna, Cira, 2005. "Variable selection in neural network regression models with dependent data: a subsampling approach," Computational Statistics & Data Analysis, Elsevier, vol. 48(2), pages 415-429, February.
    3. Joseph P. Romano & Michael Wolf, 2005. "Exact and Approximate Stepdown Methods for Multiple Hypothesis Testing," Journal of the American Statistical Association, American Statistical Association, vol. 100, pages 94-108, March.
    4. Goncalves, Silvia & Kilian, Lutz, 2004. "Bootstrapping autoregressions with conditional heteroskedasticity of unknown form," Journal of Econometrics, Elsevier, vol. 123(1), pages 89-120, November.
    5. Romano, Joseph P. & Shaikh, Azeem M. & Wolf, Michael, 2008. "Formalized Data Snooping Based On Generalized Error Rates," Econometric Theory, Cambridge University Press, vol. 24(2), pages 404-447, April.
    6. Halbert White, 2000. "A Reality Check for Data Snooping," Econometrica, Econometric Society, vol. 68(5), pages 1097-1126, September.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zeng-Hua Lu, 2019. "Extended MinP Tests for Global and Multiple testing," Papers 1911.04696, arXiv.org, revised Aug 2024.
    2. Adriano Koshiyama & Nick Firoozye, 2019. "Avoiding Backtesting Overfitting by Covariance-Penalties: an empirical investigation of the ordinary and total least squares cases," Papers 1905.05023, arXiv.org.
    3. Christopher J. Bennett, 2009. "p-Value Adjustments for Asymptotic Control of the Generalized Familywise Error Rate," Vanderbilt University Department of Economics Working Papers 0905, Vanderbilt University Department of Economics.
    4. Alexandre Belloni & Victor Chernozhukov & Denis Chetverikov & Christian Hansen & Kengo Kato, 2018. "High-dimensional econometrics and regularized GMM," CeMMAP working papers CWP35/18, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    5. Bajgrowicz, Pierre & Scaillet, Olivier, 2012. "Technical trading revisited: False discoveries, persistence tests, and transaction costs," Journal of Financial Economics, Elsevier, vol. 106(3), pages 473-491.
6. Alvaro Escribano & Genaro Sucarrat, 2011. "Automated model selection in finance: General-to-specific modelling of the mean and volatility specifications," Working Papers 2011-09, Instituto Madrileño de Estudios Avanzados (IMDEA) Ciencias Sociales.
    7. Romano, Joseph P. & Shaikh, Azeem M. & Wolf, Michael, 2008. "Formalized Data Snooping Based On Generalized Error Rates," Econometric Theory, Cambridge University Press, vol. 24(2), pages 404-447, April.
    8. John A. List & Azeem M. Shaikh & Yang Xu, 2019. "Multiple hypothesis testing in experimental economics," Experimental Economics, Springer;Economic Science Association, vol. 22(4), pages 773-793, December.
    9. Georgios Sermpinis & Arman Hassanniakalager & Charalampos Stasinakis & Ioannis Psaradellis, 2018. "Technical Analysis and Discrete False Discovery Rate: Evidence from MSCI Indices," Papers 1811.06766, arXiv.org, revised Jun 2019.
    10. Gabriel Frahm & Tobias Wickern & Christof Wiechers, 2012. "Multiple tests for the performance of different investment strategies," AStA Advances in Statistical Analysis, Springer;German Statistical Society, vol. 96(3), pages 343-383, July.
    11. Joseph P. Romano & Azeem M. Shaikh & Michael Wolf, 2010. "Hypothesis Testing in Econometrics," Annual Review of Economics, Annual Reviews, vol. 2(1), pages 75-104, September.
    12. Kuang, P. & Schröder, M. & Wang, Q., 2014. "Illusory profitability of technical analysis in emerging foreign exchange markets," International Journal of Forecasting, Elsevier, vol. 30(2), pages 192-205.
    13. Nik Tuzov & Frederi Viens, 2011. "Mutual fund performance: false discoveries, bias, and power," Annals of Finance, Springer, vol. 7(2), pages 137-169, May.
    14. Smeekes, S., 2011. "Bootstrap sequential tests to determine the stationary units in a panel," Research Memorandum 003, Maastricht University, Maastricht Research School of Economics of Technology and Organization (METEOR).
    15. Sucarrat, Genaro, 2009. "Automated financial multi-path GETS modelling," UC3M Working papers. Economics we093620, Universidad Carlos III de Madrid. Departamento de Economía.
    16. Romano, Joseph P. & Wolf, Michael, 2016. "Efficient computation of adjusted p-values for resampling-based stepdown multiple testing," Statistics & Probability Letters, Elsevier, vol. 113(C), pages 38-40.
    17. Hassanniakalager, Arman & Sermpinis, Georgios & Stasinakis, Charalampos, 2021. "Trading the foreign exchange market with technical analysis and Bayesian Statistics," Journal of Empirical Finance, Elsevier, vol. 63(C), pages 230-251.
    18. Stephen A. Gorman & Frank J. Fabozzi, 2021. "The ABC’s of the alternative risk premium: academic roots," Journal of Asset Management, Palgrave Macmillan, vol. 22(6), pages 405-436, October.
    19. Genaro Sucarrat & Alvaro Escribano, 2012. "Automated Model Selection in Finance: General-to-Specific Modelling of the Mean and Volatility Specifications," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 74(5), pages 716-735, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jstats:v:5:y:2022:i:2:p:26-457:d:801953. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.