
XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification

Author

Listed:
  • Kevin Fauvel

    (Inria, Univ Rennes, CNRS, IRISA, 35042 Rennes, France)

  • Tao Lin

    (College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China)

  • Véronique Masson

    (Inria, Univ Rennes, CNRS, IRISA, 35042 Rennes, France)

  • Élisa Fromont

    (Inria, Univ Rennes, CNRS, IRISA, 35042 Rennes, France)

  • Alexandre Termier

    (Inria, Univ Rennes, CNRS, IRISA, 35042 Rennes, France)

Abstract

Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach that outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations, as it relies on post hoc, model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS classification. XCM is a new compact convolutional neural network that extracts information about the observed variables and time directly from the input data. The XCM architecture thus generalizes well on both large and small datasets, while allowing the full exploitation of a faithful post hoc, model-specific explainability method (Gradient-weighted Class Activation Mapping, Grad-CAM) that precisely identifies the observed variables and timestamps of the input data that are important for predictions. We first show that XCM outperforms the state-of-the-art MTS classifiers on both the large and small public UEA datasets. Then, we illustrate how XCM reconciles performance and explainability on a synthetic dataset, and show that XCM identifies the regions of the input data that are important for predictions more precisely than the current deep learning MTS classifier that also provides faithful explainability. Finally, we show how XCM outperforms the current most accurate state-of-the-art algorithm on a real-world application, while enhancing explainability by providing faithful and more informative explanations.
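To make the design described in the abstract concrete, the sketch below illustrates the kind of two-branch convolutional network it describes, together with a standard Grad-CAM computation over the 2D branch. This is a hypothetical reconstruction in Keras, not the authors' reference implementation: the layer layout, layer names, filter counts, and the window_size parameter are illustrative assumptions.

```python
# Hypothetical sketch of a compact two-branch CNN for MTS classification
# in the spirit of XCM, plus Grad-CAM on the 2D branch. Layer sizes,
# layer names, and window_size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_two_branch_cnn(n_timestamps, n_variables, n_classes,
                         filters=64, window_size=16):
    inp = layers.Input(shape=(n_timestamps, n_variables))

    # 2D branch: (window_size, 1) kernels slide along time for each
    # variable separately, so the feature maps stay aligned with the
    # (timestamp, variable) grid of the input.
    x2d = layers.Reshape((n_timestamps, n_variables, 1))(inp)
    x2d = layers.Conv2D(filters, (window_size, 1), padding="same",
                        name="conv2d_vars")(x2d)
    x2d = layers.BatchNormalization()(x2d)
    x2d = layers.Activation("relu")(x2d)
    x2d = layers.Conv2D(1, (1, 1), activation="relu")(x2d)
    x2d = layers.Reshape((n_timestamps, n_variables))(x2d)

    # 1D branch: kernels span all variables at once, capturing
    # cross-variable temporal patterns aligned with timestamps.
    x1d = layers.Conv1D(filters, window_size, padding="same",
                        name="conv1d_time")(inp)
    x1d = layers.BatchNormalization()(x1d)
    x1d = layers.Activation("relu")(x1d)
    x1d = layers.Conv1D(1, 1, activation="relu")(x1d)

    # Merge both views, then classify; global average pooling keeps the
    # parameter count small (hence "compact").
    x = layers.Concatenate()([x2d, x1d])
    x = layers.Conv1D(filters, window_size, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)


def grad_cam(model, x, class_index, layer_name="conv2d_vars"):
    """Standard Grad-CAM: weight the chosen layer's feature maps by the
    spatially averaged gradient of the class score, then apply ReLU."""
    grad_model = tf.keras.Model(model.input,
                                [model.get_layer(layer_name).output,
                                 model.output])
    with tf.GradientTape() as tape:
        fmaps, preds = grad_model(x[tf.newaxis, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, fmaps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # one weight per feature map
    cam = tf.reduce_sum(fmaps * weights[:, None, None, :], axis=-1)
    return tf.nn.relu(cam)[0]  # shape (n_timestamps, n_variables)
```

Because the 2D branch's feature maps remain aligned with the input's (timestamp, variable) grid, the resulting Grad-CAM heatmap can be read directly as the importance of each observed variable at each timestamp; this direct alignment between feature maps and input cells is what the abstract refers to as faithful, model-specific explainability.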

Suggested Citation

  • Kevin Fauvel & Tao Lin & Véronique Masson & Élisa Fromont & Alexandre Termier, 2021. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification," Mathematics, MDPI, vol. 9(23), pages 1-19, December.
  • Handle: RePEc:gam:jmathe:v:9:y:2021:i:23:p:3137-:d:695645

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/9/23/3137/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/9/23/3137/
    Download Restriction: no

    References listed on IDEAS

    1. Sebastian Bach & Alexander Binder & Grégoire Montavon & Frederick Klauschen & Klaus-Robert Müller & Wojciech Samek, 2015. "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation," PLOS ONE, Public Library of Science, vol. 10(7), pages 1-46, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Damiano Brigo & Xiaoshan Huang & Andrea Pallavicini & Haitz Saez de Ocariz Borde, 2021. "Interpretability in deep learning for finance: a case study for the Heston model," Papers 2104.09476, arXiv.org.
    2. Parmar, Janak & Das, Pritikana & Dave, Sanjaykumar M., 2021. "A machine learning approach for modelling parking duration in urban land-use," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 572(C).
3. Pelin Ayranci & Phung Lai & Nhathai Phan & Han Hu & Alexander Kolinowski & David Newman & Dejing Dou, 2022. "OnML: an ontology-based approach for interpretable machine learning," Journal of Combinatorial Optimization, Springer, vol. 44(1), pages 770-793, August.
    4. Sherwan Mohammed Najm & Imre Paniti, 2023. "Investigation and machine learning-based prediction of parametric effects of single point incremental forming on pillow effect and wall profile of AlMn1Mg1 aluminum alloy sheets," Journal of Intelligent Manufacturing, Springer, vol. 34(1), pages 331-367, January.
    5. Davazdahemami, Behrooz & Kalgotra, Pankush & Zolbanin, Hamed M. & Delen, Dursun, 2023. "A developer-oriented recommender model for the app store: A predictive network analytics approach," Journal of Business Research, Elsevier, vol. 158(C).
    6. S. Van Cranenburgh & S. Wang & A. Vij & F. Pereira & J. Walker, 2021. "Choice modelling in the age of machine learning -- discussion paper," Papers 2101.11948, arXiv.org, revised Nov 2021.
    7. Kunal Pattanayak & Vikram Krishnamurthy, 2021. "Rationally Inattentive Utility Maximization for Interpretable Deep Image Classification," Papers 2102.04594, arXiv.org, revised Jul 2021.
    8. Gabriel Ferrettini & Elodie Escriva & Julien Aligon & Jean-Baptiste Excoffier & Chantal Soulé-Dupuy, 2022. "Coalitional Strategies for Efficient Individual Prediction Explanation," Information Systems Frontiers, Springer, vol. 24(1), pages 49-75, February.
    9. Minyoung Lee & Joohyoung Jeon & Hongchul Lee, 2022. "Explainable AI for domain experts: a post Hoc analysis of deep learning for defect classification of TFT–LCD panels," Journal of Intelligent Manufacturing, Springer, vol. 33(6), pages 1747-1759, August.
    10. Mark Gromowski & Michael Siebers & Ute Schmid, 2020. "A process framework for inducing and explaining Datalog theories," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 14(4), pages 821-835, December.
    11. Fallahgoul, Hasan & Franstianto, Vincentius & Lin, Xin, 2024. "Asset pricing with neural networks: Significance tests," Journal of Econometrics, Elsevier, vol. 238(1).
    12. James V. Hansen, 2021. "Coalition Feature Interpretation and Attribution in Algorithmic Trading Models," Computational Economics, Springer;Society for Computational Economics, vol. 58(3), pages 849-866, October.
    13. Lara Marie Demajo & Vince Vella & Alexiei Dingli, 2020. "Explainable AI for Interpretable Credit Scoring," Papers 2012.03749, arXiv.org.
    14. Lars Ole Hjelkrem & Petter Eilif de Lange, 2023. "Explaining Deep Learning Models for Credit Scoring with SHAP: A Case Study Using Open Banking Data," JRFM, MDPI, vol. 16(4), pages 1-19, April.
    15. Abdulrashid, Ismail & Zanjirani Farahani, Reza & Mammadov, Shamkhal & Khalafalla, Mohamed & Chiang, Wen-Chyuan, 2024. "Explainable artificial intelligence in transport Logistics: Risk analysis for road accidents," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 186(C).
    16. Yoonjae Noh & Jong-Min Kim & Soongoo Hong & Sangjin Kim, 2023. "Deep Learning Model for Multivariate High-Frequency Time-Series Data: Financial Market Index Prediction," Mathematics, MDPI, vol. 11(16), pages 1-18, August.
    17. Amini, Mostafa & Bagheri, Ali & Delen, Dursun, 2022. "Discovering injury severity risk factors in automobile crashes: A hybrid explainable AI framework for decision support," Reliability Engineering and System Safety, Elsevier, vol. 226(C).
    18. Wang, Fujin & Zhao, Zhibin & Zhai, Zhi & Shang, Zuogang & Yan, Ruqiang & Chen, Xuefeng, 2023. "Explainability-driven model improvement for SOH estimation of lithium-ion battery," Reliability Engineering and System Safety, Elsevier, vol. 232(C).
    19. André Steimers & Moritz Schneider, 2022. "Sources of Risk of AI Systems," IJERPH, MDPI, vol. 19(6), pages 1-32, March.
    20. Wei Jie Yeo & Wihan van der Heever & Rui Mao & Erik Cambria & Ranjan Satapathy & Gianmarco Mengaldo, 2023. "A Comprehensive Review on Financial Explainable AI," Papers 2309.11960, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:9:y:2021:i:23:p:3137-:d:695645. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.