Printed from https://ideas.repec.org/a/gam/jmathe/v9y2021i11p1164-d559695.html

Selecting Correct Methods to Extract Fuzzy Rules from Artificial Neural Network

Author

Listed:
  • Xiao Tan

    (Lab of Intelligent Computing and Software Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China)

  • Yuan Zhou

    (School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore)

  • Zuohua Ding

    (Lab of Intelligent Computing and Software Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China)

  • Yang Liu

    (School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore)

Abstract

An artificial neural network (ANN) cannot inherently explain, in a comprehensible form, how a given decision or output is generated, which limits its wider adoption. Fuzzy rules are an intuitive and reasonable representation for explanation, model checking, and system integration. However, different methods may extract different rules from the same ANN. Which method delivers rules of sufficient quality that the extracted fuzzy rules accurately describe the ANN? In this paper, we perform an empirical study of three rule extraction methods. The first extracts fuzzy rules from a fuzzy neural network, while the second and third were originally designed to extract crisp rules, which can be transformed directly into fuzzy rules, from a well-trained ANN. Specifically, the second method approximates the behavior of a neuron by (continuous) Boolean functions of its direct input neurons, whereas the third describes the relationship between a neuron and its direct input neurons by a decision tree. We evaluate the three methods on discrete, continuous, and hybrid data sets by comparing the extracted rules with rules generated directly from the sample data. The results show that the first method cannot generate proper fuzzy rules on any of the three kinds of data sets, the second can generate accurate rules on discrete data, and the third can generate fuzzy rules for all data sets but cannot always guarantee accuracy, especially on data sets with poor separability. Hence, our work illustrates that, given an ANN, one should select an extraction method carefully, and sometimes even needs to design new methods to obtain explanations.
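The decision-tree approach described above (the third method) can be illustrated with a minimal sketch: sample a neuron's input space, fit a shallow tree to mimic the neuron, and read each root-to-leaf path off as a crisp rule. This is not the paper's implementation; the toy neuron, weights, and thresholds below are illustrative assumptions.

```python
# Sketch of decision-tree rule extraction for a single neuron,
# assuming a toy thresholded unit standing in for one unit of a trained ANN.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Toy "neuron": weighted sum followed by a hard threshold (illustrative).
w = np.array([0.8, -0.5])

def neuron(x):
    return (x @ w > 0.1).astype(int)

# Sample the neuron's input space and record its responses.
X = rng.uniform(-1, 1, size=(500, 2))
y = neuron(X)

# Fit a shallow tree to mimic the neuron; each root-to-leaf path is a
# crisp rule that could then be fuzzified.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["x1", "x2"]))

# Fidelity: how often the extracted rules agree with the neuron itself
# on fresh samples.
X_test = rng.uniform(-1, 1, size=(200, 2))
fidelity = (tree.predict(X_test) == neuron(X_test)).mean()
print(f"rule fidelity: {fidelity:.2f}")
```

The fidelity score makes the abstract's caveat concrete: an axis-aligned tree can only approximate the neuron's oblique decision boundary, so accuracy is not guaranteed, and it degrades further on data with poor separability.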

Suggested Citation

  • Xiao Tan & Yuan Zhou & Zuohua Ding & Yang Liu, 2021. "Selecting Correct Methods to Extract Fuzzy Rules from Artificial Neural Network," Mathematics, MDPI, vol. 9(11), pages 1-22, May.
  • Handle: RePEc:gam:jmathe:v:9:y:2021:i:11:p:1164-:d:559695

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/9/11/1164/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/9/11/1164/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Nikita Moshkov & Tim Becker & Kevin Yang & Peter Horvath & Vlado Dancik & Bridget K. Wagner & Paul A. Clemons & Shantanu Singh & Anne E. Carpenter & Juan C. Caicedo, 2023. "Predicting compound activity from phenotypic profiles and chemical structures," Nature Communications, Nature, vol. 14(1), pages 1-11, December.
    2. J. Thorben Frank & Oliver T. Unke & Klaus-Robert Müller & Stefan Chmiela, 2024. "A Euclidean transformer for fast and stable machine learned force fields," Nature Communications, Nature, vol. 15(1), pages 1-16, December.
    3. Xing Chen & Flavio Abreu Araujo & Mathieu Riou & Jacob Torrejon & Dafiné Ravelosona & Wang Kang & Weisheng Zhao & Julie Grollier & Damien Querlioz, 2022. "Forecasting the outcome of spintronic experiments with Neural Ordinary Differential Equations," Nature Communications, Nature, vol. 13(1), pages 1-12, December.
    4. Yuanming Bai & Leslie Vogt-Maranto & Mark E. Tuckerman & William J. Glover, 2022. "Machine learning the Hohenberg-Kohn map for molecular excited states," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
    5. Alessio Fallani & Leonardo Medrano Sandonas & Alexandre Tkatchenko, 2024. "Inverse mapping of quantum properties to structures for chemical space of small organic molecules," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    6. Stephan Thaler & Julija Zavadlav, 2021. "Learning neural network potentials from experimental data via Differentiable Trajectory Reweighting," Nature Communications, Nature, vol. 12(1), pages 1-10, December.
    7. Yusong Wang & Tong Wang & Shaoning Li & Xinheng He & Mingyu Li & Zun Wang & Nanning Zheng & Bin Shao & Tie-Yan Liu, 2024. "Enhancing geometric representations for molecules with equivariant vector-scalar interactive message passing," Nature Communications, Nature, vol. 15(1), pages 1-13, December.
    8. Simon Batzner & Albert Musaelian & Lixin Sun & Mario Geiger & Jonathan P. Mailoa & Mordechai Kornbluth & Nicola Molinari & Tess E. Smidt & Boris Kozinsky, 2022. "E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials," Nature Communications, Nature, vol. 13(1), pages 1-11, December.
    9. Albert Musaelian & Simon Batzner & Anders Johansson & Lixin Sun & Cameron J. Owen & Mordechai Kornbluth & Boris Kozinsky, 2023. "Learning local equivariant representations for large-scale atomistic dynamics," Nature Communications, Nature, vol. 14(1), pages 1-15, December.
    10. Oliver T. Unke & Stefan Chmiela & Michael Gastegger & Kristof T. Schütt & Huziel E. Sauceda & Klaus-Robert Müller, 2021. "SpookyNet: Learning force fields with electronic degrees of freedom and nonlocal effects," Nature Communications, Nature, vol. 12(1), pages 1-14, December.
    11. Charlotte Loh & Thomas Christensen & Rumen Dangovski & Samuel Kim & Marin Soljačić, 2022. "Surrogate- and invariance-boosted contrastive learning for data-scarce applications in science," Nature Communications, Nature, vol. 13(1), pages 1-12, December.
    12. Sunghwan Choi, 2023. "Prediction of transition state structures of gas-phase chemical reactions via machine learning," Nature Communications, Nature, vol. 14(1), pages 1-11, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:9:y:2021:i:11:p:1164-:d:559695. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.