Printed from https://ideas.repec.org/a/gam/jmathe/v12y2024i8p1198-d1377163.html

DAGOR: Learning DAGs via Topological Sorts and QR Factorization

Author

Listed:
  • Hao Zuo

    (National Key Laboratory of Information Systems Engineering, National University of Defense Technology, Changsha 410073, China
    These authors contributed equally to this work.)

  • Jinshen Jiang

    (National Key Laboratory of Information Systems Engineering, National University of Defense Technology, Changsha 410073, China
    These authors contributed equally to this work.)

  • Yun Zhou

    (National Key Laboratory of Information Systems Engineering, National University of Defense Technology, Changsha 410073, China)

Abstract

Recently, the task of learning causal directed acyclic graphs (DAGs) from empirical data has been formulated as an iterative process within a continuous optimization framework with a differentiable acyclicity characterization. However, learning DAGs from data is NP-hard, since the DAG space grows super-exponentially with the number of variables. In this work, we introduce graph topological sorts into the continuous optimization problem; the space of topological sorts is substantially smaller than the DAG space and helps avoid local optima. Moreover, searching over topological sorts requires no explicit acyclicity constraint, which significantly reduces the computational cost. To further handle the inherent asymmetry of DAGs, we investigate the acyclicity characterization and propose a new DAG-learning optimization strategy based on QR factorization, named DAGOR. First, using a matrix congruent transformation, the adjacency matrix of the DAG is transformed into an upper triangular matrix according to a topological sort. Next, building on the QR factorization, we construct a least-squares penalty function that serves as a constraint for optimization within the graph autoencoder framework. Numerical experiments validate our theoretical results and demonstrate the competitive performance of our method.
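
As a minimal, non-authoritative sketch of the permutation idea described in the abstract: reordering the rows and columns of a DAG's adjacency matrix by one of its topological sorts yields a strictly upper triangular matrix, and the entries below the diagonal give a simple least-squares acyclicity surrogate. The function names, the toy graph, and this surrogate penalty are illustrative assumptions, not the published DAGOR penalty (Python with NumPy):

import numpy as np

def permute_by_topological_sort(W, order):
    # Reorder rows and columns of W according to the node order `order`,
    # i.e. compute P W P^T for the corresponding permutation matrix P.
    idx = np.asarray(order)
    return W[np.ix_(idx, idx)]

def lower_triangular_penalty(W_perm):
    # Sum of squared entries strictly below the diagonal. For a correct
    # topological sort of a DAG this is exactly zero; here it stands in,
    # illustratively, for a least-squares acyclicity penalty.
    return float(np.sum(np.tril(W_perm, k=-1) ** 2))

# Toy chain graph 0 -> 1 -> 2 with topological order [0, 1, 2].
W = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])
W_perm = permute_by_topological_sort(W, [0, 1, 2])
print(lower_triangular_penalty(W_perm))  # 0.0: already upper triangular

# numpy.linalg.qr provides the QR factorization on which the paper builds its
# penalty; the exact DAGOR construction is not reproduced here.
Q, R = np.linalg.qr(W_perm)

Searching over node orderings in this way is what allows the method to sidestep an explicit acyclicity constraint, which is the computational saving the abstract points to.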

Suggested Citation

  • Hao Zuo & Jinshen Jiang & Yun Zhou, 2024. "DAGOR: Learning DAGs via Topological Sorts and QR Factorization," Mathematics, MDPI, vol. 12(8), pages 1-16, April.
  • Handle: RePEc:gam:jmathe:v:12:y:2024:i:8:p:1198-:d:1377163

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/12/8/1198/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/12/8/1198/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shen Liu & Hongyan Liu, 2021. "Tagging Items Automatically Based on Both Content Information and Browsing Behaviors," INFORMS Journal on Computing, INFORMS, vol. 33(3), pages 882-897, July.
    2. Luo, Nanyu & Ji, Feng & Han, Yuting & He, Jinbo & Zhang, Xiaoya, 2024. "Fitting item response theory models using deep learning computational frameworks," OSF Preprints tjxab, Center for Open Science.
    3. Liu, Jie & Ye, Zifeng & Chen, Kun & Zhang, Panpan, 2024. "Variational Bayesian inference for bipartite mixed-membership stochastic block model with applications to collaborative filtering," Computational Statistics & Data Analysis, Elsevier, vol. 189(C).
    4. Djohan Bonnet & Tifenn Hirtzlin & Atreya Majumdar & Thomas Dalgaty & Eduardo Esmanhotto & Valentina Meli & Niccolo Castellani & Simon Martin & Jean-François Nodin & Guillaume Bourgeois & Jean-Michel P, 2023. "Bringing uncertainty quantification to the extreme-edge with memristor-based Bayesian neural networks," Nature Communications, Nature, vol. 14(1), pages 1-13, December.
    5. Seokhyun Chung & Raed Al Kontar & Zhenke Wu, 2022. "Weakly Supervised Multi-output Regression via Correlated Gaussian Processes," INFORMS Journal on Data Science, INFORMS, vol. 1(2), pages 115-137, October.
    6. Gary Koop & Dimitris Korobilis, 2023. "Bayesian Dynamic Variable Selection In High Dimensions," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 64(3), pages 1047-1074, August.
    7. Ziqi Zhang & Xinye Zhao & Mehak Bindra & Peng Qiu & Xiuwei Zhang, 2024. "scDisInFact: disentangled learning for integration and prediction of multi-batch multi-condition single-cell RNA-sequencing data," Nature Communications, Nature, vol. 15(1), pages 1-16, December.
    8. Dimitris Korobilis & Davide Pettenuzzo, 2020. "Machine Learning Econometrics: Bayesian algorithms and methods," Working Papers 2020_09, Business School - Economics, University of Glasgow.
    9. Jan Prüser & Florian Huber, 2024. "Nonlinearities in macroeconomic tail risk through the lens of big data quantile regressions," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 39(2), pages 269-291, March.
    10. Bansal, Prateek & Krueger, Rico & Graham, Daniel J., 2021. "Fast Bayesian estimation of spatial count data models," Computational Statistics & Data Analysis, Elsevier, vol. 157(C).
    11. Korobilis, Dimitris & Koop, Gary, 2018. "Variational Bayes inference in high-dimensional time-varying parameter models," Essex Finance Centre Working Papers 22665, University of Essex, Essex Business School.
    12. Etienne Côme & Nicolas Jouvin & Pierre Latouche & Charles Bouveyron, 2021. "Hierarchical clustering with discrete latent variable models and the integrated classification likelihood," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 15(4), pages 957-986, December.
    13. Alex Burnap & John R. Hauser & Artem Timoshenko, 2023. "Product Aesthetic Design: A Machine Learning Augmentation," Marketing Science, INFORMS, vol. 42(6), pages 1029-1056, November.
    14. Yuan Fang & Dimitris Karlis & Sanjeena Subedi, 2022. "Infinite Mixtures of Multivariate Normal-Inverse Gaussian Distributions for Clustering of Skewed Data," Journal of Classification, Springer;The Classification Society, vol. 39(3), pages 510-552, November.
    15. Stéphane Bonhomme, 2021. "Selection on Welfare Gains: Experimental Evidence from Electricity Plan Choice," Working Papers 2021-15, Becker Friedman Institute for Research In Economics.
    16. Junming Yin & Jerry Luo & Susan A. Brown, 2021. "Learning from Crowdsourced Multi-labeling: A Variational Bayesian Approach," Information Systems Research, INFORMS, vol. 32(3), pages 752-773, September.
    17. Jeong, Kuhwan & Chae, Minwoo & Kim, Yongdai, 2023. "Online learning for the Dirichlet process mixture model via weakly conjugate approximation," Computational Statistics & Data Analysis, Elsevier, vol. 179(C).
    18. Daziano, Ricardo A., 2022. "Willingness to delay charging of electric vehicles," Research in Transportation Economics, Elsevier, vol. 94(C).
    19. Andreas Rehs, 2020. "A structural topic model approach to scientific reorientation of economics and chemistry after German reunification," Scientometrics, Springer;Akadémiai Kiadó, vol. 125(2), pages 1229-1251, November.
    20. Kohn, Robert & Nguyen, Nghia & Nott, David & Tran, Minh-Ngoc, 2017. "Random Effects Models with Deep Neural Network Basis Functions: Methodology and Computation," Working Papers 2123/17877, University of Sydney Business School, Discipline of Business Analytics.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:12:y:2024:i:8:p:1198-:d:1377163. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.