
Measuring the Completeness of Theories

Author

Listed:
  • Drew Fudenberg
  • Jon Kleinberg
  • Annie Liang
  • Sendhil Mullainathan

Abstract

We use machine learning to provide a tractable measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We apply this measure to three problems: assigning certain equivalents to lotteries, initial play in games, and human generation of random sequences. We discover considerable variation in the completeness of existing models, which sheds light on whether to focus on developing better models with the same features or instead to look for new features that will improve predictions. We also illustrate how and why completeness varies with the experiments considered, which highlights the role played by the choice of which experiments to run.
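
The measure described in the abstract can be illustrated with a short sketch. The Python below is only an illustration under stated assumptions: it treats completeness as the share of the achievable out-of-sample error reduction over a naive baseline that a theory attains, with the achievable reduction estimated by a flexible machine-learning benchmark. The random-forest benchmark, the squared-error criterion, and the scikit-learn-style theory_model wrapper (any estimator encapsulating the theory's predictions) are assumptions of this sketch, not the paper's actual implementation.

    from sklearn.dummy import DummyRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    def completeness(theory_model, X, y, cv=10):
        """Illustrative completeness measure: share of the predictable variation
        in y (relative to a flexible ML benchmark) that a theory captures."""
        def cv_error(model):
            # Out-of-sample mean squared error, estimated by cross-validation.
            scores = cross_val_score(model, X, y, cv=cv,
                                     scoring="neg_mean_squared_error")
            return -scores.mean()

        naive_error = cv_error(DummyRegressor(strategy="mean"))         # ignores all features
        best_error = cv_error(RandomForestRegressor(n_estimators=500))  # flexible ML benchmark
        theory_error = cv_error(theory_model)                           # the theory's predictions

        # Fraction of the naive-to-benchmark error reduction achieved by the theory.
        return (naive_error - theory_error) / (naive_error - best_error)

Under this illustrative definition, a value near 1 would mean the theory captures nearly all of the variation the benchmark can predict, while a value near 0 would mean it does little better than the naive baseline.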

Suggested Citation

  • Drew Fudenberg & Jon Kleinberg & Annie Liang & Sendhil Mullainathan, 2019. "Measuring the Completeness of Theories," Papers 1910.07022, arXiv.org.
  • Handle: RePEc:arx:papers:1910.07022

    Download full text from publisher

    File URL: http://arxiv.org/pdf/1910.07022
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Dale Stahl II & Paul W. Wilson, 1994. "Experimental evidence on players' models of other players," Journal of Economic Behavior & Organization, Elsevier, vol. 25(3), pages 309-327, December.
    2. Ido Erev & Alvin Roth & Robert Slonim & Greg Barron, 2007. "Learning and equilibrium as useful approximations: Accuracy of prediction on randomly selected constant sum games," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 33(1), pages 29-51, October.
    3. Nicholas Barberis & Andrei Shleifer & Robert Vishny, 1998. "A model of investor sentiment," Journal of Financial Economics, Elsevier, vol. 49(3), pages 307-343, September.
    4. Adrian Bruhin & Helga Fehr-Duda & Thomas Epper, 2010. "Risk and Rationality: Uncovering Heterogeneity in Probability Distortion," Econometrica, Econometric Society, vol. 78(4), pages 1375-1412, July.
    5. Matthew Rabin, 2000. "Risk Aversion and Expected-Utility Theory: A Calibration Theorem," Working Paper Series qt731230f8, Department of Economics, Institute for Business and Economic Research, UC Berkeley.
    6. Drew Fudenberg & Annie Liang, 2019. "Predicting and Understanding Initial Play," American Economic Review, American Economic Association, vol. 109(12), pages 4112-4141, December.
    7. Tilmann Gneiting & Adrian E. Raftery, 2007. "Strictly Proper Scoring Rules, Prediction, and Estimation," Journal of the American Statistical Association, American Statistical Association, vol. 102, pages 359-378, March.
    8. Alexander Peysakhovich & Jeffrey Naecker, 2017. "Using methods from machine learning to evaluate behavioral models of choice under risk and ambiguity," Journal of Economic Behavior & Organization, Elsevier, vol. 133(C), pages 373-384.
    9. Daniel Chen & Tobias J. Moskowitz & Kelly Shue, 2016. "Decision-Making under the Gambler's Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires," NBER Working Papers 22026, National Bureau of Economic Research, Inc.
    10. Jon Kleinberg & Himabindu Lakkaraju & Jure Leskovec & Jens Ludwig & Sendhil Mullainathan, 2018. "Human Decisions and Machine Predictions," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 133(1), pages 237-293.
    11. Matthew Rabin, 2000. "Risk Aversion and Expected-Utility Theory: A Calibration Theorem," Econometrica, Econometric Society, vol. 68(5), pages 1281-1292, September.
    12. Colin F. Camerer & Teck-Hua Ho & Juin-Kuan Chong, 2004. "A Cognitive Hierarchy Model of Games," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 119(3), pages 861-898.
    13. Daniel L. Chen & Tobias J. Moskowitz & Kelly Shue, 2016. "Decision Making Under the Gambler’s Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 131(3), pages 1181-1242.
    14. Rosemarie Nagel, 1995. "Unraveling in Guessing Games: An Experimental Study," American Economic Review, American Economic Association, vol. 85(5), pages 1313-1326, December.
    15. Vincent P. Crawford & Miguel A. Costa-Gomes & Nagore Iriberri, 2013. "Structural Models of Nonequilibrium Strategic Thinking: Theory, Evidence, and Applications," Journal of Economic Literature, American Economic Association, vol. 51(1), pages 5-62, March.
    16. Dimitris Batzilis & Sonia Jaffe & Steven Levitt & John A. List & Jeffrey Picel, 2019. "Behavior in Strategic Settings: Evidence from a Million Rock-Paper-Scissors Games," Games, MDPI, vol. 10(2), pages 1-34, April.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Nicholas C. Barberis & Lawrence J. Jin & Baolian Wang, 2020. "Prospect Theory and Stock Market Anomalies," NBER Working Papers 27155, National Bureau of Economic Research, Inc.
    2. Hoang, Daniel & Wiegratz, Kevin, 2022. "Machine learning methods in finance: Recent applications and prospects," Working Paper Series in Economics 158, Karlsruhe Institute of Technology (KIT), Department of Economics and Management.
    3. Drew Fudenberg & Wayne Gao & Annie Liang, 2020. "How Flexible is that Functional Form? Quantifying the Restrictiveness of Theories," Papers 2007.09213, arXiv.org, revised Aug 2023.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Itzhak Rasooly, 2022. "Going...going...wrong: a test of the level-k (and cognitive hierarchy) models of bidding behaviour," Economics Series Working Papers 959, University of Oxford, Department of Economics.
    2. Itzhak Rasooly, 2021. "Going... going... wrong: a test of the level-k (and cognitive hierarchy) models of bidding behaviour," Papers 2111.05686, arXiv.org.
    3. Drew Fudenberg & Wayne Gao & Annie Liang, 2020. "How Flexible is that Functional Form? Quantifying the Restrictiveness of Theories," Papers 2007.09213, arXiv.org, revised Aug 2023.
    4. Jon Kleinberg & Annie Liang & Sendhil Mullainathan, 2017. "The Theory is Predictive, but is it Complete? An Application to Human Perception of Randomness," PIER Working Paper Archive 18-010, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, revised 09 Aug 2017.
    5. Daniel J. Benjamin, 2018. "Errors in Probabilistic Reasoning and Judgment Biases," NBER Working Papers 25200, National Bureau of Economic Research, Inc.
    6. Jian-Qiao Zhu & Joshua C. Peterson & Benjamin Enke & Thomas L. Griffiths, 2024. "Capturing the Complexity of Human Strategic Decision-Making with Machine Learning," Papers 2408.07865, arXiv.org.
    7. Jian-Qiao Zhu & Joshua C. Peterson & Benjamin Enke & Thomas L. Griffiths, 2024. "Capturing the Complexity of Human Strategic Decision-Making with Machine Learning," CESifo Working Paper Series 11296, CESifo.
    8. P. Jean-Jacques Herings & Ana Mauleon & Vincent Vannetelbosch, 2014. "Stability of networks under level-K farsightedness," LIDAM Discussion Papers CORE 2014032, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
    9. Choo, Lawrence C.Y & Kaplan, Todd R., 2014. "Explaining Behavior in the "11-20" Game," MPRA Paper 52808, University Library of Munich, Germany.
    10. Kyle Hyndman & Antoine Terracol & Jonathan Vaksmann, 2022. "Beliefs and (in)stability in normal-form games," Experimental Economics, Springer;Economic Science Association, vol. 25(4), pages 1146-1172, September.
    11. Nagel, Rosemarie & Bühren, Christoph & Frank, Björn, 2017. "Inspired and inspiring: Hervé Moulin and the discovery of the beauty contest game," Mathematical Social Sciences, Elsevier, vol. 90(C), pages 191-207.
    12. Stefano Galavotti & Luigi Moretti & Paola Valbonesi, 2018. "Sophisticated Bidders in Beauty-Contest Auctions," American Economic Journal: Microeconomics, American Economic Association, vol. 10(4), pages 1-26, November.
    13. Polonio, Luca & Coricelli, Giorgio, 2019. "Testing the level of consistency between choices and beliefs in games using eye-tracking," Games and Economic Behavior, Elsevier, vol. 113(C), pages 566-586.
    14. Lensberg, Terje & Schenk-Hoppé, Klaus Reiner, 2021. "Cold play: Learning across bimatrix games," Journal of Economic Behavior & Organization, Elsevier, vol. 185(C), pages 419-441.
    15. Ernesto Dal Bó & Pedro Dal Bó & Erik Eyster, 2018. "The Demand for Bad Policy when Voters Underappreciate Equilibrium Effects," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 85(2), pages 964-998.
    16. Kota Murayama, 2020. "Robust predictions under finite depth of reasoning," The Japanese Economic Review, Springer, vol. 71(1), pages 59-84, January.
    17. Burnham, Terence C. & Cesarini, David & Wallace, Björn & Johannesson, Magnus & Lichtenstein, Paul, 2007. "Billiards and Brains: Cognitive Ability and Behavior in a p-Beauty Contest," SSE/EFI Working Paper Series in Economics and Finance 684, Stockholm School of Economics.
    18. Berger, Ulrich & De Silva, Hannelore & Fellner-Röhling, Gerlinde, 2016. "Cognitive hierarchies in the minimizer game," Journal of Economic Behavior & Organization, Elsevier, vol. 130(C), pages 337-348.
    19. Jacob K Goeree & Bernardo Garcia-Pola, 2023. "S Equilibrium: A Synthesis of (Behavioral) Game Theory," Papers 2307.06309, arXiv.org.
    20. Marco Faillo & Alessandra Smerilli & Robert Sugden, 2016. "Can a single theory explain coordination? An experiment on alternative modes of reasoning and the conditions under which they are used," Working Paper Series 16-01, University of East Anglia, Centre for Behavioural and Experimental Social Science (CBESS), School of Economics, University of East Anglia, Norwich, UK.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:1910.07022. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.