
The Value of Context: Human versus Black Box Evaluators

Authors

  • Andrei Iakovlev
  • Annie Liang

Abstract

Machine learning algorithms are now capable of performing evaluations previously conducted by human experts (e.g., medical diagnoses). How should we conceptualize the difference between evaluation by humans and by algorithms, and when should an individual prefer one over the other? We propose a framework to examine one key distinction between the two forms of evaluation: Machine learning algorithms are standardized, fixing a common set of covariates by which to assess all individuals, while human evaluators customize which covariates they acquire for each individual. Our framework defines and analyzes the advantage of this customization -- the value of context -- in environments with high-dimensional data. We show that unless the agent has precise knowledge about the joint distribution of covariates, the benefit of additional covariates generally outweighs the value of context.
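The trade-off the abstract describes can be made concrete with a toy simulation (this is an illustrative sketch, not the paper's model: the block structure, the 40% context-accuracy rate, and all numbers are assumptions chosen for illustration). Each individual's outcome is driven by one of three blocks of ten covariates; which block matters is that individual's "context". A standardized evaluator fixes its covariates in advance, while a contextual evaluator tailors its ten covariates to each individual but identifies the relevant block imperfectly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, block, n_blocks = 20000, 10, 3
d = block * n_blocks  # 30 covariates total

# Each individual's outcome is the sum of one block of 10 covariates;
# which block is relevant (the "context") varies across individuals.
types = rng.integers(0, n_blocks, size=n)
X = rng.normal(size=(n, d))
block_sums = X.reshape(n, n_blocks, block).sum(axis=2)  # shape (n, 3)
y = block_sums[np.arange(n), types]

def mse(pred):
    return float(np.mean((y - pred) ** 2))

# 1) Standardized evaluator with a budget of 10 covariates, fixed for
#    everyone (always the first block); since only 1/3 of individuals
#    have that block as relevant, the best prediction is a third of its sum.
mse_fixed10 = mse(block_sums[:, 0] / n_blocks)

# 2) Standardized evaluator with ALL 30 covariates but no context:
#    the best prediction averages the three block sums.
mse_fixed30 = mse(block_sums.mean(axis=1))

# 3) Contextual evaluator acquiring 10 covariates customized to each
#    individual, but identifying the relevant block only 40% of the
#    time (otherwise guessing uniformly).
knows = rng.random(n) < 0.4
guess = np.where(knows, types, rng.integers(0, n_blocks, size=n))
mse_custom10 = mse(block_sums[np.arange(n), guess])

print(f"standardized, 10 covariates: {mse_fixed10:.2f}")
print(f"standardized, 30 covariates: {mse_fixed30:.2f}")
print(f"customized,   10 covariates: {mse_custom10:.2f}")
```

Holding the budget at ten covariates, customization helps (the value of context is positive: case 3 beats case 1 whenever context knowledge is good enough). But when that knowledge is imprecise, as here, the standardized evaluator with more covariates (case 2) achieves lower error than the imperfectly contextual one, echoing the abstract's conclusion that additional covariates can outweigh the value of context.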

Suggested Citation

  • Andrei Iakovlev & Annie Liang, 2024. "The Value of Context: Human versus Black Box Evaluators," Papers 2402.11157, arXiv.org, revised Jun 2024.
  • Handle: RePEc:arx:papers:2402.11157

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2402.11157
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gregorio Curello & Ludvig Sinander, 2020. "Screening for breakthroughs," Papers 2011.10090, arXiv.org, revised Feb 2024.
    2. Ginger Zhe Jin & Andrew Kato & John A. List, 2010. "That's News to Me! Information Revelation in Professional Certification Markets," Economic Inquiry, Western Economic Association International, vol. 48(1), pages 104-122, January.
    3. Persson, Petra, 2018. "Attention manipulation and information overload," Behavioural Public Policy, Cambridge University Press, vol. 2(1), pages 78-106, May.
    4. Burkhard Schipper & Hee Yeul Woo, 2012. "Political Awareness and Microtargeting of Voters in Electoral Competition," Working Papers 124, University of California, Davis, Department of Economics.
    5. Dessein, Wouter & Frankel, Alexander & Kartik, Navin, 2023. "Test-Optional Admissions," CEPR Discussion Papers 18090, C.E.P.R. Discussion Papers.
    6. Joshua Schwartzstein & Adi Sunderam, 2021. "Using Models to Persuade," American Economic Review, American Economic Association, vol. 111(1), pages 276-323, January.
    7. Chulyoung Kim, 2017. "An economic rationale for dismissing low-quality experts in trial," Scottish Journal of Political Economy, Scottish Economic Society, vol. 64(5), pages 445-466, November.
    8. Simon P. Anderson & John McLaren, 2012. "Media Mergers And Media Bias With Rational Consumers," Journal of the European Economic Association, European Economic Association, vol. 10(4), pages 831-859, August.
    9. Simeon Schudy & Verena Utikal, 2015. "Does imperfect data privacy stop people from collecting personal health data?," TWI Research Paper Series 98, Thurgauer Wirtschaftsinstitut, Universität Konstanz.
    10. Moraga-González, José L. & Sándor, Zsolt & Wildenbeest, Matthijs R., 2014. "Prices, Product Differentiation, And Heterogeneous Search Costs," IESE Research Papers D/1097, IESE Business School.
    11. Klumpp, Tilman & Su, Xuejuan, 2013. "Second-order statistical discrimination," Journal of Public Economics, Elsevier, vol. 97(C), pages 108-116.
    12. Pierre Fleckinger & Matthieu Glachant & Gabrielle Moineville, 2017. "Incentives for Quality in Friendly and Hostile Informational Environments," American Economic Journal: Microeconomics, American Economic Association, vol. 9(1), pages 242-274, February.
    13. Roger Bate & Ginger Zhe Jin & Aparna Mathur, 2012. "In Whom We Trust: The Role of Certification Agencies in Online Drug Markets," NBER Working Papers 17955, National Bureau of Economic Research, Inc.
    14. Eduardo Perez & Delphine Prady, 2012. "Complicating to Persuade?," Working Papers hal-03583827, HAL.
    15. Haisken-DeNew, John & Hasan, Syed & Jha, Nikhil & Sinning, Mathias, 2018. "Unawareness and selective disclosure: The effect of school quality information on property prices," Journal of Economic Behavior & Organization, Elsevier, vol. 145(C), pages 449-464.
    16. Eduardo Perez-Richet, 2014. "Interim Bayesian Persuasion: First Steps," American Economic Review, American Economic Association, vol. 104(5), pages 469-474, May.
    17. V. Bhaskar & Caroline Thomas, 2019. "The Culture of Overconfidence," American Economic Review: Insights, American Economic Association, vol. 1(1), pages 95-110, June.
    18. Vaccari, Federico, 2023. "Competition in costly talk," Journal of Economic Theory, Elsevier, vol. 213(C).
    19. Matthias Dahm & Nicolás Porteiro, 2008. "Informational lobbying under the shadow of political pressure," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 30(4), pages 531-559, May.
    20. Heyes, Anthony & Lyon, Thomas P. & Martin, Steve, 2018. "Salience games: Private politics when public attention is limited," Journal of Environmental Economics and Management, Elsevier, vol. 88(C), pages 396-410.

