
How accurate are rebuttable presumptions of pretrial dangerousness?: A natural experiment from New Mexico

Authors

  • Cristopher Moore
  • Elise Ferguson
  • Paul Guerin

Abstract

In New Mexico and many other jurisdictions, judges may detain defendants pretrial if the prosecutor proves, by clear and convincing evidence, that releasing them would pose a danger to the public. However, some policymakers argue that certain classes of defendants should carry a “rebuttable presumption” of dangerousness, shifting the burden of proof to the defense. Using data on over 15,000 felony defendants who were released pretrial over a 4‐year period in New Mexico, we measure how many of them would have been detained by various presumptions, and what fraction of these defendants in fact posed a danger in the sense that they were charged with a new crime during pretrial supervision. We consider presumptions based on the current charge, past convictions, past failures to appear, past violations of conditions of release, and combinations of these drawn from recent legislative proposals. We find that for all these criteria, at most 8% of the defendants they identify are charged pretrial with a new violent crime (felony or misdemeanor), and at most 5% are charged with a new violent felony. The false‐positive rate, that is, the fraction of defendants these policies would detain who are not charged with any new crime pretrial, ranges from 71% to 90%. The broadest legislative proposals, such as detaining all defendants charged with a violent felony, are little more accurate than detaining a random sample of defendants released under the current system, and would jail 20 or more people to prevent a single violent felony. We also consider detention recommendations based on risk scores from the Arnold Public Safety Assessment (PSA). Among released defendants with the highest risk score and the “violence flag,” 7% are charged with a new violent felony and 71% are false positives.
We conclude that these criteria for rebuttable presumptions do not accurately target dangerous defendants: they cast wide nets and recommend detention for many pretrial defendants who do not pose a danger to the public.
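The abstract's accuracy measures are simple ratios over the set of defendants a presumption would detain. A minimal sketch of how they relate, using hypothetical counts rather than the paper's data (the function name and example numbers are illustrative, not the authors' code):

```python
# Illustrative only: the three metrics the abstract reports for a
# detention rule, computed from hypothetical counts.
def detention_metrics(n_detained, n_new_violent_felony, n_any_new_crime):
    """Accuracy metrics for a presumption that detains n_detained defendants."""
    # Fraction of flagged defendants later charged with a new violent felony.
    hit_rate = n_new_violent_felony / n_detained
    # False-positive rate: flagged defendants with no new pretrial charge.
    false_positive_rate = (n_detained - n_any_new_crime) / n_detained
    # Defendants jailed per violent felony averted, assuming (optimistically)
    # that detention prevents every such charge.
    number_needed_to_detain = n_detained / n_new_violent_felony
    return hit_rate, false_positive_rate, number_needed_to_detain

# Hypothetical rule flagging 1000 released defendants, of whom 50 were
# charged with a new violent felony and 250 with any new crime:
hit, fpr, ntd = detention_metrics(1000, 50, 250)
# hit == 0.05 (5%), fpr == 0.75 (75%), ntd == 20.0
```

A 5% violent-felony hit rate implies, at best, 20 people detained per violent felony prevented, which is the arithmetic behind the abstract's "20 or more" figure.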

Suggested Citation

  • Cristopher Moore & Elise Ferguson & Paul Guerin, 2023. "How accurate are rebuttable presumptions of pretrial dangerousness?: A natural experiment from New Mexico," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 20(2), pages 377-408, June.
  • Handle: RePEc:wly:empleg:v:20:y:2023:i:2:p:377-408
    DOI: 10.1111/jels.12351

    Download full text from publisher

    File URL: https://doi.org/10.1111/jels.12351
    Download Restriction: no



    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. David Arnold & Will Dobbie & Peter Hull, 2022. "Measuring Racial Discrimination in Bail Decisions," American Economic Review, American Economic Association, vol. 112(9), pages 2992-3038, September.
    2. Stevenson, Megan T. & Doleac, Jennifer, 2019. "Algorithmic Risk Assessment in the Hands of Humans," IZA Discussion Papers 12853, Institute of Labor Economics (IZA).
    3. LaVoice, Jessica & Vamossy, Domonkos F., 2024. "Racial disparities in debt collection," Journal of Banking & Finance, Elsevier, vol. 164(C).
    4. Jens Ludwig & Sendhil Mullainathan, 2021. "Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System," Journal of Economic Perspectives, American Economic Association, vol. 35(4), pages 71-96, Fall.
    5. Araujo, Aloisio & Ferreira, Rafael & Lagaras, Spyridon & Moraes, Flavio & Ponticelli, Jacopo & Tsoutsoura, Margarita, 2023. "The labor effects of judicial bias in bankruptcy," Journal of Financial Economics, Elsevier, vol. 150(2).
    6. Daniel Martin & Philip Marx, 2022. "A Robust Test of Prejudice for Discrimination Experiments," Management Science, INFORMS, vol. 68(6), pages 4527-4536, June.
    7. Ivan A Canay & Magne Mogstad & Jack Mount, 2024. "On the Use of Outcome Tests for Detecting Bias in Decision Making," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 91(4), pages 2135-2167.
    8. Jeffrey Grogger & Sean Gupta & Ria Ivandic & Tom Kirchmaier, 2021. "Comparing Conventional and Machine‐Learning Approaches to Risk Assessment in Domestic Abuse Cases," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 18(1), pages 90-130, March.
    9. Arcidiacono, Peter & Kinsler, Josh & Ransom, Tyler, 2022. "Asian American Discrimination in Harvard Admissions," European Economic Review, Elsevier, vol. 144(C).
    10. Luis Sarmiento & Adam Nowakowski, 2023. "Court Decisions and Air Pollution: Evidence from Ten Million Penal Cases in India," Environmental & Resource Economics, Springer;European Association of Environmental and Resource Economists, vol. 86(3), pages 605-644, November.
    11. Aguiar, Luis & Waldfogel, Joel & Waldfogel, Sarah, 2021. "Playlisting favorites: Measuring platform bias in the music industry," International Journal of Industrial Organization, Elsevier, vol. 78(C).
    12. Makofske, Matthew, 2020. "Pretextual Traffic Stops and Racial Disparities in their Use," MPRA Paper 100792, University Library of Munich, Germany.
13. Ramos Maqueda, Manuel & Chen, Daniel Li, 2021. "The Role of Justice in Development: The Data Revolution," Policy Research Working Paper Series 9720, The World Bank.
    14. J. Aislinn Bohren & Alex Imas & Michael Rosenberg, 2019. "The Dynamics of Discrimination: Theory and Evidence," American Economic Review, American Economic Association, vol. 109(10), pages 3395-3436, October.
    15. Nikoloz Kudashvili & Philipp Lergetporer, 2019. "Do Minorities Misrepresent Their Ethnicity to Avoid Discrimination?," CESifo Working Paper Series 7861, CESifo.
    16. Nicolás Grau & Damián Vergara, "undated". "A Simple Test for Prejudice in Decision Processes: The Prediction-Based Outcome Test," Working Papers wp493, University of Chile, Department of Economics.
    17. Bartalotti, Otávio & Kédagni, Désiré & Possebom, Vitor, 2023. "Identifying marginal treatment effects in the presence of sample selection," Journal of Econometrics, Elsevier, vol. 234(2), pages 565-584.
    18. Bharti, Nitin Kumar & Roy, Sutanuka, 2023. "The early origins of judicial stringency in bail decisions: Evidence from early childhood exposure to Hindu-Muslim riots in India," Journal of Public Economics, Elsevier, vol. 221(C).
    19. Zhewen Pan & Zhengxin Wang & Junsen Zhang & Yahong Zhou, 2024. "Marginal treatment effects in the absence of instrumental variables," Papers 2401.17595, arXiv.org, revised Aug 2024.
20. Ash, Elliott & Chen, Daniel L. & Ornaghi, Arianna, 2020. "Stereotypes in High-Stakes Decisions: Evidence from U.S. Circuit Courts," The Warwick Economics Research Paper Series (TWERPS) 1256, University of Warwick, Department of Economics.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wly:empleg:v:20:y:2023:i:2:p:377-408. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

    If you know of missing items citing this one, you can help us creating those links by adding the relevant references in the same way as above, for each refering item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: https://doi.org/10.1111/(ISSN)1740-1461 .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.