
Estimating the Accuracy of Jury Verdicts

Author

  • Bruce D. Spencer

Abstract

Average accuracy of jury verdicts for a set of cases can be studied empirically and systematically even when the correct verdict cannot be known. The key is to obtain a second rating of the verdict, for example, the judge's, as in the recent study of criminal cases in the United States by the National Center for State Courts (NCSC). That study, like the famous Kalven‐Zeisel study, showed only modest judge‐jury agreement. Simple estimates of jury accuracy can be developed from the judge‐jury agreement rate; the judge's verdict is not taken as the gold standard. Although the estimates of accuracy are subject to error, under plausible conditions they tend to overestimate the average accuracy of jury verdicts. The jury verdict was estimated to be accurate in no more than 87 percent of the NCSC cases (which, however, should not be regarded as a representative sample with respect to jury accuracy). More refined estimates, including false conviction and false acquittal rates, are developed with models using stronger assumptions. For example, the conditional probability that the jury incorrectly convicts given that the defendant truly was not guilty (a “Type I error”) was estimated at 0.25, with an estimated standard error (s.e.) of 0.07; the conditional probability that a jury incorrectly acquits given that the defendant truly was guilty (a “Type II error”) was estimated at 0.14 (s.e. 0.03); and the difference was estimated at 0.12 (s.e. 0.08). The estimated number of defendants in the NCSC cases who truly are not guilty but are convicted appears to be smaller than the number who truly are guilty but are acquitted. The conditional probability of a wrongful conviction, given that the defendant was convicted, is estimated at 0.10 (s.e. 0.03).
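The logic behind an agreement-based accuracy bound can be illustrated with a deliberately simple model (this is a sketch under strong assumptions, not the paper's full derivation): if judge and jury are each correct with the same probability a, and their errors are conditionally independent given the true verdict, the expected agreement rate is g = a² + (1 − a)², which solves to a = (1 + √(2g − 1))/2 for a ≥ 1/2. Positively correlated errors would inflate agreement, so the solved value is best read as an overestimate of true accuracy. The agreement rate of 0.78 used below is illustrative, roughly the Kalven‐Zeisel figure:

```python
import math

def accuracy_bound(agreement_rate: float) -> float:
    """Upper-bound estimate of rater accuracy from a pairwise agreement rate.

    Model: two raters (e.g., judge and jury) are each correct with the same
    probability a, and their errors are independent given the true verdict.
    The expected agreement rate is then g = a**2 + (1 - a)**2, which solves
    to a = (1 + sqrt(2*g - 1)) / 2, taking the root with a >= 1/2.
    If errors are positively correlated, this overstates true accuracy,
    so the result should be read as an upper bound, not a point estimate.
    """
    if not 0.5 <= agreement_rate <= 1.0:
        raise ValueError("agreement rate must lie in [0.5, 1.0] for a real root")
    return (1.0 + math.sqrt(2.0 * agreement_rate - 1.0)) / 2.0

# Illustrative agreement rate of 0.78 (roughly the Kalven-Zeisel figure):
print(round(accuracy_bound(0.78), 2))  # 0.87
```

Note that an agreement rate of about 0.78 reproduces the headline figure of roughly 87 percent accuracy, consistent with the abstract's point that such bounds tend to overestimate jury accuracy when the independence assumption fails.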

Suggested Citation

  • Bruce D. Spencer, 2007. "Estimating the Accuracy of Jury Verdicts," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 4(2), pages 305-329, July.
  • Handle: RePEc:wly:empleg:v:4:y:2007:i:2:p:305-329
    DOI: 10.1111/j.1740-1461.2007.00090.x

    Download full text from publisher

    File URL: https://doi.org/10.1111/j.1740-1461.2007.00090.x
    Download Restriction: no

    File URL: https://libkey.io/10.1111/j.1740-1461.2007.00090.x?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations



    Cited by:

    1. Obidzinski, Marie & Oytana, Yves, 2019. "Identity errors and the standard of proof," International Review of Law and Economics, Elsevier, vol. 57(C), pages 73-80.
    2. Kanaya, Shin & Taylor, Luke, 2020. "Type I and Type II Error Probabilities in the Courtroom," MPRA Paper 100217, University Library of Munich, Germany.
    3. Geoffrey Jones & Wesley O. Johnson & Timothy E. Hanson & Ronald Christensen, 2010. "Identifiability of Models for Multiple Diagnostic Testing in the Absence of a Gold Standard," Biometrics, The International Biometric Society, vol. 66(3), pages 855-863, September.
    4. Theodore Eisenberg & Michael Heise, 2009. "Plaintiphobia in State Courts? An Empirical Study of State Court Trials on Appeal," The Journal of Legal Studies, University of Chicago Press, vol. 38(1), pages 121-155, January.
    5. Sangjoon Kim & Jaihyun Park & Kwangbai Park & Jin‐Sup Eom, 2013. "Judge‐Jury Agreement in Criminal Cases: The First Three Years of the Korean Jury System," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 10(1), pages 35-53, March.
    6. Matteo Rizzolli, 2016. "Adjudication: Type-I and Type-II Errors," CERBE Working Papers wpC15, CERBE Center for Relationship Banking and Economics.
    7. Christoph Engel & Andreas Glöckner, 2008. "Can We Trust Intuitive Jurors? An Experimental Analysis," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2008_36, Max Planck Institute for Research on Collective Goods.
    8. Bruce D. Spencer, 2012. "When Do Latent Class Models Overstate Accuracy for Diagnostic and Other Classifiers in the Absence of a Gold Standard?," Biometrics, The International Biometric Society, vol. 68(2), pages 559-566, June.



For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact Wiley Content Delivery. General contact details of provider: https://doi.org/10.1111/(ISSN)1740-1461 .


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.