
Cancer Screening Markers: A Simple Strategy to Substantially Reduce the Sample Size for Validation

Author

Listed:
  • Stuart G. Baker

Abstract

Background. Studies to validate a cancer prediction model based on cancer screening markers collected in stored specimens from asymptomatic persons typically require large specimen collection sample sizes. A standard sample size calculation targets a true-positive rate (TPR) of 0.8 with a 2.5% lower bound of 0.7 at a false-positive rate (FPR) of 0.01 with a 5% upper bound of 0.03. If the probability of developing cancer during the study is P = 0.01, the specimen collection sample size based on the standard calculation is 7600. Methods. The strategy to reduce the specimen collection sample size is to decrease both the lower bound of the TPR and the upper bound of the FPR while keeping a positive lower bound on the anticipated clinical utility. Results. The new sample size calculation targets a TPR of 0.4 with a 2.5% lower bound of 0.10 and an FPR of 0.0 with a 5% upper bound of 0.005. With P = 0.01, the specimen collection sample size based on the new calculation is 1800 instead of 7600. Limitations. The new sample size calculation requires a minimum benefit-cost ratio (the number of false positives traded for a true positive). With P = 0.01, the minimum benefit-cost ratio is 5, which is plausible in many studies. Conclusion. In validation studies for cancer screening markers, the strategy can substantially reduce the specimen collection sample size, lowering costs and making some otherwise infeasible studies feasible.
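The exact sample size formulas are given in the full text. As a rough illustration of the clinical utility argument implied by the abstract's numbers, the Python sketch below assumes a standard net-benefit formulation in which a marker has positive anticipated clinical utility when P x TPR x B exceeds (1 - P) x FPR, where B is the benefit-cost ratio, the number of false positives traded for one true positive. The function names and this particular formulation are illustrative assumptions, not taken from the article.

    # Illustrative sketch (not the article's code): lower bound on anticipated
    # clinical utility and the implied minimum benefit-cost ratio B, assuming
    # utility per screened person = P*TPR - (1 - P)*FPR / B.

    def utility_lower_bound(tpr_lower, fpr_upper, p, b):
        """Lower bound on per-person clinical utility (in units of true positives)
        when B false positives are an acceptable trade for one true positive."""
        return p * tpr_lower - (1 - p) * fpr_upper / b

    def min_benefit_cost_ratio(tpr_lower, fpr_upper, p):
        """Smallest B for which the utility lower bound is non-negative."""
        return (1 - p) * fpr_upper / (p * tpr_lower)

    # New calculation in the abstract: TPR lower bound 0.10, FPR upper bound 0.005, P = 0.01.
    print(min_benefit_cost_ratio(0.10, 0.005, 0.01))    # about 4.95, i.e. roughly 5
    print(utility_lower_bound(0.10, 0.005, 0.01, b=5))  # slightly above 0

    # Simple arithmetic from the abstract: with P = 0.01, a specimen collection of
    # 7600 yields about 76 expected cancer cases, versus about 18 for 1800.

Under these assumptions the calculation yields a minimum benefit-cost ratio of about 5, consistent with the value reported in the abstract: a study willing to trade at least 5 false positives per true positive retains a positive lower bound on anticipated clinical utility at the relaxed TPR and FPR bounds.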

Suggested Citation

  • Stuart G. Baker, 2019. "Cancer Screening Markers: A Simple Strategy to Substantially Reduce the Sample Size for Validation," Medical Decision Making, vol. 39(2), pages 130-136, February.
  • Handle: RePEc:sae:medema:v:39:y:2019:i:2:p:130-136
    DOI: 10.1177/0272989X18819792

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0272989X18819792
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0272989X18819792?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Baker, Stuart G. & Kramer, Barnett S., 2007. "Peirce, Youden, and Receiver Operating Characteristic Curves," The American Statistician, American Statistical Association, vol. 61, pages 343-346, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bonato, Matteo & Demirer, Riza & Gupta, Rangan & Pierdzioch, Christian, 2018. "Gold futures returns and realized moments: A forecasting experiment using a quantile-boosting approach," Resources Policy, Elsevier, vol. 57(C), pages 196-212.
    2. Döpke, Jörg & Fritsche, Ulrich & Pierdzioch, Christian, 2017. "Predicting recessions with boosted regression trees," International Journal of Forecasting, Elsevier, vol. 33(4), pages 745-759.
    3. Pierdzioch, Christian & Rülke, Jan-Christoph, 2015. "On the directional accuracy of forecasts of emerging market exchange rates," International Review of Economics & Finance, Elsevier, vol. 38(C), pages 369-376.
    4. Travis Berge & Òscar Jordà, 2013. "A chronology of turning points in economic activity: Spain, 1850–2011," SERIEs: Journal of the Spanish Economic Association, Springer; Spanish Economic Association, vol. 4(1), pages 1-34, March.
    5. Drehmann, Mathias & Juselius, Mikael, 2014. "Evaluating early warning indicators of banking crises: Satisfying policy requirements," International Journal of Forecasting, Elsevier, vol. 30(3), pages 759-780.
    6. Yusuf Yıldırım & Anirban Sanyal, 2022. "Evaluating the Effectiveness of Early Warning Indicators: An Application of Receiver Operating Characteristic Curve Approach to Panel Data," Scientific Annals of Economics and Business (continues Analele Stiintifice), Alexandru Ioan Cuza University, Faculty of Economics and Business Administration, vol. 69(4), pages 557-597, December.
    7. Robin Greenwood & Samuel G. Hanson & Andrei Shleifer & Jakob Ahm Sørensen, 2022. "Predictable Financial Crises," Journal of Finance, American Finance Association, vol. 77(2), pages 863-921, April.
    8. Tim Meyer, 2019. "On the Directional Accuracy of United States Housing Starts Forecasts: Evidence from Survey Data," The Journal of Real Estate Finance and Economics, Springer, vol. 58(3), pages 457-488, April.
    9. Travis J. Berge & Shu-Chun Chen & Hsieh Fushing & Òscar Jordà, 2010. "A chronology of international business cycles through non-parametric decoding," Research Working Paper RWP 11-13, Federal Reserve Bank of Kansas City.
    10. Ben Van Calster & Andrew J. Vickers & Michael J. Pencina & Stuart G. Baker & Dirk Timmerman & Ewout W. Steyerberg, 2013. "Evaluation of Markers and Risk Prediction Models," Medical Decision Making, vol. 33(4), pages 490-501, May.
    11. Qing Lu & Nancy Obuchowski & Sungho Won & Xiaofeng Zhu & Robert C. Elston, 2010. "Using the Optimal Robust Receiver Operating Characteristic (ROC) Curve for Predictive Genetic Tests," Biometrics, The International Biometric Society, vol. 66(2), pages 586-593, June.
    12. Scott Brave & R. Andrew Butters, 2014. "Nowcasting Using the Chicago Fed National Activity Index," Economic Perspectives, Federal Reserve Bank of Chicago, issue Q I, pages 19-37.
    13. Pierdzioch Christian & Gupta Rangan, 2020. "Uncertainty and Forecasts of U.S. Recessions," Studies in Nonlinear Dynamics & Econometrics, De Gruyter, vol. 24(4), pages 1-20, September.
    14. Scott Brave & R. Andrew Butters, 2010. "Gathering insights on the forest from the trees: a new metric for financial conditions," Working Paper Series WP-2010-07, Federal Reserve Bank of Chicago.
    15. Stuart Baker & Jian-Lun Xu & Ping Hu & Peng Huang, 2014. "Vardeman, S. B. and Morris, M. D. (2013), "Majority Voting by Independent Classifiers can Increase Error Rates," The American Statistician, 67, 94-96: Comment by Baker, Xu, Hu, and Huang and," The American Statistician, Taylor & Francis Journals, vol. 68(2), pages 125-126, May.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:medema:v:39:y:2019:i:2:p:130-136. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.