Author
Listed:
- Vishal Gupta
(Data Science and Operations, Marshall School of Business, University of Southern California, Los Angeles, California 90089)
- Michael Huang
(Data Science and Operations, Marshall School of Business, University of Southern California, Los Angeles, California 90089)
- Paat Rusmevichientong
(Data Science and Operations, Marshall School of Business, University of Southern California, Los Angeles, California 90089)
Abstract
Motivated by the poor performance of cross-validation in settings where data are scarce, we propose a novel estimator of the out-of-sample performance of a policy in data-driven optimization. Our approach exploits the optimization problem’s sensitivity analysis to estimate the gradient of the optimal objective value with respect to the amount of noise in the data and uses the estimated gradient to debias the policy’s in-sample performance. Unlike cross-validation techniques, our approach avoids sacrificing data for a test set and uses all data for training and hence is well suited to settings where data are scarce. We prove bounds on the bias and variance of our estimator for optimization problems with uncertain linear objectives but known, potentially nonconvex, feasible regions. For more specialized optimization problems where the feasible region is “weakly coupled” in a certain sense, we prove stronger results. Specifically, we provide explicit high-probability bounds on the error of our estimator that hold uniformly over a policy class and depend on the problem’s dimension and the policy class’s complexity. Our bounds show that under mild conditions, the error of our estimator vanishes as the dimension of the optimization problem grows, even if the amount of available data remains small and constant. Said differently, we prove our estimator performs well in the small-data, large-scale regime. Finally, we numerically compare our proposed method to state-of-the-art approaches through a case study on dispatching emergency medical response services using real data. Our method provides more accurate estimates of out-of-sample performance and learns better-performing policies.
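Illustrative sketch (not the paper's estimator): the debiasing idea described in the abstract can be caricatured in a few lines of Python. The function name, the solve oracle, the assumed known noise level sigma, and the finite-difference bump below are all hypothetical devices introduced purely for exposition; the paper derives its gradient correction from the optimization problem's sensitivity analysis rather than from reoptimizing on perturbed data.

    import numpy as np

    def debiased_performance(c_hat, solve, sigma, n_perturb=50, eps=0.1, seed=0):
        # Illustrative only: debias the in-sample value of the plug-in policy
        # for a problem max_{x in X} c'x, where c_hat is a noisy estimate of c
        # with (assumed known) noise level sigma, and solve(c) returns an
        # optimal feasible x for cost vector c.
        x_hat = solve(c_hat)                  # data-driven (plug-in) decision
        in_sample = float(c_hat @ x_hat)      # optimistically biased in-sample value
        rng = np.random.default_rng(seed)
        bumped = []
        for _ in range(n_perturb):
            # slightly noisier copy of the data
            c_pert = c_hat + eps * sigma * rng.standard_normal(c_hat.shape)
            bumped.append(float(c_pert @ solve(c_pert)))
        # crude finite-difference proxy for the derivative of the optimal value
        # with respect to the amount of noise in the data
        grad_est = (np.mean(bumped) - in_sample) / eps
        return in_sample - grad_est           # debiased performance estimate

    # Hypothetical usage: the "policy" simply picks the item with the largest
    # estimated coefficient.
    solve = lambda c: (np.arange(c.size) == np.argmax(c)).astype(float)
    print(debiased_performance(np.array([1.0, 0.5, 0.2]), solve, sigma=0.3))

This sketch only conveys the structure of the idea (in-sample value minus a noise-sensitivity correction); the paper's estimator avoids repeated reoptimization and comes with the bias, variance, and uniform error guarantees summarized above.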
Suggested Citation
Vishal Gupta & Michael Huang & Paat Rusmevichientong, 2024.
"Debiasing In-Sample Policy Performance for Small-Data, Large-Scale Optimization,"
Operations Research, INFORMS, vol. 72(2), pages 848-870, March.
Handle:
RePEc:inm:oropre:v:72:y:2024:i:2:p:848-870
DOI: 10.1287/opre.2022.2377