Printed from https://ideas.repec.org/a/inm/orijds/v2y2023i2p161-182.html

Diversity Subsampling: Custom Subsamples from Large Data Sets

Authors

Listed:
  • Boyang Shang

    (Industrial Engineering and Management Science, Northwestern University, Evanston, Illinois 60208)

  • Daniel W. Apley

    (Industrial Engineering and Management Science, Northwestern University, Evanston, Illinois 60208)

  • Sanjay Mehrotra

    (Industrial Engineering and Management Science, Northwestern University, Evanston, Illinois 60208)

Abstract

Subsampling from a large unlabeled data set (i.e., one for which no response values are available yet) is useful in many supervised learning contexts to provide a global view of the data based on only a fraction of the observations. In this paper, we borrow concepts from the well-known sampling/importance resampling technique, which samples from a specified probability distribution, to develop a diversity subsampling approach that selects a subsample from the original data with no prior knowledge of its underlying probability distribution. The goal is to produce a subsample that is, to the maximum extent possible, independently and uniformly distributed over the support of the distribution from which the data are drawn. We give an asymptotic performance guarantee for the proposed method and provide experimental results showing that it performs well for typical finite-size data. We also compare the proposed method with competing diversity subsampling algorithms and demonstrate numerically that subsamples selected by the proposed method are closer to a uniform sample than subsamples selected by other methods. The proposed diversity subsampling (DS) algorithm is more efficient than known methods: it takes only a few minutes to select tens of thousands of subsample points from a data set of size one million. Our DS algorithm generalizes easily to select subsamples following distributions other than uniform. We provide a Python package (FADS) that implements the proposed method.
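The core idea the abstract describes — weighting points by the inverse of an estimated density so that sparse regions of the support are over-represented — can be sketched in a few lines. This is a minimal illustration of the density-biased, SIR-style selection principle, not the authors' FADS implementation: the function name `diversity_subsample`, the k-NN density surrogate, and the brute-force distance computation are all assumptions made for clarity (FADS itself scales to data sets of size one million, which this sketch does not).

```python
import numpy as np

def diversity_subsample(X, n_sub, k=10, seed=None):
    """Select n_sub indices from X, weighting each point by the
    inverse of a k-NN density estimate so that low-density regions
    are over-represented and the subsample is closer to uniform
    over the support of the data distribution."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # brute-force pairwise distances (fine for a small sketch only)
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # distance to the k-th nearest neighbor; index 0 after sorting
    # is the zero self-distance, so column k is the k-th neighbor
    r_k = np.sort(dists, axis=1)[:, k]
    # the k-NN density estimate is proportional to k / (n * r_k**d),
    # so weights proportional to r_k**d favor sparse regions
    w = r_k ** d
    w /= w.sum()
    return rng.choice(n, size=n_sub, replace=False, p=w)

# usage: pick 100 diverse points from a strongly anisotropic cloud
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2)) * [1.0, 0.2]
idx = diversity_subsample(X, 100, k=10, seed=0)
```

Sampling without replacement with these weights is only a crude stand-in for the paper's resampling scheme, but it captures why the result looks closer to uniform than a simple random subsample: dense clumps contribute proportionally fewer points.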

Suggested Citation

  • Boyang Shang & Daniel W. Apley & Sanjay Mehrotra, 2023. "Diversity Subsampling: Custom Subsamples from Large Data Sets," INFORMS Journal on Data Science, INFORMS, vol. 2(2), pages 161-182, October.
  • Handle: RePEc:inm:orijds:v:2:y:2023:i:2:p:161-182
    DOI: 10.1287/ijds.2022.00017

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/ijds.2022.00017
    Download Restriction: no


    References listed on IDEAS

    1. Chun-Wa Ko & Jon Lee & Maurice Queyranne, 1995. "An Exact Algorithm for Maximum Entropy Sampling," Operations Research, INFORMS, vol. 43(4), pages 684-691, August.
    2. Øivind Skare & Erik Bølviken & Lars Holden, 2003. "Improved Sampling-Importance Resampling and Reduced Bias Importance Sampling," Scandinavian Journal of Statistics, vol. 30(4), pages 719-737, December.
    3. Mack, Y. P. & Rosenblatt, M., 1979. "Multivariate k-nearest neighbor density estimates," Journal of Multivariate Analysis, Elsevier, vol. 9(1), pages 1-15, March.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hessa Al-Thani & Jon Lee, 2020. "An R Package for Generating Covariance Matrices for Maximum-Entropy Sampling from Precipitation Chemistry Data," SN Operations Research Forum, Springer, vol. 1(3), pages 1-21, September.
    2. Kurt M. Anstreicher, 2018. "Maximum-entropy sampling and the Boolean quadric polytope," Journal of Global Optimization, Springer, vol. 72(4), pages 603-618, December.
    3. Zheng Li & Guannan Liu & Qi Li, 2017. "Nonparametric Knn estimation with monotone constraints," Econometric Reviews, Taylor & Francis Journals, vol. 36(6-9), pages 988-1006, October.
    4. Kung, Yi-Hung & Lin, Pei-Sheng & Kao, Cheng-Hsiung, 2012. "An optimal k-nearest neighbor for density estimation," Statistics & Probability Letters, Elsevier, vol. 82(10), pages 1786-1791.
    5. Goldengorin, Boris, 2009. "Maximization of submodular functions: Theory and enumeration algorithms," European Journal of Operational Research, Elsevier, vol. 198(1), pages 102-112, October.
    6. Chang, Fang & Qiu, Weiliang & Zamar, Ruben H. & Lazarus, Ross & Wang, Xiaogang, 2010. "clues: An R Package for Nonparametric Clustering Based on Local Shrinking," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 33(i04).
    7. Goldengorin, Boris & Tijssen, Gert A. & Tso, Michael, 1999. "The maximization of submodular functions : old and new proofs for the correctness of the dichotomy algorithm," Research Report 99A17, University of Groningen, Research Institute SOM (Systems, Organisations and Management).
    8. Theodore T. Allen & Olivia K. Hernand & Abdullah Alomair, 2020. "Optimal Off-line Experimentation for Games," Decision Analysis, INFORMS, vol. 17(4), pages 277-298, December.
    9. Zhongzhu Chen & Marcia Fampa & Jon Lee, 2023. "On Computing with Some Convex Relaxations for the Maximum-Entropy Sampling Problem," INFORMS Journal on Computing, INFORMS, vol. 35(2), pages 368-385, March.
    10. Kohtaro Hitomi & Masamune Iwasawa & Yoshihiko Nishiyama, 2018. "Rate Optimal Specification Test When the Number of Instruments is Large," KIER Working Papers 986, Kyoto University, Institute of Economic Research.
    11. Hino, Hideitsu & Koshijima, Kensuke & Murata, Noboru, 2015. "Non-parametric entropy estimators based on simple linear regression," Computational Statistics & Data Analysis, Elsevier, vol. 89(C), pages 72-84.
    12. Gery Geenens, 2014. "Probit Transformation for Kernel Density Estimation on the Unit Interval," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 109(505), pages 346-358, March.
    13. Runhuan Feng & Peng Li, 2021. "Sample Recycling Method -- A New Approach to Efficient Nested Monte Carlo Simulations," Papers 2106.06028, arXiv.org.
    15. Yildiz, Anil & Mern, John & Kochenderfer, Mykel J. & Howland, Michael F., 2023. "Towards sequential sensor placements on a wind farm to maximize lifetime energy and profit," Renewable Energy, Elsevier, vol. 216(C).
    16. Cheng, Philip E., 1995. "A note on strong convergence rates in nonparametric regression," Statistics & Probability Letters, Elsevier, vol. 24(4), pages 357-364, September.
    17. Linda Altieri & Daniela Cocchi, 2021. "Spatial Sampling for Non‐compact Patterns," International Statistical Review, International Statistical Institute, vol. 89(3), pages 532-549, December.
    18. Penrose, Mathew D., 2000. "Central limit theorems for k-nearest neighbour distances," Stochastic Processes and their Applications, Elsevier, vol. 85(2), pages 295-320, February.
    19. Gery Geenens & Arthur Charpentier & Davy Paindaveine, 2014. "Probit Transformation for Nonparametric Kernel Estimation of the Copula Density," Working Papers ECARES ECARES 2014-23, ULB -- Universite Libre de Bruxelles.
    20. Onur Genç & Ali Dağ, 2016. "A machine learning-based approach to predict the velocity profiles in small streams," Water Resources Management: An International Journal, Published for the European Water Resources Association (EWRA), Springer;European Water Resources Association (EWRA), vol. 30(1), pages 43-61, January.
    21. Lucio Barabesi, 2001. "Local parametric density estimation methods in line transect sampling," Metron - International Journal of Statistics, Dipartimento di Statistica, Probabilità e Statistiche Applicate - University of Rome, vol. 0(1-2), pages 22-38.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.