Printed from https://ideas.repec.org/p/arx/papers/2405.19317.html

Adaptive Generalized Neyman Allocation: Local Asymptotic Minimax Optimal Best Arm Identification

Author

  • Masahiro Kato

Abstract

This study investigates a local asymptotic minimax optimal strategy for fixed-budget best arm identification (BAI). We propose the Adaptive Generalized Neyman Allocation (AGNA) strategy and show that its worst-case upper bound on the probability of misidentifying the best arm matches the worst-case lower bound under the small-gap regime, where the gap between the expected outcomes of the best and suboptimal arms is small. Our strategy generalizes the Neyman allocation for two-armed bandits (Neyman, 1934; Kaufmann et al., 2016) and refines existing strategies such as those proposed by Glynn & Juneja (2004) and Shin et al. (2018). Compared to Komiyama et al. (2022), which proposes a minimax rate-optimal strategy, our strategy attains a tighter upper bound that exactly matches the lower bound, including the constant terms, by restricting the class of distributions to small-gap distributions. By presenting a local asymptotic minimax optimal strategy, our result contributes to the longstanding open question of whether asymptotically optimal strategies exist in fixed-budget BAI.
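To make the underlying allocation idea concrete: for a two-armed Gaussian bandit, the classical Neyman allocation (Neyman, 1934) samples each arm in proportion to its outcome standard deviation; since the variances are unknown in practice, an adaptive variant plugs in running estimates. The sketch below is a minimal illustration of that adaptive two-armed rule only — it is not the paper's AGNA strategy, which generalizes this principle beyond two arms, and the warm-up size and sampling scheme are illustrative choices, not taken from the paper.

```python
import random
import statistics

def adaptive_neyman_two_armed(means, stds, budget, seed=0):
    """Fixed-budget BAI on a two-armed Gaussian bandit.

    Illustrative sketch of adaptive Neyman allocation: each round,
    arm 0 is pulled with probability sigma_0 / (sigma_0 + sigma_1),
    using empirical standard deviations. After the budget is spent,
    the arm with the larger empirical mean is recommended.
    """
    rng = random.Random(seed)
    samples = [[], []]
    # Warm-up: a few pulls per arm so variance estimates are defined.
    for arm in (0, 1):
        for _ in range(5):
            samples[arm].append(rng.gauss(means[arm], stds[arm]))
    for _ in range(budget - 10):
        s = [statistics.stdev(samples[a]) for a in (0, 1)]
        p0 = s[0] / (s[0] + s[1]) if (s[0] + s[1]) > 0 else 0.5
        arm = 0 if rng.random() < p0 else 1
        samples[arm].append(rng.gauss(means[arm], stds[arm]))
    means_hat = [statistics.fmean(samples[a]) for a in (0, 1)]
    best = max((0, 1), key=lambda a: means_hat[a])
    return best, samples
```

With equal variances this reduces to uniform sampling; with unequal variances the noisier arm receives proportionally more of the budget, which is what drives the variance-dependent bounds discussed in the abstract.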

Suggested Citation

  • Masahiro Kato, 2024. "Adaptive Generalized Neyman Allocation: Local Asymptotic Minimax Optimal Best Arm Identification," Papers 2405.19317, arXiv.org.
  • Handle: RePEc:arx:papers:2405.19317

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2405.19317
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Masahiro Kato, 2023. "Locally Optimal Fixed-Budget Best Arm Identification in Two-Armed Gaussian Bandits with Unknown Variances," Papers 2312.12741, arXiv.org, revised Mar 2024.
    2. Masahiro Kato & Masaaki Imaizumi & Takuya Ishihara & Toru Kitagawa, 2023. "Asymptotically Optimal Fixed-Budget Best Arm Identification with Variance-Dependent Bounds," Papers 2302.02988, arXiv.org, revised Jul 2023.
    3. Masahiro Kato, 2021. "Adaptive Doubly Robust Estimator from Non-stationary Logging Policy under a Convergence of Average Probability," Papers 2102.08975, arXiv.org, revised Mar 2021.
    4. Max Cytrynbaum, 2021. "Optimal Stratification of Survey Experiments," Papers 2111.08157, arXiv.org, revised Aug 2023.
    5. Masahiro Kato & Masaaki Imaizumi & Takuya Ishihara & Toru Kitagawa, 2022. "Best Arm Identification with Contextual Information under a Small Gap," Papers 2209.07330, arXiv.org, revised Jan 2023.
    6. Harrison H. Li & Art B. Owen, 2023. "Double machine learning and design in batch adaptive experiments," Papers 2309.15297, arXiv.org.
    7. Chao Qin & Daniel Russo, 2024. "Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification," Papers 2402.10592, arXiv.org, revised Jul 2024.
    8. Bhattacharya, Debopam & Dupas, Pascaline, 2012. "Inferring welfare maximizing treatment assignment under budget constraints," Journal of Econometrics, Elsevier, vol. 167(1), pages 168-196.
    9. Jinglong Zhao, 2024. "Experimental Design For Causal Inference Through An Optimization Lens," Papers 2408.09607, arXiv.org, revised Aug 2024.
    10. Masahiro Kato & Kenshi Abe & Kaito Ariu & Shota Yasui, 2020. "A Practical Guide of Off-Policy Evaluation for Bandit Problems," Papers 2010.12470, arXiv.org.
    11. Jiang, Liang & Phillips, Peter C.B. & Tao, Yubo & Zhang, Yichong, 2023. "Regression-adjusted estimation of quantile treatment effects under covariate-adaptive randomizations," Journal of Econometrics, Elsevier, vol. 234(2), pages 758-776.
    12. Kyungchul Song, 2009. "Efficient Estimation of Average Treatment Effects under Treatment-Based Sampling," PIER Working Paper Archive 09-011, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania.
    13. Liang Jiang & Oliver B. Linton & Haihan Tang & Yichong Zhang, 2022. "Improving Estimation Efficiency via Regression-Adjustment in Covariate-Adaptive Randomizations with Imperfect Compliance," Papers 2201.13004, arXiv.org, revised Jun 2023.
    14. Masahiro Kato & Kyohei Okumura & Takuya Ishihara & Toru Kitagawa, 2024. "Adaptive Experimental Design for Policy Learning," Papers 2401.03756, arXiv.org, revised Feb 2024.
    15. Masahiro Kato & Takuya Ishihara & Junya Honda & Yusuke Narita, 2020. "Efficient Adaptive Experimental Design for Average Treatment Effect Estimation," Papers 2002.05308, arXiv.org, revised Oct 2021.
    16. Yuehao Bai & Azeem M. Shaikh & Max Tabord-Meehan, 2024. "A Primer on the Analysis of Randomized Experiments and a Survey of some Recent Advances," Papers 2405.03910, arXiv.org.
    17. Masahiro Kato & Yusuke Kaneko, 2020. "Off-Policy Evaluation of Bandit Algorithm from Dependent Samples under Batch Update Policy," Papers 2010.13554, arXiv.org.
    18. Masahiro Kato, 2023. "Worst-Case Optimal Multi-Armed Gaussian Best Arm Identification with a Fixed Budget," Papers 2310.19788, arXiv.org, revised Mar 2024.
    19. Masahiro Kato, 2020. "Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales," Papers 2006.06982, arXiv.org.
    20. Masahiro Kato & Akihiro Oga & Wataru Komatsubara & Ryo Inokuchi, 2024. "Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices," Papers 2403.03589, arXiv.org, revised Jun 2024.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.