Transforming Software Testing in the US: Generative AI Models for Realistic User Simulation
Author
Abstract
Suggested Citation
Download full text from publisher
References listed on IDEAS
- Abhijit Gosavi, 2009. "Reinforcement Learning: A Tutorial Survey and Recent Advances," INFORMS Journal on Computing, INFORMS, vol. 21(2), pages 178-192, May.
- John D. Sterman, 1987. "Testing Behavioral Simulation Models by Direct Experiment," Management Science, INFORMS, vol. 33(12), pages 1572-1592, December.
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Mohammed Majid Bakhsh & Md Shaikat Alam Joy & Gazi Touhidul Alam, 2024. "Revolutionizing BA-QA Team Dynamics: AI-Driven Collaboration Platforms for Accelerated Software Quality in the US Market," Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023, Open Knowledge, vol. 7(01), pages 63-76.
- Sandeep Pochu & Sai Rama Krishna Nersu & Srikanth Reddy Kathram, 2024. "Zero Trust Principles in Cloud Security: A DevOps Perspective," Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023, Open Knowledge, vol. 6(1), pages 660-671.
- Sandeep Pochu & Sai Rama Krishna Nersu & Srikanth Reddy Kathram, 2024. "Enhancing Cloud Security with Automated Service Mesh Implementations in DevOps Pipelines," Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023, Open Knowledge, vol. 7(01), pages 90-103.
- Sandeep Pochu & Sai Rama Krishna Nersu & Srikanth Reddy Kathram, 2024. "Multi-Cloud DevOps Strategies: A Framework for Agility and Cost Optimization," Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023, Open Knowledge, vol. 7(01), pages 104-119.
- Dr. Alejandro García, 2024. "AI at the Crossroads of Health and Society: Emerging Paradigms," Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023, Open Knowledge, vol. 7(01), pages 150-160.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Voelkel, Michael A. & Sachs, Anna-Lena & Thonemann, Ulrich W., 2020. "An aggregation-based approximate dynamic programming approach for the periodic review model with random yield," European Journal of Operational Research, Elsevier, vol. 281(2), pages 286-298.
- Yinhe Bu & Xingping Zhang, 2021. "On the Way to Integrate Increasing Shares of Variable Renewables in China: Experience from Flexibility Modification and Deep Peak Regulation Ancillary Service Market Based on MILP-UC Programming," Sustainability, MDPI, vol. 13(5), pages 1-22, February.
- Fang, Jianhao & Hu, Weifei & Liu, Zhenyu & Chen, Weiyi & Tan, Jianrong & Jiang, Zhiyu & Verma, Amrit Shankar, 2022. "Wind turbine rotor speed design optimization considering rain erosion based on deep reinforcement learning," Renewable and Sustainable Energy Reviews, Elsevier, vol. 168(C).
- Dieter Hendricks & Diane Wilcox, 2014. "A reinforcement learning extension to the Almgren-Chriss model for optimal trade execution," Papers 1403.2229, arXiv.org.
- Stephan Billinger & Kannan Srikanth & Nils Stieglitz & Terry R. Schumacher, 2021. "Exploration and exploitation in complex search tasks: How feedback influences whether and where human agents search," Strategic Management Journal, Wiley Blackwell, vol. 42(2), pages 361-385, February.
- Son, Joong Y. & Sheu, Chwen, 2008. "The impact of replenishment policy deviations in a decentralized supply chain," International Journal of Production Economics, Elsevier, vol. 113(2), pages 785-804, June.
- Huy Chau & Duy Nguyen & Thai Nguyen, 2024. "Continuous-time optimal investment with portfolio constraints: a reinforcement learning approach," Papers 2412.10692, arXiv.org.
- Wang, Xianjia & Yang, Zhipeng & Liu, Yanli & Chen, Guici, 2023. "A reinforcement learning-based strategy updating model for the cooperative evolution," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 618(C).
- Bunn, Derek W. & Oliveira, Fernando S., 2016. "Dynamic capacity planning using strategic slack valuation," European Journal of Operational Research, Elsevier, vol. 253(1), pages 40-50.
- Akshaj Tammewar & Nikita Chaudhari & Bunny Saini & Divya Venkatesh & Ganpathiraju Dharahas & Deepali Vora & Shruti Patil & Ketan Kotecha & Sultan Alfarhood, 2023. "Improving the Performance of Autonomous Driving through Deep Reinforcement Learning," Sustainability, MDPI, vol. 15(18), pages 1-18, September.
- Gencer, Busra & van Ackere, Ann, 2021. "Achieving long-term renewable energy goals: Do intermediate targets matter?," Utilities Policy, Elsevier, vol. 71(C).
- Federico Cosenz & Guido Noto, 2016. "Applying System Dynamics Modelling to Strategic Management: A Literature Review," Systems Research and Behavioral Science, Wiley Blackwell, vol. 33(6), pages 703-741, November.
- Paich, Mark & Sterman, John, 1992. "Boom, bust and failures to learn in experimental markets," Working papers 3441-92, Massachusetts Institute of Technology (MIT), Sloan School of Management.
- Andreas Rauh & Marit Lahme & Oussama Benzinane, 2022. "A Comparison of the Use of Pontryagin’s Maximum Principle and Reinforcement Learning Techniques for the Optimal Charging of Lithium-Ion Batteries," Clean Technol., MDPI, vol. 4(4), pages 1-21, December.
- Puwei Lu & Wenkai Huang & Junlong Xiao & Fobao Zhou & Wei Hu, 2021. "Adaptive Proportional Integral Robust Control of an Uncertain Robotic Manipulator Based on Deep Deterministic Policy Gradient," Mathematics, MDPI, vol. 9(17), pages 1-16, August.
- Stephen J. Mezias & Mary Ann Glynn, 1993. "The three faces of corporate renewal: Institution, revolution, and evolution," Strategic Management Journal, Wiley Blackwell, vol. 14(2), pages 77-101, February.
- Jia, Liangyue & Hao, Jia & Hall, John & Nejadkhaki, Hamid Khakpour & Wang, Guoxin & Yan, Yan & Sun, Mengyuan, 2021. "A reinforcement learning based blade twist angle distribution searching method for optimizing wind turbine energy power," Energy, Elsevier, vol. 215(PA).
- Chan, Chi Kin & Lee, H.W.J. & Wong, K.H., 2008. "Optimal feedback production for a two-level supply chain," International Journal of Production Economics, Elsevier, vol. 113(2), pages 619-625, June.
- Fu, Lingxian & Tang, Jie & Meng, Fanyong, 2021. "A disease transmission inspired closed-loop supply chain dynamic model for product collection," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 152(C).
- John W. Boudreau, 2004. "50th Anniversary Article: Organizational Behavior, Strategy, Performance, and Design in Management Science," Management Science, INFORMS, vol. 50(11), pages 1463-1476, November.
More about this item
Keywords
Generative AI; Reinforcement Learning (RL); User Simulation; Software Testing; US QA Landscape.
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:das:njaigs:v:6:y:2024:i:1:p:635-659:id:292. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Open Knowledge. General contact details of provider: https://newjaigs.com/index.php/JAIGS/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.