Self-learning Agents for Recommerce Markets
DOI: 10.1007/s12599-023-00841-8
References listed on IDEAS
- Rainer Schlosser & Martin Boissier, 2018. "Dealing with the Dimensionality Curse in Dynamic Pricing Competition: Using Frequent Repricing to Compensate Imperfect Market Anticipations," Papers 1809.02433, arXiv.org.
- Salinas, David & Flunkert, Valentin & Gasthaus, Jan & Januschowski, Tim, 2020. "DeepAR: Probabilistic forecasting with autoregressive recurrent networks," International Journal of Forecasting, Elsevier, vol. 36(3), pages 1181-1191.
- Walter R. Stahel, 2016. "The circular economy," Nature, Nature, vol. 531(7595), pages 435-438, March.
- Alexander Kastius & Rainer Schlosser, 2022. "Dynamic pricing under competition using reinforcement learning," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 21(1), pages 50-63, February.
- Torsten J. Gerpott & Jan Berends, 2022. "Competitive pricing on online markets: a literature review," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 21(6), pages 596-622, December.
- Ming Chen & Zhi-Long Chen, 2015. "Recent Developments in Dynamic Pricing Research: Multiple Products, Competition, and Limited Demand Information," Production and Operations Management, Production and Operations Management Society, vol. 24(5), pages 704-731, May.
- Schlosser, Rainer & Chenavaz, Régis Y. & Dimitrov, Stanko, 2021. "Circular economy: Joint dynamic pricing and recycling investments," International Journal of Production Economics, Elsevier, vol. 236(C).
- R. Schlosser & K. Richly, 2019. "Dynamic pricing under competition with data-driven price anticipations and endogenous reference price effects," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 18(6), pages 451-464, December.
- Strauss, Arne K. & Klein, Robert & Steinhardt, Claudius, 2018. "A review of choice-based revenue management: Theory and methods," European Journal of Operational Research, Elsevier, vol. 271(2), pages 375-387.
- David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan , 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
- R. Canan Savaskan & Shantanu Bhattacharya & Luk N. Van Wassenhove, 2004. "Closed-Loop Supply Chain Models with Product Remanufacturing," Management Science, INFORMS, vol. 50(2), pages 239-252, February.
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Ruben Geer & Arnoud V. Boer & Christopher Bayliss & Christine S. M. Currie & Andria Ellina & Malte Esders & Alwin Haensel & Xiao Lei & Kyle D. S. Maclean & Antonio Martinez-Sykora & Asbjørn Nilsen Ris, 2019. "Dynamic pricing and learning with competition: insights from the dynamic pricing challenge at the 2017 INFORMS RM & pricing conference," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 18(3), pages 185-203, June.
- Klein, Robert & Koch, Sebastian & Steinhardt, Claudius & Strauss, Arne K., 2020. "A review of revenue management: Recent generalizations and advances in industry applications," European Journal of Operational Research, Elsevier, vol. 284(2), pages 397-412.
- Syed A. M. Shihab & Peng Wei, 2022. "A deep reinforcement learning approach to seat inventory control for airline revenue management," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 21(2), pages 183-199, April.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one; a minimal illustrative sketch of this shared-reference/shared-citer criterion follows the list below.
- Torsten J. Gerpott & Jan Berends, 2022. "Competitive pricing on online markets: a literature review," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 21(6), pages 596-622, December.
- Schlosser, Rainer & Gönsch, Jochen, 2023. "Risk-averse dynamic pricing using mean-semivariance optimization," European Journal of Operational Research, Elsevier, vol. 310(3), pages 1151-1163.
- Anton, Ramona & Chenavaz, Régis Y. & Paraschiv, Corina, 2023. "Dynamic pricing, reference price, and price-quality relationship," Journal of Economic Dynamics and Control, Elsevier, vol. 146(C).
- Bo Hu & Jiaxi Li & Shuang Li & Jie Yang, 2019. "A Hybrid End-to-End Control Strategy Combining Dueling Deep Q-network and PID for Transient Boost Control of a Diesel Engine with Variable Geometry Turbocharger and Cooled EGR," Energies, MDPI, vol. 12(19), pages 1-15, September.
- Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
- Wang, Xuan & Wang, Rui & Jin, Ming & Shu, Gequn & Tian, Hua & Pan, Jiaying, 2020. "Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
- Marlin W. Ulmer & Alan Erera & Martin Savelsbergh, 2022. "Dynamic service area sizing in urban delivery," OR Spectrum: Quantitative Approaches in Management, Springer; Gesellschaft für Operations Research e.V., vol. 44(3), pages 763-793, September.
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
- Parisa Famil Alamdar & Abbas Seifi, 2024. "Dynamic pricing of differentiated products under competition with reference price effects using a neural network-based approach," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 23(6), pages 575-587, December.
- Lai, Jianfa & Weng, Lin-Chen & Peng, Xiaoling & Fang, Kai-Tai, 2022. "Construction of symmetric orthogonal designs with deep Q-network and orthogonal complementary design," Computational Statistics & Data Analysis, Elsevier, vol. 171(C).
- R. Schlosser & K. Richly, 2019. "Dynamic pricing under competition with data-driven price anticipations and endogenous reference price effects," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 18(6), pages 451-464, December.
- Bandar Alkhayyal, 2019. "Corporate Social Responsibility Practices in the U.S.: Using Reverse Supply Chain Network Design and Optimization Considering Carbon Cost," Sustainability, MDPI, vol. 11(7), pages 1-22, April.
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Malte Reinschmidt & József Fortágh & Andreas Günther & Valentin V. Volchkov, 2024. "Reinforcement learning in cold atom experiments," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
- Patrick Schroeder & Kartika Anggraeni & Uwe Weber, 2019. "The Relevance of Circular Economy Practices to the Sustainable Development Goals," Journal of Industrial Ecology, Yale University, vol. 23(1), pages 77-95, February.
- Lin Wang & Xingang Xu & Xuhui Zhao & Baozhu Li & Ruijuan Zheng & Qingtao Wu, 2021. "A randomized block policy gradient algorithm with differential privacy in Content Centric Networks," International Journal of Distributed Sensor Networks, , vol. 17(12), pages 15501477211, December.
- Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
- Jin, Jiahuan & Cui, Tianxiang & Bai, Ruibin & Qu, Rong, 2024. "Container port truck dispatching optimization using Real2Sim based deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 315(1), pages 161-175.
- Morlotti, Chiara & Mantin, Benny & Malighetti, Paolo & Redondi, Renato, 2024. "Price volatility of revenue managed goods: Implications for demand and price elasticity," European Journal of Operational Research, Elsevier, vol. 312(3), pages 1039-1058.
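The relatedness criterion described above, shared cited works plus shared citing works, can be made concrete with a minimal sketch. The field names and the plain overlap count below are assumptions for illustration only; they are not the actual RePEc/CitEc ranking procedure.

    # Illustrative sketch only: score each candidate item by how many cited
    # works ("refs") and how many citing works ("citers") it shares with the
    # target item, then sort by that overlap. Field names are hypothetical.

    def relatedness(item_a: dict, item_b: dict) -> int:
        """Shared cited works plus shared citing works."""
        shared_refs = len(item_a["refs"] & item_b["refs"])
        shared_citers = len(item_a["citers"] & item_b["citers"])
        return shared_refs + shared_citers

    def most_related(target: dict, candidates: list, top_n: int = 20) -> list:
        """Candidates ordered by descending overlap with the target item."""
        ranked = sorted(candidates, key=lambda c: relatedness(target, c), reverse=True)
        return ranked[:top_n]

    # Hypothetical handles, for demonstration only.
    target = {"refs": {"h1", "h2", "h3"}, "citers": {"c1", "c2"}}
    candidates = [
        {"handle": "item-A", "refs": {"h1", "h2"}, "citers": {"c1"}},
        {"handle": "item-B", "refs": {"h9"}, "citers": set()},
    ]
    print([c["handle"] for c in most_related(target, candidates)])  # ['item-A', 'item-B']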
More about this item
Keywords
Recommerce; Dynamic pricing; Competition; Reinforcement learning; Market simulation; Sustainability
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:binfse:v:66:y:2024:i:4:d:10.1007_s12599-023-00841-8. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.