Author
Listed:
- Subrota Kumar Mondal
(School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China
These authors contributed equally to this work.)
- Xiaohai Wu
(School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China
These authors contributed equally to this work.)
- Hussain Mohammed Dipu Kabir
(Deakin University, Geelong, VIC 3216, Australia)
- Hong-Ning Dai
(Department of Computer Science, Hong Kong Baptist University, Hong Kong, China)
- Kan Ni
(School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China)
- Honggang Yuan
(Software Engineering Institute, East China Normal University, Shanghai 200062, China)
- Ting Wang
(Software Engineering Institute, East China Normal University, Shanghai 200062, China)
Abstract
Most enterprises now divide large monolithic services into many loosely coupled, specialized microservices that can be developed and deployed independently. Docker, a lightweight virtualization technology, has been widely adopted to run these diverse microservices, and Kubernetes has become the portable, extensible, open-source platform of choice for orchestrating such containerized microservice applications. To adapt to frequently changing user demand, Kubernetes offers an automated scaling mechanism, the Horizontal Pod Autoscaler (HPA), which scales an application based on its current workload. This native autoscaler, however, is purely reactive: it cannot anticipate future workload and therefore cannot scale proactively, leading to QoS (quality of service) violations, long tail latency, and inefficient use of server resources. In this paper, we propose a new proactive scaling scheme based on deep learning to compensate for the shortcomings of HPA, the default autoscaler in Kubernetes. After careful experimental evaluation and comparative analysis, we adopt the Gated Recurrent Unit (GRU) model, which offers higher prediction accuracy and efficiency, as the load predictor, supplemented by a stability window mechanism that improves the accuracy and stability of the predictions. Finally, using the third-party Custom Pod Autoscaler (CPA) framework, we package our custom autoscaling algorithm and deploy it in a real Kubernetes cluster. Comprehensive experimental results demonstrate the feasibility of our autoscaling scheme, which significantly outperforms the existing Horizontal Pod Autoscaler (HPA).
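The pipeline described in the abstract, where a GRU model forecasts the upcoming workload, an HPA-style rule converts the forecast into a replica count, and a stability window smooths scale-down decisions, can be pictured with the short sketch below. This is a minimal illustration in Python/PyTorch, not the authors' implementation; the class and function names, the window size, and the per-pod capacity are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code): GRU-based workload prediction plus a
# stability window, used to derive a proactive replica count for a deployment.
import math
import torch
import torch.nn as nn


class GRUWorkloadPredictor(nn.Module):
    """Predict the next workload sample from a sliding window of recent metrics."""

    def __init__(self, input_size=1, hidden_size=32, num_layers=1):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, seq_len, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])   # forecast for the next time step


def desired_replicas(predicted_load, per_pod_capacity, min_replicas=1, max_replicas=10):
    """HPA-style rule: ceil(predicted load / per-pod capacity), clamped to bounds."""
    replicas = math.ceil(predicted_load / per_pod_capacity)
    return max(min_replicas, min(max_replicas, replicas))


def stabilized_replicas(history, window=5):
    """Stability window (illustrative): apply the largest recommendation from the
    last `window` decisions, so short-lived dips do not trigger premature scale-down."""
    return max(history[-window:])


if __name__ == "__main__":
    model = GRUWorkloadPredictor()
    recent_metrics = torch.rand(1, 30, 1)        # last 30 samples of, e.g., requests/s
    predicted = model(recent_metrics).item()     # untrained model; shapes only
    recommendation = desired_replicas(abs(predicted) * 100, per_pod_capacity=20)
    history = [2, 3, 3, 2, recommendation]
    print("replicas to apply:", stabilized_replicas(history))
```

In a real deployment, a loop like this would run inside a Custom Pod Autoscaler container, reading metrics from the cluster and patching the target Deployment's replica count each evaluation interval.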
Suggested Citation
Subrota Kumar Mondal & Xiaohai Wu & Hussain Mohammed Dipu Kabir & Hong-Ning Dai & Kan Ni & Honggang Yuan & Ting Wang, 2023.
"Toward Optimal Load Prediction and Customizable Autoscaling Scheme for Kubernetes,"
Mathematics, MDPI, vol. 11(12), pages 1-30, June.
Handle:
RePEc:gam:jmathe:v:11:y:2023:i:12:p:2675-:d:1169597
Citations
Cited by:
- Sérgio N. Silva & Mateus A. S. de S. Goldbarg & Lucileide M. D. da Silva & Marcelo A. C. Fernandes, 2024.
"Application of Fuzzy Logic for Horizontal Scaling in Kubernetes Environments within the Context of Edge Computing,"
Future Internet, MDPI, vol. 16(9), pages 1-20, September.
- Subrota Kumar Mondal & Zhen Zheng & Yuning Cheng, 2024.
"On the Optimization of Kubernetes toward the Enhancement of Cloud Computing,"
Mathematics, MDPI, vol. 12(16), pages 1-26, August.