
Join-Up-To(m): improved hyperscalable load balancing

Authors

Listed:
  • Grzegorz Kielanski (University of Antwerp)
  • Tim Hellemans (University of Antwerp)
  • Benny Van Houdt (University of Antwerp)

Abstract

Various load balancing policies are known to achieve vanishing waiting times in the large-scale limit, that is, when the number of servers tends to infinity. These policies either require a communication overhead of one message per job or require job size information. Load balancing policies with an overhead below one message per job are called hyperscalable policies. While these policies often have bounded queue lengths in the large-scale limit and work well when the overhead is somewhat below one, they show poor performance when the communication overhead becomes small: the mean response time tends to infinity when the overhead tends to zero, even at low loads. In this paper, we introduce a hyperscalable load balancing policy, called Join-Up-To(m), that remains effective even when the communication overhead tends to zero. To study its performance under general job size distributions, we make use of the "queue at the cavity" approach. We provide explicit results for the first two moments of the response time, the generating function of the queue length distribution and the Laplace transform of the response time. These results show that the mean response time only depends on the first two moments of the job size distribution.
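The abstract does not spell out the exact dispatching rules of Join-Up-To(m), so the following is only a hypothetical sketch of the general idea behind hyperscalable policies: if the dispatcher sends servers batches of up to m jobs per message, the communication overhead is roughly 1/m messages per job and can be driven arbitrarily close to zero by increasing m. All names and the polling rule below are illustrative assumptions, not the paper's policy.

```python
import random

def simulate_batch_dispatch(n_servers, m, n_jobs, mean_job_size=1.0, seed=0):
    """Toy sketch (not the paper's Join-Up-To(m) rules): the dispatcher
    contacts one server per message and assigns it a batch of up to m jobs,
    so the overhead is about 1/m messages per job."""
    rng = random.Random(seed)
    work = [0.0] * n_servers   # outstanding work per server (no departures modeled)
    messages = 0               # probe/assignment messages sent by the dispatcher
    jobs_sent = 0
    while jobs_sent < n_jobs:
        # Illustrative polling rule: pick the server with the least assigned work.
        s = min(range(n_servers), key=lambda i: work[i])
        messages += 1          # one message covers the whole batch
        batch = min(m, n_jobs - jobs_sent)
        for _ in range(batch):
            # Exponential job sizes with the given mean (arbitrary choice).
            work[s] += rng.expovariate(1.0 / mean_job_size)
        jobs_sent += batch
    return messages / n_jobs   # overhead in messages per job

# With m = 1 the overhead is one message per job; with m = 4 it drops to 0.25.
```

The point of the sketch is only the overhead accounting: batching m jobs per message makes the per-job overhead tend to zero as m grows, which is the regime in which the paper shows Join-Up-To(m) remains effective while earlier hyperscalable policies degrade.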

Suggested Citation

  • Grzegorz Kielanski & Tim Hellemans & Benny Van Houdt, 2023. "Join-Up-To(m): improved hyperscalable load balancing," Queueing Systems: Theory and Applications, Springer, vol. 105(3), pages 291-316, December.
  • Handle: RePEc:spr:queues:v:105:y:2023:i:3:d:10.1007_s11134-023-09897-5
    DOI: 10.1007/s11134-023-09897-5
    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11134-023-09897-5
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11134-023-09897-5?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:queues:v:105:y:2023:i:3:d:10.1007_s11134-023-09897-5. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.