Round-Robin Algorithm in Load Balancing for National Data Centers

ABSTRACT

The Provincial Government of Bali plays a crucial role in administering the public service applications that serve its community, traditional villages, and regional apparatus. Nevertheless, escalating traffic and an uneven distribution of requests have placed substantial burdens on its servers, which may jeopardize application operation and heighten the likelihood of downtime. Efficient load distribution is essential to addressing these difficulties, and the Round Robin algorithm is often employed for this purpose. However, existing research has not examined the distinct circumstances of the Bali Provincial Government's on-premise servers. This study addresses that gap by comprehensively evaluating the Round Robin algorithm's effectiveness in load-balancing the Bali Provincial Government's on-premise servers, assessing the algorithm's appropriateness in this context with the goal of providing practical, implementable recommendations. Such recommendations can optimize system efficiency and minimize downtime, thereby enhancing the provision of vital public services across Bali. Through empirical evaluation and comprehensive analysis, this study provides essential insights for improving server infrastructure and load-balancing strategies. Its findings are valuable to the Bali Provincial Government and serve as a reference for other organizations facing similar challenges in managing server loads, marking a notable advancement toward reliable and practical public service applications in Bali.
distribution [2]. However, a significant gap exists: no prior research has evaluated the performance of the Round Robin algorithm specifically within the Bali Provincial Government's on-premise server context. This work addresses that gap by examining the efficacy of the Round Robin algorithm in the specific context of the on-premise servers used by the provincial government of Bali, a domain that has been noticeably underrepresented in previous research.
Our research aims to offer the Bali Provincial Government carefully crafted, data-driven recommendations customized to its distinct server environment. The study gives particular attention to assessing the round-robin methodology and aims to construct a framework that maximizes the operational efficiency of the National Data Centers managed by the Bali Provincial Government. The work distinguishes itself by exploring a hitherto unexamined domain to address the urgent need for server optimization in a specific and intricate real-world context. Through a thorough examination of the Round Robin algorithm's appropriateness for this environment, we aim to offer fresh perspectives and remedies that can be applied not only by the Bali Provincial Government but also by comparable entities grappling with similar obstacles in managing their server infrastructure. This research can substantially impact the domain of load balancing and server optimization, facilitating more efficient and robust server environments in the years to come.

II. Method
The research design commences with data collection, followed by the formulation of test cases, the execution of those test cases, and the analysis of the resulting test outcomes [3][4][5][6]. The initial phase entails identifying the system environment and infrastructure to be tested, collecting pertinent information about the application under evaluation, and establishing the test's objectives and requirements. The second phase involves formulating test cases for every test scenario, covering all crucial facets of the application and system environment. The third phase entails executing the test cases according to the specified scenarios, documenting the test outcomes, and verifying their alignment with the anticipated results. The concluding phase examines the test outcomes and compares them with the test's objectives and requirements, identifying any issues or flaws in the application or system environment and implementing the requisite enhancements or optimizations. Overall, the study design ensures the systematic and rigorous execution of tests, generating dependable outcomes that can inform further system development. The processes outlined in Figure 1 provide a more comprehensive view.
From Figure 1, the research steps are described in more detail as follows. First, the data collection stage gathers information and data relevant to the research problem to be solved; the methods used are a literature study and interviews. Second, the test case preparation stage is essential in load-balancing research using the Round Robin algorithm on an on-premise server in the Bali Provincial Government; it aims to create a series of test cases for evaluating the performance of the Round Robin algorithm under various conditions. Third, in the test case execution stage, the researcher runs the test cases prepared in the previous stage; the results are expected to yield information about the performance and suitability of the Round Robin algorithm in the Bali Provincial Government's on-premise server environment, supporting a recommendation on the most suitable load-balancing algorithm. In the last stage, the test results are analyzed in depth to determine the performance of the Round Robin algorithm in load-balancing the Bali Provincial Government's on-premise servers. This analysis covers load testing, failover testing, robustness testing, and security testing. Based on this analysis, the researcher concludes the advantages and disadvantages of the Round Robin algorithm for load-balancing the Bali Provincial Government's on-premise servers and recommends the most suitable load-balancing algorithm for that environment.

III. Result and Discussion
This study aims to analyze the performance of the round-robin algorithm in a load balancer. In this study, we tested the performance of the round-robin algorithm in selecting the destination server for each incoming request.
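As background for the evaluation, the core of round-robin selection can be sketched as follows. This is a minimal, illustrative sketch only: the server names and the `pickServer` interface are our own assumptions, not the configuration of the production load balancer under test.

```javascript
// Minimal sketch of round-robin backend selection (illustrative only;
// server names and the pickServer interface are assumptions).
function makeRoundRobin(servers) {
  let next = 0; // index of the server that receives the next request
  return function pickServer() {
    const server = servers[next];
    next = (next + 1) % servers.length; // rotate to the next backend
    return server;
  };
}

// Each incoming request is assigned to the next server in the rotation,
// so requests are spread evenly regardless of their origin.
const pick = makeRoundRobin(["app-01", "app-02", "app-03"]);
const assignments = ["r1", "r2", "r3", "r4"].map(() => pick());
console.log(assignments); // ["app-01", "app-02", "app-03", "app-01"]
```

Because the rotation ignores request content, round robin distributes load uniformly; IP Hash, by contrast, pins each client address to one backend, which trades even distribution for session affinity.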
The test cases use standard testing from Grafana Labs K6 [7][8][9][10][11][12][13]. According to the Grafana Labs documentation, K6 is an open-source load testing tool that simplifies performance testing and increases its productivity for engineers. The following tests were carried out based on the Grafana Labs K6 standard.
The assessment of a load balancer's performance under conditions that roughly resemble its regular workday load, also referred to as average-load testing, is a crucial benchmark. This testing method offers significant insights into the load balancer's ability to continuously achieve its performance targets during regular operations [14]. The insights obtained from average-load testing provide a practical understanding of the load balancer's capacity to manage its routine demands effectively. By simulating common usage patterns, researchers gain a more comprehensive understanding of the load balancer's performance in a context that closely aligns with its practical reality. This information is crucial for guaranteeing that the load balancer can continuously and effectively fulfill the requirements of the systems it serves during regular operations, improving overall stability and user satisfaction.
The stress testing process, as seen in Figure 3, is a crucial stage in assessing the resilience and performance of a load balancer under extreme loads that exceed standard usage patterns. This methodology comprehensively evaluates the load balancer's capacity to uphold system stability and consistent reliability under intense stress [19]. Figure 3 provides a visual representation of the stress testing results, illustrating the responsiveness of the load balancer under extreme conditions. The scenario's purpose is to deliberately impose excessive demands on the system, replicating instances of high usage or unanticipated surges in traffic, in order to identify vulnerabilities, bottlenecks, or possible failure points.
By subjecting the load balancer to these elevated conditions, researchers obtain vital insights into its resilience and its ability to manage unexpected increases in user activity, ensuring the continuous provision of services. Stress testing is of utmost importance in characterizing the load balancer's performance, ensuring its ability to withstand and remain robust in highly demanding usage scenarios. These insights are crucial for enterprises aiming to uphold high availability and smooth user experiences, particularly during increased demand or unforeseen swings in traffic.
Breakpoint testing, as illustrated in Figure 4, is a critical undertaking aimed at precisely identifying a system's underlying constraints. The justification for breakpoint testing is multifaceted, and each factor contributes considerably to the overall durability and strength of the system [28]. Primarily, breakpoint testing plays a crucial role in proactive planning. By intentionally subjecting the load balancer to progressively larger loads until it approaches its breakpoint, organizations obtain valuable insight into the system's operating boundaries. This information forms the basis for thorough remediation techniques for load balancer failures or catastrophic system overloads. With this understanding, companies can establish predetermined risk-mitigation measures, resulting in decreased downtime, limited service disruptions, and the assurance of a prompt and efficient response to obstacles.
Moreover, breakpoint testing is of utmost importance in protocol creation. It enables organizations to optimize response protocols by refining the methods and procedures needed to address prospective challenges. This proactive strategy is crucial for identifying and preemptively resolving vulnerabilities, enhancing the overall dependability and stability of the system.
It is essential to acknowledge that breakpoint testing is a methodical and regulated procedure. The demand is gradually increased until the load balancer nears its breakpoint; at that juncture, the test is manually terminated to mitigate any potential server harm. This cautious strategy safeguards the system's reliability while allowing enterprises to collect vital data about system performance and constraints.
Breakpoint testing is fundamentally a strategic endeavor that enhances system resilience and optimizes performance. The practice enables enterprises to manage the intricacies of load balancing effectively, giving them assurance in their ability to address obstacles proactively, mitigate interruptions, and provide uninterrupted service quality to their consumers.
Before conducting the tests, a scenario was created for each test case. The test scenarios are average-load testing, stress testing, and breakpoint testing.
In the context of load testing, our objective is to accurately simulate the dynamic patterns of user interactions with the load balancer through a meticulously constructed average-load testing scenario, conducted with a high degree of control and methodical precision. The initial stage, which progressively adds users one at a time over 5 minutes, closely resembles the natural accumulation of user engagement during typical usage. This phase enables detailed observation of the load balancer's response to incremental requests, allowing an assessment of its capacity to distribute resources effectively and sustain minimal delay as the number of users grows. It also provides valuable information about the load balancer's handling of the initial surge of connections, a critical factor in guaranteeing a smooth user experience during times of increased demand.
The succeeding step involves 100 users simultaneously engaging the load balancer for 10 minutes, which acts as a critically significant stress test. This rigorous phase simulates situations in which the system is overwhelmed by abrupt increases in traffic, such as the spread of viral content or the execution of marketing campaigns. By subjecting the load balancer to a period of high demand, we can evaluate its capacity to manage substantial workloads while maintaining optimal performance, uptime, and resource allocation. This phase assesses not only the technical capabilities of the load balancer but also its ability to maintain service quality under challenging circumstances, mitigating the risk of service interruptions during periods of high demand [37].
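The average-load shape just described (a 5-minute ramp-up followed by 100 concurrent users for 10 minutes) can be expressed as a K6 script along these lines. This is a sketch under stated assumptions: the target URL is a placeholder rather than the address of the government's load balancer, and the script is a scenario configuration for the K6 CLI (`k6 run script.js`), not a standalone program.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

// Average-load scenario: ramp up to 100 virtual users over 5 minutes,
// then hold 100 concurrent users for 10 minutes.
export const options = {
  stages: [
    { duration: '5m', target: 100 },  // gradual ramp-up, one user at a time
    { duration: '10m', target: 100 }, // sustained average load
  ],
};

export default function () {
  // Placeholder URL; replace with the load balancer endpoint under test.
  http.get('https://loadbalancer.example.local/');
  sleep(1); // pacing between iterations per virtual user
}
```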
Figure 5 provides a comprehensive visual depiction of the dynamic scenario, illustrating the entire testing procedure, facilitating comprehension of the testing stages, and serving as a framework for interpreting and analyzing results. By employing carefully designed testing scenarios and visual aids, businesses can obtain an in-depth understanding of the load balancer's functionality, enabling data-driven decisions aimed at improving system performance and resilience. Figure 6 represents a pivotal juncture where intentional and significant pressure is applied to the load balancer, resulting in a massive surge of incoming traffic. The simulation aims to recreate real-world scenarios in which sudden surges in user activity place substantial pressure on the system's resources and capabilities. The stress test begins with users systematically accessing the load balancer over a duration of 10 minutes, progressively increasing the user count to a substantial aggregate of 200 individuals. This progressive incorporation of users highlights the load balancer's capacity to adjust to an ever-expanding user load while maintaining consistent performance metrics [38]. Once the user count reaches the threshold of 200, the scenario transitions into a prolonged phase of heightened demand, with intensive requests persisting for 10 minutes. This phase replicates high-stress situations in which system resources are fully utilized. During this phase, a comprehensive analysis is performed on critical performance indicators, encompassing response times, resource utilization, and error rates. The resulting data reveals the load balancer's capacity to manage heavy workloads while maintaining a satisfactory level of service quality [39].
The stress testing scenario culminates when users systematically complete their requests within a 5-minute timeframe, producing a gradual decrease in user load. This decline mirrors the natural fall-off in user involvement after a period of increased demand and offers valuable insight into the load balancer's capacity to manage the reduction in incoming requests efficiently. By employing visual representations of stress testing scenarios, organizations can enhance their comprehension of the load balancer's performance under high-stress conditions. These insights are of paramount importance for companies seeking to harden their systems against unexpected surges in user traffic and to ensure uninterrupted service delivery, especially at peak demand.
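The stress shape described above (a 10-minute ramp to 200 concurrent users, a 10-minute hold at peak, then a 5-minute ramp-down) maps naturally onto a K6 `stages` configuration. A sketch follows; the target URL is a placeholder of our own, and the script is a scenario definition for the K6 CLI rather than a standalone program.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

// Stress scenario: 10-minute ramp to 200 virtual users,
// 10-minute hold at peak, then a 5-minute wind-down.
export const options = {
  stages: [
    { duration: '10m', target: 200 }, // progressive ramp-up
    { duration: '10m', target: 200 }, // sustained peak demand
    { duration: '5m', target: 0 },    // gradual decrease in user load
  ],
};

export default function () {
  http.get('https://loadbalancer.example.local/'); // placeholder URL
  sleep(1); // pacing between iterations per virtual user
}
```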
The breakpoint test scenario, as illustrated in Figure 7, is a critical stage within our extensive testing protocol. It methodically evaluates the capabilities and thresholds of the load balancer when confronted with a continuous and substantial increase in user traffic. The process commences with a notable influx of 20,000 users consistently visiting the load balancer, persistently exerting pressure on its capacities until a threshold is reached. This phase aims to determine the specific point at which the load balancer's performance begins to deteriorate or is compromised under high load [40]. To conduct thorough examinations and verify the results, we use the Grafana Labs K6 testing tool. K6 facilitates the precise execution of tests, ensuring adherence to scenarios that faithfully replicate real-world usage patterns, and allows us to extract significant insights into the load balancer's performance in diverse scenarios.
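The breakpoint scenario can likewise be sketched as a K6 script that ramps steadily toward 20,000 virtual users and is aborted manually once the load balancer starts to fail, as described in the text. The executor choice and the two-hour ramp duration here are our assumptions rather than the study's actual parameters, and the URL is a placeholder; the script runs under the K6 CLI.

```javascript
import http from 'k6/http';

// Breakpoint scenario: ramp steadily toward 20,000 virtual users and
// stop the run manually (Ctrl+C) once performance degrades, to avoid
// harming the servers. Ramp length is an assumed value.
export const options = {
  scenarios: {
    breakpoint: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [{ duration: '2h', target: 20000 }], // slow, steady ramp
    },
  },
};

export default function () {
  http.get('https://loadbalancer.example.local/'); // placeholder URL
}
```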
In addition, the server side is closely monitored during testing using Kibana data analytics tools. This dual-monitoring technique functions as a reliable validation mechanism, enabling cross-referencing and verification of the outcomes produced by Grafana Labs K6. The sophisticated analytics features of Kibana yield a comprehensive understanding of the load balancer's performance, including resource use, response times, and error rates [41].
Our testing protocol is rigorous, yielding substantial data and valuable insights. These findings are meticulously arranged and communicated through a collection of tables: the test results of average-load testing (Table 1), the test results of stress testing (Table 2), and the breakpoint testing results (Table 3). The tabular format facilitates comparison and analysis of crucial performance parameters, supporting well-informed decisions and optimization of the load balancer and its connected systems.
Table 1 provides a comprehensive analysis demonstrating the consistent and reliable performance of the Round Robin algorithm across several vital parameters. Notably, the algorithm exhibits exceptional proficiency in processing HTTP requests and establishing secure connections. Its consistent capacity to produce expected outcomes highlights its appropriateness for the server environment of the Bali Provincial Government, where stability and dependability are paramount. The IP Hash algorithm, on the other hand, demonstrates advantages in data transmission rates and in handling a larger number of requests per second, which makes it appealing when swift data delivery is the primary focus. Nevertheless, these benefits are accompanied by compromises in other aspects of performance.
The data from these experiments provides a comprehensive understanding of the performance of both methods, revealing distinct strengths in various load-balancing aspects. Although both Round Robin and IP Hash have their advantages, the predominant evidence indicates that Round Robin's consistent and dependable performance makes it the preferable option within the server ecosystem of the Bali Provincial Government. Nevertheless, individual deployment and use-case needs may necessitate a more nuanced conclusion; further research should examine these aspects to offer more customized advice for the government's servers.
The findings reported in Table 2 demonstrate the superior performance of the round-robin algorithm compared to the IP Hash technique across all critical performance criteria. Significantly, the Round Robin algorithm shows exceptional connection times, request durations, and overall efficiency in handling HTTP requests. Its consistently superior outcomes across all crucial factors establish Round Robin as the optimal choice for enhancing performance in the tested setting. In sharp contrast, despite processing more data overall and completing more iterations, the IP Hash algorithm exhibits a distinctly different profile: significantly reduced speeds, prolonged waiting periods, and an increased frequency of request failures. These deficiencies point to constraints in providing expeditious and prompt service, a crucial factor for consumers in a rapidly evolving digital environment.
The information presented in Table 3 is explicit and without significant ambiguity. Owing to its demonstrated dependability and effectiveness in managing key activities, the round-robin scheduling algorithm is unequivocally favored for optimizing performance within the context under examination. Nevertheless, precise operational demands and use circumstances should still be considered, since these factors may require a more intricate decision when deploying load-balancing solutions; further investigation of such situations is needed to offer complete and customized recommendations for selecting an ideal load balancer. The results in Table 3 highlight the Round Robin method's superior performance compared to the IP Hash alternative across several essential criteria, with exceptional proficiency in data transmission speed, request duration, and the efficient execution of iterations. Round Robin's consistently excellent performance in these crucial areas makes it a tempting option for optimizing load balancing within the dataset under evaluation.
The IP Hash method demonstrates notable strengths in specific measures such as request blocking and connection delays. However, the Round Robin algorithm emerges as the more advantageous option when examining the overall performance profile. The selection between the two algorithms is contingent upon the particular priorities and exigencies of the given use case, since each method possesses distinct strengths and trade-offs.
Based on the extensive data provided, it can be concluded that the Round Robin algorithm delivers superior efficiency across all dimensions, rendering it a highly appealing alternative for enterprises aiming to optimize their load-balancing techniques. Nonetheless, the choice of algorithm should follow the unique performance goals and operational limitations of the given context, emphasizing the significance of customized approaches in the load-balancing domain. Additional inquiry and contextual analysis can further refine this judgment.

IV. Conclusion
Through our extensive examination of the IP Hash and Round Robin algorithms, we have gained significant insights that can contribute to future research and provide practical guidance for implementation. Concerning data transfer rates, the IP Hash method demonstrated a marginal superiority based on the average outcomes of the conducted tests. Nevertheless, Round Robin proved to be the more reliable option, especially in managing HTTP requests and secure connections. The stability and dependability of Round Robin were further emphasized during stress testing, as it continually surpassed IP Hash across several performance parameters, exhibiting better connection times, request durations, and overall efficiency in managing HTTP requests. In the breakpoint test, the competition between the two algorithms was more evenly balanced: both handled comparable data volumes, but Round Robin exhibited superior data transmission rates. Although IP Hash performed better in request blocking and connection delays, Round Robin again showed its proficiency in the crucial realm of HTTP request handling and iteration processing rates. Examining the collective results of the three tests, the Round Robin algorithm exhibits superior performance, consistency, and reliability compared to the IP Hash method. Although IP Hash showed capabilities in certain areas, Round Robin consistently beat it across a broader range of performance criteria.
When businesses or entities are confronted with the decision between these two algorithms in prospective research, Round Robin merits significant consideration for its equitable and efficient performance, which is particularly valuable when consistency and reliability are paramount. Nevertheless, individual use cases and distinct requirements may influence the final selection. Hence, future research should investigate these particular cases more comprehensively to offer additional insights and recommendations for algorithm selection and implementation.

Figure 2
Figure 2 visually depicts the outcomes derived from the average-load testing, illustrating our findings. The testing scenario portrays an environment with a typical workload, closely resembling the load balancer's actual use during regular weekdays. It also illustrates a moderate work duration, providing insight into the time required to handle and allocate incoming requests effectively.

Table 1 .
Test results in average-load testing

Table 2 .
Test results of stress testing

Table 3 .
Breakpoint testing results