Load Balancer
This section documents the results of NDS-239. The goal of this ticket was to determine whether the Nginx ingress controller would be a performance bottleneck for the NDS Labs system.
Baseline service: Nginx
This test uses the nginx-ingress-controller as the load balancer and a simple Nginx web server as the backend service. An ingress rule was created manually to map perf-nginx.cluster.ndslabs.org to the backend service.
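For reference, an equivalent ingress rule could be created with a manifest along the following lines. This is a minimal sketch, assuming the extensions/v1beta1 Ingress API in use on this cluster and a backend service named perf-nginx exposing port 80; the actual service name and port used in the test may differ.

# Sketch only: the service name, port, and API version are assumptions.
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: perf-nginx
spec:
  rules:
  - host: perf-nginx.cluster.ndslabs.org
    http:
      paths:
      - path: /
        backend:
          serviceName: perf-nginx
          servicePort: 80
EOF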
Load generation: boom
Use the boom load test generator to scale up concurrent requests from a Nebula m1.large VM (8 VCPUs). The following script calls boom with an increasing number of concurrent requests (-c from 100 to 1000) while also increasing the total number of requests (-n from 1000 to 10000).
for i in `seq 1 10`; do
  con=$((100*$i))
  req=$((1000*$i))
  echo "bin/boom -cpus 4 -n $req -c $con http://perf-nginx.iassist.ndslabs.org/"
  bin/boom -cpus 4 -n $req -c $con http://perf-nginx.iassist.ndslabs.org/
  sleep 1
done
Measuring latency and resource usage
Measuring latency: boom
The boom utility reports response-time statistics for each run, including a summary (total, fastest, slowest, and average response time, and requests per second), the status code distribution, a response time histogram, and the latency distribution.
bin/boom -cpus 4 -n 8000 -c 800 http://perf-nginx.iassist.ndslabs.org/

Summary:
  Total:        3.4305 secs
  Slowest:      3.0162 secs
  Fastest:      0.0009 secs
  Average:      0.1335 secs
  Requests/sec: 2332.0068

Status code distribution:
  [200] 8000 responses

Response time histogram:
  0.001 [1]    |
  0.302 [7093] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.604 [371]  |∎∎
  0.906 [4]    |
  1.207 [471]  |∎∎
  1.509 [28]   |
  1.810 [0]    |
  2.112 [0]    |
  2.413 [0]    |
  2.715 [0]    |
  3.016 [32]   |

Latency distribution:
  10% in 0.0111 secs
  25% in 0.0183 secs
  50% in 0.0305 secs
  75% in 0.0554 secs
  90% in 0.3304 secs
  95% in 1.0200 secs
  99% in 1.0767 secs
Below is a plot of average response time with increasing concurrent requests (-n 1000 requests) and replicas. Average response times increase as the number of concurrent requests increases, but remain below 1 second. Adding more replicas has no apparent effect, suggesting that the response time is governed by the ingress load balancer, not the backend service.
Below is a plot of the latency distribution at 25%, 50%, 75%, and 95% of requests with increasing concurrent connections. Up to 1000 concurrent connections, 75% of requests have latency below 0.1 seconds. Starting around 200 concurrent requests, the slowest 5% of requests show increasing latency, reaching roughly 1 second.
Measuring CPU/Memory utilization
Memory and CPU utilization were measured using pidstat. The nginx ingress controller ran two worker processes in this test, labeled proc1 and proc2 in the tables below.
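As a rough illustration, the per-process samples could be collected with something like the following, assuming the two worker PIDs have been looked up inside the ingress controller pod (the pod name and PIDs here are placeholders, not the values used in the test).

# Find the nginx worker PIDs inside the ingress controller pod (pod name is a placeholder).
kubectl exec nginx-ingress-controller-xxxxx -- ps -ef | grep "nginx: worker"

# Sample CPU (-u) and memory (-r) usage for both workers once per second for 60 seconds.
pidstat -u -r -p 1234,1235 1 60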
CPU utilization
The following table reports CPU utilization for each process during the boom test. %CPU peaks at 12%.
Time | %usr proc1 | %usr proc2 | %system proc1 | %system proc2 | %guest proc1 | %guest proc2 | %CPU proc1 | %CPU proc2 |
15:56:10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
15:56:11 | 0 | 6 | 0 | 6 | 0 | 0 | 0 | 12 |
15:56:12 | 3 | 0 | 3 | 0 | 0 | 0 | 6 | 0 |
15:56:13 | 3 | 0 | 3 | 0 | 0 | 0 | 6 | 0 |
15:56:14 | 5 | 0 | 5 | 0 | 0 | 0 | 10 | 0 |
15:56:15 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:16 | 4 | 0 | 4 | 0 | 0 | 0 | 8 | 0 |
15:56:17 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:18 | 5 | 0 | 6 | 0 | 0 | 0 | 11 | 0 |
15:56:19 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 |
15:56:20 | 2 | 0 | 4 | 0 | 0 | 0 | 6 | 0 |
15:56:21 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 |
15:56:22 | 3 | 0 | 4 | 0 | 0 | 0 | 7 | 0 |
15:56:23 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15:56:24 | 0 | 4 | 0 | 5 | 0 | 0 | 0 | 9 |
15:56:25 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 2 |
15:56:26 | 0 | 5 | 1 | 6 | 0 | 0 | 1 | 11 |
15:56:27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
15:56:28 | 4 | 0 | 6 | 0 | 0 | 0 | 10 | 0 |
15:56:29 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Memory utilization
The following table reports memory utilization for each process during the boom test. %MEM remains relatively stable throughout the test.
Time | minflt/s proc1 | minflt/s proc2 | majflt/s proc1 | majflt/s proc2 | VSZ (KB) proc1 | VSZ (KB) proc2 | RSS (KB) proc1 | RSS (KB) proc2 | %MEM proc1 | %MEM proc2 |
15:56:52 | 0 | 0 | 0 | 0 | 326132 | 325992 | 15208 | 15068 | 0.38 | 0.37 |
15:56:53 | 0 | 0 | 0 | 0 | 326132 | 325992 | 15208 | 15068 | 0.38 | 0.37 |
15:56:54 | 3 | 0 | 0 | 0 | 326132 | 325992 | 15208 | 15068 | 0.38 | 0.37 |
15:56:55 | 29 | 299 | 0 | 0 | 325328 | 325992 | 14404 | 15068 | 0.36 | 0.37 |
15:56:56 | 0 | 477 | 0 | 0 | 325328 | 327576 | 14404 | 16360 | 0.36 | 0.4 |
15:56:57 | 0 | 0 | 0 | 0 | 325328 | 325768 | 14404 | 14844 | 0.36 | 0.37 |
15:56:58 | 0 | 648 | 0 | 0 | 325328 | 328416 | 14404 | 17216 | 0.36 | 0.42 |
15:56:59 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
15:57:00 | 0 | 1021 | 0 | 0 | 325328 | 329420 | 14404 | 18360 | 0.36 | 0.45 |
15:57:01 | 0 | 0 | 0 | 0 | 325328 | 326140 | 14404 | 15216 | 0.36 | 0.38 |
15:57:02 | 0 | 0 | 0 | 0 | 325328 | 326140 | 14404 | 15216 | 0.36 | 0.38 |
15:57:03 | 0 | 630 | 0 | 0 | 325328 | 326764 | 14404 | 15840 | 0.36 | 0.39 |
15:57:04 | 0 | 0 | 0 | 0 | 325328 | 325808 | 14404 | 14884 | 0.36 | 0.37 |
15:57:05 | 0 | 1002 | 0 | 0 | 325328 | 329908 | 14404 | 18840 | 0.36 | 0.46 |
15:57:06 | 0 | 47 | 0 | 0 | 325328 | 325628 | 14404 | 14704 | 0.36 | 0.36 |
15:57:07 | 1 | 1275 | 0 | 0 | 325328 | 330784 | 14404 | 19716 | 0.36 | 0.49 |
15:57:08 | 0 | 0 | 0 | 0 | 325328 | 325884 | 14404 | 14960 | 0.36 | 0.37 |
15:57:09 | 0 | 1502 | 0 | 0 | 325328 | 331960 | 14404 | 20756 | 0.36 | 0.51 |
15:57:10 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
15:57:11 | 0 | 1258 | 0 | 0 | 325328 | 329128 | 14404 | 18204 | 0.36 | 0.45 |
15:57:12 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
15:57:13 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
Killing the load balancer
Running kubectl delete pod on the nginx-ilb pod leaves the running pod in a terminating state for ~30 seconds. During this time, the replication controller creates a replacement pod, but it remains in a pending state for the same ~30-second period. Some requests are still handled, but there is a risk of ~30 seconds of downtime between pod restarts. This may be related to the shutdown of the default-http-backend, but that isn't clear.
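One way to reproduce this observation is to delete the load balancer pod while continuously probing the ingress endpoint, for example with a loop along these lines (the pod name below is a placeholder, not the actual pod from the test).

# Probe the ingress once per second and log the HTTP status code.
while true; do
  code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 2 http://perf-nginx.iassist.ndslabs.org/)
  echo "$(date +%T) $code"
  sleep 1
done &

# In another shell: delete the load balancer pod (name is a placeholder)
# and watch the replacement pod get scheduled.
kubectl delete pod nginx-ilb-xxxxx
kubectl get pods --watch

Any window in which the probe stops returning 200 responses (or fails to connect) corresponds to the downtime between pod restarts described above.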