...
This section documents the results of performance testing of the cluster ingress load balancer.
Cluster configuration
The cluster used for testing is the original IASSIST cluster. This uses an m1.medium (2 CPUs, 4 GB RAM) for the load balancer and 4 compute nodes.
Baseline service: Nginx
This test uses the nginx-ingress-controller as the load balancer and a simple Nginx webserver as the backend service. An ingress rule was created manually to map perf-nginx.iassist.ndslabs.org to the backend service.
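For reference, a minimal sketch of such an ingress rule follows; the resource name, backend service name, and service port are assumptions, not the exact rule used in this test:

```
# Sketch of the manually created ingress rule (extensions/v1beta1 was the
# Ingress API version of this era). Names and the service port are assumptions.
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: perf-nginx
spec:
  rules:
  - host: perf-nginx.iassist.ndslabs.org
    http:
      paths:
      - path: /
        backend:
          serviceName: perf-nginx
          servicePort: 80
EOF
```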
Load generation: boom
Use the boom load test generator, running on a Nebula m1.large VM (8 VCPUs), to scale up concurrent requests. The following script calls boom with an increasing number of concurrent requests (-c from 100 to 1000) while also increasing the total number of requests (-n from 1000 to 10000).
```
for i in `seq 1 10`; do
  con=$((100*$i))
  req=$((1000*$i))
  echo "bin/boom -cpus 4 -n $req -c $con http://perf-nginx.iassist.ndslabs.org/"
  bin/boom -cpus 4 -n $req -c $con http://perf-nginx.iassist.ndslabs.org/
  sleep 1
done
```
Measuring latency and resource usage
Measuring latency: boom
The boom utility produces response time output, including a summary of the average response time as well as the distribution of response times and latencies. For example:
```
bin/boom -cpus 4 -n 8000 -c 800 http://perf-nginx.iassist.ndslabs.org/

Summary:
  Total:        3.4305 secs
  Slowest:      3.0162 secs
  Fastest:      0.0009 secs
  Average:      0.1335 secs
  Requests/sec: 2332.0068

Status code distribution:
  [200] 8000 responses

Response time histogram:
  0.001 [1]    |
  0.302 [7093] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.604 [371]  |∎∎
  0.906 [4]    |
  1.207 [471]  |∎∎
  1.509 [28]   |
  1.810 [0]    |
  2.112 [0]    |
  2.413 [0]    |
  2.715 [0]    |
  3.016 [32]   |

Latency distribution:
  10% in 0.0111 secs
  25% in 0.0183 secs
  50% in 0.0305 secs
  75% in 0.0554 secs
  90% in 0.3304 secs
  95% in 1.0200 secs
  99% in 1.1293 secs
```
Measuring latency: netperf
The netperf tool can be used to measure latency and throughput to services running inside the Kubernetes cluster.
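As an illustration, one way to run such a measurement is sketched below; the image and pod/service names are assumptions, not the exact setup used here:

```
# Illustrative sketch only: image names and pod/service names are assumptions.
# Start a netserver pod and expose it inside the cluster (12865 is the
# standard netserver control port).
kubectl run netperf-server --image=<netperf-image> --port=12865
kubectl expose pod netperf-server --port=12865

# TCP request/response test: measures round-trip latency to the service.
kubectl run netperf-client --rm -it --image=<netperf-image> -- \
  netperf -H netperf-server -t TCP_RR

# TCP stream test: measures bulk throughput to the service.
kubectl run netperf-client --rm -it --image=<netperf-image> -- \
  netperf -H netperf-server -t TCP_STREAM
```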
Measuring CPU/Memory/IO utilization
Results
Concurrent connections
Scaling services
Large-file upload/download
Below is a plot of average response time with increasing concurrent requests (-n 1000 requests) and replicas. Average response times increase as the number of concurrent requests increases, but remain below 1 second. Adding more replicas has no apparent effect, suggesting that the response time is governed by the ingress load balancer, not the backend service.
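Replicas were presumably added with something along these lines; the replication controller name is an assumption:

```
# Scale the backend service to 4 replicas; the RC name is an assumption.
kubectl scale rc perf-nginx --replicas=4
```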
Below is a plot of the latency distribution at 25%, 50%, 75%, and 95% of requests with increasing concurrent connections. Up to 1000 concurrent connections, 75% of requests have latency below 0.1 seconds. Starting around 200 concurrent requests, the slowest 5% of requests show increasing latency, up to 1 second.
Measuring CPU/Memory utilization
Memory and CPU utilization were measured using pidstat. The nginx ingress controller ran two worker processes during this test, labeled proc1 and proc2 in the tables below.
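The following is a sketch of the kind of pidstat invocation that produces these statistics (the exact invocation used is not recorded here):

```
# Sample CPU (-u) and memory (-r) statistics once per second for processes
# whose command name matches "nginx". -u yields %usr/%system/%guest/%CPU;
# -r yields minflt/s, majflt/s, VSZ, RSS, and %MEM, matching the tables below.
pidstat -u -r -C nginx 1
```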
CPU utilization
The following table reports CPU utilization for each process during the boom test. %CPU peaks at 12%.
Time | %usr (proc1) | %usr (proc2) | %system (proc1) | %system (proc2) | %guest (proc1) | %guest (proc2) | %CPU (proc1) | %CPU (proc2)
---|---|---|---|---|---|---|---|---
15:56:10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
15:56:11 | 0 | 6 | 0 | 6 | 0 | 0 | 0 | 12 |
15:56:12 | 3 | 0 | 3 | 0 | 0 | 0 | 6 | 0 |
15:56:13 | 3 | 0 | 3 | 0 | 0 | 0 | 6 | 0 |
15:56:14 | 5 | 0 | 5 | 0 | 0 | 0 | 10 | 0 |
15:56:15 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:16 | 4 | 0 | 4 | 0 | 0 | 0 | 8 | 0 |
15:56:17 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:18 | 5 | 0 | 6 | 0 | 0 | 0 | 11 | 0 |
15:56:19 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 |
15:56:20 | 2 | 0 | 4 | 0 | 0 | 0 | 6 | 0 |
15:56:21 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 |
15:56:22 | 3 | 0 | 4 | 0 | 0 | 0 | 7 | 0 |
15:56:23 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15:56:24 | 0 | 4 | 0 | 5 | 0 | 0 | 0 | 9 |
15:56:25 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 2 |
15:56:26 | 0 | 5 | 1 | 6 | 0 | 0 | 1 | 11 |
15:56:27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
15:56:28 | 4 | 0 | 6 | 0 | 0 | 0 | 10 | 0 |
15:56:29 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Memory utilization
The following table reports memory utilization for each process during the boom test. %MEM remains relatively stable throughout the test.
Time | minflt/s (proc1) | minflt/s (proc2) | majflt/s (proc1) | majflt/s (proc2) | VSZ kB (proc1) | VSZ kB (proc2) | RSS kB (proc1) | RSS kB (proc2) | %MEM (proc1) | %MEM (proc2)
---|---|---|---|---|---|---|---|---|---|---
15:56:52 | 0 | 0 | 0 | 0 | 326132 | 325992 | 15208 | 15068 | 0.38 | 0.37 |
15:56:53 | 0 | 0 | 0 | 0 | 326132 | 325992 | 15208 | 15068 | 0.38 | 0.37 |
15:56:54 | 3 | 0 | 0 | 0 | 326132 | 325992 | 15208 | 15068 | 0.38 | 0.37 |
15:56:55 | 29 | 299 | 0 | 0 | 325328 | 325992 | 14404 | 15068 | 0.36 | 0.37 |
15:56:56 | 0 | 477 | 0 | 0 | 325328 | 327576 | 14404 | 16360 | 0.36 | 0.4 |
15:56:57 | 0 | 0 | 0 | 0 | 325328 | 325768 | 14404 | 14844 | 0.36 | 0.37 |
15:56:58 | 0 | 648 | 0 | 0 | 325328 | 328416 | 14404 | 17216 | 0.36 | 0.42 |
15:56:59 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
15:57:00 | 0 | 1021 | 0 | 0 | 325328 | 329420 | 14404 | 18360 | 0.36 | 0.45 |
15:57:01 | 0 | 0 | 0 | 0 | 325328 | 326140 | 14404 | 15216 | 0.36 | 0.38 |
15:57:02 | 0 | 0 | 0 | 0 | 325328 | 326140 | 14404 | 15216 | 0.36 | 0.38 |
15:57:03 | 0 | 630 | 0 | 0 | 325328 | 326764 | 14404 | 15840 | 0.36 | 0.39 |
15:57:04 | 0 | 0 | 0 | 0 | 325328 | 325808 | 14404 | 14884 | 0.36 | 0.37 |
15:57:05 | 0 | 1002 | 0 | 0 | 325328 | 329908 | 14404 | 18840 | 0.36 | 0.46 |
15:57:06 | 0 | 47 | 0 | 0 | 325328 | 325628 | 14404 | 14704 | 0.36 | 0.36 |
15:57:07 | 1 | 1275 | 0 | 0 | 325328 | 330784 | 14404 | 19716 | 0.36 | 0.49 |
15:57:08 | 0 | 0 | 0 | 0 | 325328 | 325884 | 14404 | 14960 | 0.36 | 0.37 |
15:57:09 | 0 | 1502 | 0 | 0 | 325328 | 331960 | 14404 | 20756 | 0.36 | 0.51 |
15:57:10 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
15:57:11 | 0 | 1258 | 0 | 0 | 325328 | 329128 | 14404 | 18204 | 0.36 | 0.45 |
15:57:12 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
15:57:13 | 0 | 0 | 0 | 0 | 325328 | 325328 | 14404 | 14404 | 0.36 | 0.36 |
Killing the load balancer
After running kubectl delete pod on the nginx-ilb pod, the pod remains in a terminating state for ~30 seconds. During this time, the replication controller creates a replacement pod, but it stays in a pending state for the same ~30-second period. Some requests are still served, but there is a risk of ~30 seconds of downtime between pod restarts. This may be related to the shutdown of the default-http-backend, but that is not clear.
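A minimal way to reproduce the observation, assuming the load-balancer pod is named nginx-ilb as in this test:

```
# Delete the load-balancer pod in the background, then watch pod state
# transitions: the old pod sits in Terminating while the replication
# controller's replacement pod sits in Pending for roughly 30 seconds.
kubectl delete pod nginx-ilb &
kubectl get pods -w
```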
...