Load Balancer
This section documents the results of NDS-239. The goal of this ticket was to determine whether the Nginx ingress controller would be a performance bottleneck for the NDS Labs system.
Baseline service: Nginx
Load generation: boom
Use the boom load test generator to scale up concurrent requests on a Nebula m1.medium VM:
for i in `seq 1 10`; do
    req=$((100*$i))
    echo "bin/boom -cpus 4 -n 1000 -c $req http://perf-nginx.iassist.ndslabs.org/"
    bin/boom -cpus 4 -n 1000 -c $req http://perf-nginx.iassist.ndslabs.org/
    sleep 1
done
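Here -n sets the total number of requests per run, -c the number of concurrent clients, and -cpus the number of cores boom may use; the loop raises concurrency from 100 to 1,000 in steps of 100.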
Measuring latency and resource usage
Measuring latency: boom
Boom produces response-time output, for example:
bin/boom -cpus 4 -n 1000 -c 500 http://perf-nginx.iassist.ndslabs.org/

Summary:
  Total:        0.1539 secs
  Slowest:      0.1335 secs
  Fastest:      0.0193 secs
  Average:      0.0685 secs
  Requests/sec: 4842.2840

Status code distribution:
  [200] 745 responses

Response time histogram:
  0.019 [1]   |
  0.031 [28]  |∎∎∎∎∎∎
  0.042 [110] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.054 [69]  |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.065 [161] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.076 [157] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.088 [60]  |∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.099 [38]  |∎∎∎∎∎∎∎∎∎
  0.111 [49]  |∎∎∎∎∎∎∎∎∎∎∎∎
  0.122 [37]  |∎∎∎∎∎∎∎∎∎
  0.134 [35]  |∎∎∎∎∎∎∎∎

Latency distribution:
  10% in 0.0394 secs
  25% in 0.0502 secs
  50% in 0.0652 secs
  75% in 0.0808 secs
  90% in 0.1103 secs
  95% in 0.1217 secs
  99% in 0.1293 secs
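Note that at -c 500 the status code distribution accounts for only 745 of the 1,000 requests; the remainder either failed or were omitted from this excerpt, which is worth watching as concurrency is pushed higher.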
Measuring latency: netperf
Measure latency and throughput to services inside Kubernetes.
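The exact invocation is not recorded here; the following is a minimal sketch of how netperf could be pointed at the same endpoint, assuming a netserver instance is reachable behind the ingress (the image name is a placeholder):

# Run the netperf server inside the cluster (image name is a placeholder;
# 12865 is netperf's default control port):
kubectl run netperf-server --image=<netperf-image> --port=12865

# From the load-generation VM: TCP_RR approximates request/response latency,
# TCP_STREAM measures bulk throughput. Each test runs for 30 seconds.
netperf -H perf-nginx.iassist.ndslabs.org -t TCP_RR -l 30
netperf -H perf-nginx.iassist.ndslabs.org -t TCP_STREAM -l 30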
Measuring CPU/Memory/IO utilization
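The per-process figures in the table below match the columns reported by pidstat from the sysstat package; a minimal sketch of collecting them, assuming the two nginx process PIDs are known (placeholders here):

# Sample CPU (-u), memory (-r), and disk I/O (-d) for two processes
# once per second; the PIDs are placeholders:
pidstat -u -r -d -p <pid1>,<pid2> 1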
Results
Time | %usr proc1 | %usr proc2 | %system proc1 | %system proc2 | %guest proc1 | %guest proc2 | %CPU proc1 | %CPU proc2 |
15:56:10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
15:56:11 | 0 | 6 | 0 | 6 | 0 | 0 | 0 | 12 |
15:56:12 | 3 | 0 | 3 | 0 | 0 | 0 | 6 | 0 |
15:56:13 | 3 | 0 | 3 | 0 | 0 | 0 | 6 | 0 |
15:56:14 | 5 | 0 | 5 | 0 | 0 | 0 | 10 | 0 |
15:56:15 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:16 | 4 | 0 | 4 | 0 | 0 | 0 | 8 | 0 |
15:56:17 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:18 | 5 | 0 | 6 | 0 | 0 | 0 | 11 | 0 |
15:56:19 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 |
15:56:20 | 2 | 0 | 4 | 0 | 0 | 0 | 6 | 0 |
15:56:21 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 |
15:56:22 | 3 | 0 | 4 | 0 | 0 | 0 | 7 | 0 |
15:56:23 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15:56:24 | 0 | 4 | 0 | 5 | 0 | 0 | 0 | 9 |
15:56:25 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 2 |
15:56:26 | 0 | 5 | 1 | 6 | 0 | 0 | 1 | 11 |
15:56:27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
15:56:28 | 4 | 0 | 6 | 0 | 0 | 0 | 10 | 0 |
15:56:29 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
15:56:30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
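Across the sampled window, total %CPU for either process peaks at 12%, which suggests the ingress controller itself was not CPU-bound at these request rates.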
Concurrent connections
Scaling services
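One way to exercise this, as a sketch: vary the number of backend replicas and rerun the boom loop above. The resource name and type here are assumptions:

# Scale the backend to 3 replicas (replication controller name is an assumption):
kubectl scale rc perf-nginx --replicas=3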
Large-file upload/download
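A sketch of one way to time large transfers through the ingress with curl; the file name and upload endpoint are placeholders:

# Create a 1 GiB test file:
dd if=/dev/zero of=big.bin bs=1M count=1024

# Time a download and an upload through the ingress (endpoints are placeholders):
curl -o /dev/null -w 'download: %{time_total}s\n' http://perf-nginx.iassist.ndslabs.org/big.bin
curl -T big.bin -w 'upload: %{time_total}s\n' http://perf-nginx.iassist.ndslabs.org/upload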
Killing the load balancer
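One way to test failover, as a sketch: delete the ingress controller pod while the boom loop is running and watch for failed requests. The label selector is an assumption:

# Kill the ingress controller pod (label selector is an assumption):
kubectl delete pod -l app=nginx-ingress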