Prototype status:
- Working nginx LB with Kubernetes ingress controller integration
- LB runs under Kubernetes as a system service
- Instructions/test harnesses in
- The LB is unopinionated: it works at the system level with any K8s service, as long as the service conforms to the standard K8s network model. The requirements below are specific to NDSLabs test-drive/workbench, but the LB is general-purpose and supports test-drive/workbench provided they are standard K8s services (assumed to be true).
- Tested with vhost and path routing (basic testing, not thorough)
- Ingress interface based on K8s 1.2.0-alpha release - needs update
- Vhost/path routing verified
Tasks required for Production Deployments:
- Test LB prototype with test-drive interfaces/specs - path-based for odum (NDS-239)
- Update Go dependencies/ingress API to the current production release of Kubernetes; currently based on 1.2.0-alpha, while the current release (as of 2016-05-02) is 1.2.3
- Should evaluate the diff between 1.2.3 and 1.3.0-alpha and pick appropriately for the future (NDS-240)
- Update the load balancer build - `go build` produces a static binary.
- Build should produce an image from alpine with net-tools and the single static binary.
Info on golang:onbuild images is here: https://hub.docker.com/_/golang/ (NDS-241)
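A minimal sketch of the proposed build, assuming the controller binary is named nginx-ingress-lb (the binary name, base image tag, and paths are assumptions, not from this document):

```dockerfile
# Build the static binary first, e.g.:
#   CGO_ENABLED=0 go build -a -o nginx-ingress-lb
# Then produce a small image from alpine with net-tools and the single binary.
FROM alpine:3.3
RUN apk add --no-cache net-tools
COPY nginx-ingress-lb /nginx-ingress-lb
ENTRYPOINT ["/nginx-ingress-lb"]
```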
- Addressing startup
- Label the LB node such that the LB pod deploys there, and add anti-affinity to the label/scheduler/system to avoid scheduling other pods on the LB node.
i.e., the ingress-lb should always be the only thing running on the LB node (NDS-242)
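The node-labeling step above might look like the following sketch. The node name, label value, and image tag are assumptions (the document only establishes the ndslabs-role label key); keeping *other* pods off the node still requires a scheduler-level mechanism and is not shown here:

```yaml
# Label the dedicated LB node first (node name is an assumption):
#   kubectl label nodes lb-node-1 ndslabs-role=loadbalancer
# Then pin the ingress controller RC to that node via nodeSelector.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-lb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      nodeSelector:
        ndslabs-role: loadbalancer    # matches the kubectl label above
      containers:
      - name: nginx-ingress-lb
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3  # tag is an assumption
```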
Background
The NDS Labs "Workbench" service provides NDSC stakeholders with the ability to quickly launch and explore a variety of data management tools. Users select from a list of available services to configure and deploy instances of them. Workbench "services" are composed of a set of integrated Docker containers all deployed on a Kubernetes cluster.
- When a new project is created, if the admin anticipates needing remote access to non-HTTP services, a static IP address and CNAME are assigned to the project.
- The load balancer routes requests to services configured in Kubernetes. This means that the LB must be Namespace and service aware – which means monitoring Etcd or the Kubernetes API for changes.
- When a new HTTP service is added, load balancer config is updated to proxy via path
- If no CNAME
- paths are in the form: labs.nds.org/namespace/serviceId
- If CNAME
- paths are in the form namespace.labs.nds.org/serviceId
- When a new TCP service is added, load balancer config is updated to proxy via port – only if project has CNAME/IP:
- namespace.labs.nds.org:port
- For the GUI and API, paths are labs.nds.org/ and labs.nds.org/api, respectively
- Load balancer must be resilient - if restarted, the previous configuration is maintained, possibly via a failover configuration.
Preliminary Design
Based on the prototype, we will move forward with the Kubernetes ingress-based nginx load balancer model. The current version from the Kubernetes contrib repo works based on preliminary tests.
- Load balancer node: A VM node will serve as the dedicated load-balancer node and run the Nginx LB replication controller using node labels
- Nginx ingress controller: The nginx ingress controller is deployed as a replication controller
- DNS:
- "A" record points to load balancer node (e.g., test.ndslabs.org A 141.142.210.172)
- Per-cluster wildcard CNAME (e.g., "*.test.ndslabs.org. CNAME test.ndslabs.org")
- Per-service Ingress resource:
- For each exposed service endpoint, an ingress rule will be created
- host: <stack-service-id>-<namespace>.ndslabs.org
- path: "/"
- backend:
- serviceName: <service name>
- servicePort: <service port>
- These resources will be created/updated/deleted with the associated service
- The <stack-service-id> value in the host will be the stack service ID (e.g., srz4wj-clowder)
- Endpoints:
- For single-node and development instances, use NodePort
- For multi-node clusters, use LoadBalancer
- TLS:
- Wildcard certificate for each cluster (*.test.ndslabs.org)
- TCP support:
- The nginx controller supports access to TCP services using the ConfigMap resource. ConfigMap is simply a map of keys/values that contains the exposed port and the namespace/service:port. We will need to update the ConfigMap when services are added and removed. We will also need to handle assignment of ports. Unfortunately, the port assignments appear to be system-wide. It might be nice if we could assign ports within a host (i.e., in the Ingress rules), but this isn't possible today.
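The per-service Ingress resource and the TCP ConfigMap described above can be sketched as follows. The names, namespace, and port numbers are illustrative assumptions; the host follows the <stack-service-id>.domain form from this document:

```yaml
# Illustrative Ingress for one exposed service endpoint (K8s 1.2-era API).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: srz4wj-clowder
  namespace: srz4wj
spec:
  rules:
  - host: srz4wj-clowder.test.ndslabs.org
    http:
      paths:
      - path: /
        backend:
          serviceName: clowder   # service name is an assumption
          servicePort: 9000      # service port is an assumption
```

For TCP services, the ConfigMap maps a system-wide exposed port to namespace/service:port:

```yaml
# Illustrative ConfigMap for TCP services; the exposed port (5432) is
# cluster-wide, which is the limitation noted above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap
data:
  "5432": "srz4wj/postgres:5432"
```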
API Server
With the implementation of the associated Jira issue, the API Server now supports:
- A system-wide secret named "ndslabs-tls-secret" in the default namespace that contains the TLS certificate and key. If this secret exists, then the associated TLS cert/key are copied to per-project secrets ("<namespace>-tls-secret") during PostProject
- At startup, the API Server supports two "Ingress" types: NodePort or LoadBalancer, configurable in apiserver.conf or, for the docker image, via the INGRESS environment variable.
- If Ingress=LoadBalancer
- An ingress resource is created for every service with access=external during the stack startup
- The ingress resource consists of
- TLS via named secret
- Host in the form <stack-service-id>.domain
- Endpoint objects contain the ingress host
- If Ingress=NodePort
- Ingress rules are not created for stack services
- Endpoint objects retain the previous NodePort and protocol information
- The API Server has also been changed to support a configurable Domain (apiserver.conf, or the DOMAIN environment variable in Docker). This domain is used to construct the ingress rules.
- Pods have been modified to include ndslabs-role=compute (NDS-264)
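The two configuration paths described above might look like the following sketch when running the API Server as a Docker image; the image name is an assumption, while the INGRESS and DOMAIN variable names come from the text:

```shell
# Configure ingress type and domain via environment variables
# (image name "ndslabs/apiserver" is an assumption):
docker run -e INGRESS=LoadBalancer -e DOMAIN=test.ndslabs.org ndslabs/apiserver
```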