Prototype status:
- Working nginx LB with kubernetes ingress controller integration
- LB runs under kubernetes as a system-service
- Instructions/test harnesses in
- The LB is unopinionated: it works at the system level with any Kubernetes service that conforms to the standard K8s network model. The requirements below are specific to NDSLabs test-drive/workbench, but the LB itself is general-purpose; it supports test-drive/workbench under the assumption (believed to be true) that they are standard K8s services.
- Tested with vhost and path routing (basic testing, not thorough)
- Ingress interface is based on the K8s 1.2.0-alpha release and needs updating
- Vhost/path routing verified
...
- Load balancer node: A VM node will serve as the dedicated load-balancer node and run the Nginx LB replication controller using node labels
- Nginx ingress controller: The nginx ingress controller is deployed as a replication controller
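As a sketch, the replication controller could be pinned to the dedicated load-balancer node with a nodeSelector matching the node label. The image tag, label key, and ports below are illustrative assumptions, not values from the actual deployment:

```yaml
# Hypothetical sketch: run the nginx ingress controller on the node
# labeled as the load-balancer node. Image, label key, and ports are
# assumptions for illustration only.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    app: nginx-ingress-lb
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      nodeSelector:
        ndslabs-role: loadbalancer   # assumed node label
      hostNetwork: true              # bind directly to the node's 80/443
      containers:
      - name: nginx-ingress-lb
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3  # tag is an assumption
        ports:
        - containerPort: 80
        - containerPort: 443
```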
- DNS:
- "A" record points to load balancer node (e.g., test.ndslabs.org A 141.142.210.172)
- Per-project cluster wildcard CNAME (e.g., "*.demotest.ndslabs.org. CNAME test.ndslabs.org")
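In BIND zone-file form, the two records above would look roughly like:

```
test.ndslabs.org.        IN A     141.142.210.172
*.demotest.ndslabs.org.  IN CNAME test.ndslabs.org.
```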
- Per-service Ingress resource:
- For each exposed service endpoint, an ingress rule will be created
- host: <service>.<stack-service-id>-<namespace>.ndslabs.org
- path: "/"
- backend:
- serviceName: <service name>
- servicePort: <service port>
- These resources will be created/updated/deleted with the associated service
- The <service> value in the host will be the stack service ID (e.g., srz4wj-clowder)
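Putting the fields above together, a per-service Ingress resource might look like the following sketch (extensions/v1beta1 per the K8s 1.2-era API; the host, namespace, service name, and port are illustrative examples, not actual deployment values):

```yaml
# Illustrative per-service Ingress resource; host and backend values
# are examples only.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: srz4wj-clowder-ingress
  namespace: demotest            # example project namespace
spec:
  rules:
  - host: srz4wj-clowder.test.ndslabs.org   # <stack-service-id>.<domain>
    http:
      paths:
      - path: /
        backend:
          serviceName: clowder   # example service name
          servicePort: 9000      # example service port
```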
- GUI/CLI: Instead of NodePort URLs, change to use the LB URL
- Endpoints:
- For single-node and development instances, use NodePort
- For multi-node cluster, use LB
- TLS:
- Wildcard certificate for each cluster (*.test.ndslabs.org)
- TCP support:
- The nginx controller supports access to TCP services using the ConfigMap resource. The ConfigMap is simply a map of keys/values relating the exposed port to the namespace/service:port target. We will need to update the ConfigMap as services are added and removed, and we will also need to handle the assignment of ports. Unfortunately, the port assignments appear to be system-wide. It would be preferable to assign ports within a host (i.e., in the Ingress rules), but this isn't possible today.
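As a sketch, the TCP-services ConfigMap maps each system-wide exposed port to a namespace/service:port target. The ConfigMap name, namespaces, services, and ports below are illustrative:

```yaml
# Illustrative TCP-services ConfigMap for the nginx ingress controller.
# Keys are the system-wide exposed ports; values are namespace/service:port.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap            # name is an assumption
data:
  "9000": "demotest/clowder:9000"    # example mappings, not actual
  "5432": "demotest/postgres:5432"   # port assignments
```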
API Server
With the implementation of
- A system-wide secret named "ndslabs-tls-secret" on the default namespace that contains the TLS certificate and key. If this secret exists, then the associated TLS cert/key are copied to per-project secrets ("<namespace>-tls-secret") during PostProject
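A sketch of the system-wide TLS secret (the secret name and namespace are from the description above; the certificate and key data are placeholders):

```yaml
# System-wide TLS secret holding the wildcard cert/key; the data values
# here are placeholders for base64-encoded PEM content.
apiVersion: v1
kind: Secret
metadata:
  name: ndslabs-tls-secret
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded wildcard certificate>
  tls.key: <base64-encoded private key>
```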
- At startup, the API Server supports two "Ingress" types: NodePort or LoadBalancer, configurable in apiserver.conf or, for the docker image, via the INGRESS environment variable.
- If Ingress=LoadBalancer
- An ingress resource is created for every service with access=external during the stack startup
- The ingress resource consists of
- TLS via named secret
- Host in the form <stack-service-id>.domain
- Endpoint objects contain the ingress host
- If Ingress=NodePort
- Ingress rules are not created for stack services
- Endpoint objects retain the previous NodePort and protocol information
- The API Server has also been changed to support a configurable Domain, set in apiserver.conf or via the DOMAIN environment variable in Docker. This domain is used to construct the ingress rules.
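For example, the two settings might appear together as a configuration fragment. The key names below follow the settings described above but are assumptions; they may not match the actual apiserver.conf syntax:

```
# Hypothetical apiserver.conf fragment (key names are assumptions)
ingress=LoadBalancer
domain=test.ndslabs.org
```

The same values would be supplied to the Docker image via the INGRESS and DOMAIN environment variables.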
- Pods have been modified to include ndslabs-role=compute (NDS-264)