
Prototype status:

  • Working nginx LB with Kubernetes ingress controller integration
  • LB runs under Kubernetes as a system service
  • Instructions/test harnesses in
  • The LB is unopinionated: it works at the system level with any K8s service that conforms to the standard K8s network model. The requirements below are specific to NDSLabs test-drive/workbench, but the LB is general-purpose; it supports test-drive/workbench provided they are standard K8s services, which is assumed to be true.
  • Vhost and path routing verified (basic testing only, not yet thorough)
  • Ingress interface is based on the K8s 1.2.0-alpha release and needs updating

...

  • Load balancer node: A dedicated VM node will serve as the load-balancer node and run the Nginx LB replication controller, pinned there via node labels
  • Nginx ingress controller: The nginx ingress controller is deployed as a replication controller
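Pinning the controller to the dedicated node could look roughly like the following replication controller sketch; the label key/value, image tag, and ports are illustrative assumptions, not the actual deployment:

```yaml
# Sketch: nginx ingress controller RC pinned to the LB node via a node label.
# The nodeSelector label, image tag, and ports are hypothetical.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    app: nginx-ingress-lb
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      nodeSelector:
        ndslabs-role: loadbalancer   # label applied to the dedicated LB node
      containers:
      - name: nginx-ingress-lb
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
```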
  • DNS:
    • "A" record points to load balancer node (e.g., test.ndslabs.org A 141.142.210.172)
    • Per-project cluster wildcard CNAME (e.g., "*.demotest.ndslabs.org. CNAME test.ndslabs.org)
  • Per-service Ingress resource:  
    • For each exposed service endpoint, an ingress rule will be created 
      • host: <service>.<stack-service-id>-<namespace>.ndslabs.org
      • path: "/"
      • backend:
        • serviceName: <service name>
        • servicePort: <service port>
    • These resources will be created/updated/deleted with the associated service
    • The <service> value in the host will be the stack service ID (e.g., srz4wj-clowder)
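Under the naming scheme above, a per-service Ingress resource might look like this sketch. The API version matches the 1.2.0-alpha era noted earlier; the host, namespace, service name, and port are illustrative values, not a confirmed instance of the host template:

```yaml
# Hypothetical Ingress for stack service "srz4wj-clowder"
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: srz4wj-clowder
  namespace: demo
spec:
  rules:
  # host follows the <service>.<stack-service-id>-<namespace> template above;
  # the exact expansion here is an assumption
  - host: srz4wj-clowder.demo.ndslabs.org
    http:
      paths:
      - path: /
        backend:
          serviceName: clowder
          servicePort: 9000
```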
  • GUI/CLI: Change the generated endpoint URLs from NodePort URLs to the LB URL
  • Endpoints:
    • For single-node and development instances, use NodePort
    • For multi-node cluster, use LB
  • TLS: 
    • Wildcard certificate for each cluster (*.test.ndslabs.org)
    TLS:  Add TLS termination support
  • TCP support:
    • The nginx controller supports access to TCP services via a ConfigMap resource: a map whose keys are the exposed ports and whose values are the target namespace/service:port. We will need to update this ConfigMap as services are added and removed, and also handle assignment of the exposed ports. Unfortunately, the port assignments appear to be system-wide. It would be nice to assign ports within a host (i.e., in the Ingress rules), but this isn't possible today.
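As a sketch, the TCP ConfigMap described above maps an exposed port to a namespace/service:port target; the port number, namespace, and service name here are illustrative:

```yaml
# Hypothetical tcp-services ConfigMap: exposed port -> namespace/service:port
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap
data:
  "2222": demo/gitserver:22   # expose SSH on system-wide port 2222
```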

API Server

The API server will need to know the following:

  • Path to TLS cert/key
  • Cluster domain name
  • Whether to use NodePort or LoadBalancer
  • Label for compute nodes (NDS-264)
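These settings might surface as an API server configuration fragment along the following lines; all key names, paths, and values are hypothetical:

```yaml
# Hypothetical API server settings; keys and values are illustrative
tls-cert: /etc/ndslabs/tls/cert.pem
tls-key: /etc/ndslabs/tls/key.pem
domain: test.ndslabs.org
ingress-type: LoadBalancer                 # or NodePort for single-node/dev instances
compute-node-label: ndslabs-role=compute   # see NDS-264
```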