
Background

The NDS Labs "Workbench" service provides NDSC stakeholders with the ability to quickly launch and explore a variety of data management tools. Users select from a list of available services to configure and deploy instances of them. Workbench "services" are composed of a set of integrated Docker containers all deployed on a Kubernetes cluster. 

In the following screenshot, the user has started a single instance of Dataverse, which includes containers running Glassfish, PostgreSQL, Solr, Rserve, and TwoRavens (Apache + R). The user is attached to a Kubernetes namespace and can start instances of multiple different services:

[Screenshot: Workbench UI showing a running Dataverse instance and its containers]

Currently, remote access to running services is implemented using the Kubernetes "NodePort" mechanism. In essence, a given service (e.g., a webserver) is mapped to a cluster-wide port in a configured range (default 30000-32767). Remote users access the running service on the specified port. In the above screenshot, the Dataverse web interface is accessible via http://141.142.210.130:30233. This solution has worked for development purposes, but it is 1) not scalable and 2) difficult to secure. We are exploring options for providing scalable, secure access to running services in the NDS Labs workbench for tens to hundreds of users working with multiple instances of services, i.e., hundreds of service endpoints.
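For illustration, the NodePort mechanism described above amounts to a service definition along these lines (a minimal sketch; the names, selector, and ports are examples, not Workbench's actual configuration):

```yaml
# Illustrative NodePort service: exposes a container port on a
# cluster-wide port in the default 30000-32767 range.
apiVersion: v1
kind: Service
metadata:
  name: dataverse-web
  namespace: project1
spec:
  type: NodePort
  selector:
    app: dataverse
  ports:
    - port: 8080        # service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30233   # cluster-wide external port
```

Every such service consumes one port from the shared range on every node, which is why this approach stops scaling once the number of service endpoints approaches the size of the range.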

Requirements

Use case:  The workbench user (project administrator) configures a service via the workbench.  Once configured, external endpoints are accessible via TLS/SSL.

  • Ability for the user to securely access NDS Labs workbench services, which include web-based HTTP and TCP services.
  • Service endpoints are secured using TLS
  • Special handling for NDS Labs workbench API server and GUI requests, including CORS support
  • Resilient to failure

Options:

Path-based

  • Description: services accessed via URL + path, for example labs.nds.org/namespace/dataverse
  • Pros: single DNS entry for labs.nds.org; single SSL certificate; simple
  • Cons: only supports HTTP-based services; requires that every deployed service support a context path, or the load balancer must rewrite requests

Port-based

  • Description: services accessed via URL + port, for example labs.nds.org:33333
  • Pros: single DNS entry for labs.nds.org; single SSL certificate; simple
  • Cons: requires use of non-standard ports; possible port collisions if services are stopped and started across projects (i.e., I stop my stack and its port is freed; you start your stack and are assigned my port; my users now access your service); only scales to the number of available ports

CNAME

  • Description: services accessed via CNAME URL + path or port, for example project.labs.nds.org/dataverse or project.labs.nds.org:port
  • Pros: one DNS entry, IP address, and certificate per project (or possibly a wildcard certificate); supports both HTTP and TCP services; port collisions are confined to a project
  • Cons: requires an IP address per project; requires a DNS/CNAME request to neteng
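To make the path-based option's rewrite requirement concrete: if nginx were the load balancer, stripping the context path before proxying could look roughly like the fragment below (hostnames and paths are illustrative; Workbench may use a different proxy):

```nginx
# Hypothetical nginx fragment for the path-based option: strip the
# /namespace/service prefix before proxying, since most services do not
# expect to be served under a context path.
location /project1/dataverse/ {
    rewrite ^/project1/dataverse/(.*)$ /$1 break;
    proxy_pass http://dataverse.project1.svc.cluster.local:8080;
    proxy_set_header Host $host;
}
```

A block like this would need to be generated per service, which is what makes the load balancer's awareness of Kubernetes state (below) necessary.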

 

Detailed requirements

  • When a new project is created, if the admin anticipates needing remote access to non-HTTP services, a static IP address and CNAME are assigned to the project.
  • The load balancer routes requests to services configured in Kubernetes.  This means that the LB must be namespace- and service-aware, which means monitoring etcd or the Kubernetes API for changes.
  • When a new HTTP service is added, load balancer config is updated to proxy via path
    • If no CNAME
      • paths are in the form: labs.nds.org/namespace/serviceId
    • If CNAME
      • paths are in the form namespace.labs.nds.org/serviceId
  • When a new TCP service is added, load balancer config is updated to proxy via port – only if project has CNAME/IP:
    • namespace.labs.nds.org:port
  • For the GUI and API, paths are labs.nds.org/ and labs.nds.org/api, respectively
  • Load balancer must be resilient – if restarted, the previous configuration is maintained, possibly via a failover configuration.
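The routing rules above can be sketched as a small translation from a Kubernetes watch-API event to a proxy route. This is a hypothetical illustration, not the actual Workbench implementation: the function name, the sample event, and the backend naming convention are all assumptions.

```python
import json

# Hypothetical sketch: translate a Kubernetes service event (in the
# {"type": ..., "object": ...} envelope delivered by the watch API)
# into an (external address, backend) route, following the path rules above.

def route_for(event, cname_projects=frozenset()):
    """Return (external_path, backend) for an ADDED/MODIFIED HTTP service."""
    svc = event["object"]
    namespace = svc["metadata"]["namespace"]
    name = svc["metadata"]["name"]
    port = svc["spec"]["ports"][0]["port"]
    backend = "%s.%s.svc.cluster.local:%d" % (name, namespace, port)
    if namespace in cname_projects:
        # Project has a CNAME/IP: namespace.labs.nds.org/serviceId
        return ("%s.labs.nds.org/%s" % (namespace, name), backend)
    # Default path-based form: labs.nds.org/namespace/serviceId
    return ("labs.nds.org/%s/%s" % (namespace, name), backend)

# A trimmed example of a watch-API event for a new service:
sample_event = json.loads("""{
  "type": "ADDED",
  "object": {
    "metadata": {"name": "dataverse", "namespace": "project1"},
    "spec": {"ports": [{"port": 8080}]}
  }
}""")

print(route_for(sample_event))
# ('labs.nds.org/project1/dataverse', 'dataverse.project1.svc.cluster.local:8080')
```

Persisting the resulting route table (and replaying it on startup) is one way to satisfy the resilience requirement: a restarted load balancer could rebuild its configuration from the stored routes or by re-listing services from the Kubernetes API.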