
Current features/components

Deployment (OpenStack)

We currently have two methods of deploying the NDS Labs service: 1) ndslabs-startup (single node) and 2) deploy-tools (multi-node OpenStack).

The ndslabs-startup repo provides a set of scripts to deploy NDS Labs services to a single VM. It is intended primarily for development and testing. The deployment is incomplete (no shared storage, NRPE, LMA, or backup), but adding these services would be a minor effort.

The deploy-tools repo/image provides a set of Ansible scripts specifically to support the provisioning and deployment of a Kubernetes cluster on OpenStack, with hard dependencies on CoreOS and GlusterFS. It's unclear whether this can be replaced by openstack-heat.

For commercial cloud providers, we cannot use our deployment process. Fortunately, these providers already have the ability to provision Kubernetes clusters: AWS, Azure, and GCE.

Minikube was considered as an option, but it is problematic when running on a VM.

CoreOS (Operating system)

The current system relies on CoreOS. This choice is arbitrary, but many assumptions in the deploy-tools component are bound to the OS choice, and different providers make different OS decisions: Kubernetes seems to lean toward Fedora and Debian, GCE itself uses Debian, Azure uses Ubuntu, and so on.

Docker (Container)

The Labs Workbench system assumes Docker, but there are other container options. Kubernetes also supports rkt. This is something we've never explored.

Orchestration (Kubernetes)

Labs Workbench relies heavily on Kubernetes itself: the API server integrates directly with the Kubernetes API. Of all the basic requirements, this is the one least likely to change.
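
For illustration, here is a minimal sketch of this kind of direct integration using the official Kubernetes Python client. The client library, namespace, and credential loading are assumptions for the example, not a description of the actual API server:

    from kubernetes import client, config

    # Load credentials from a local kubeconfig; a service running
    # inside the cluster would use config.load_incluster_config().
    config.load_kube_config()

    v1 = client.CoreV1Api()

    # List pods in a namespace, as an orchestrating service might do
    # to report the state of a user's running applications.
    for pod in v1.list_namespaced_pod(namespace="default").items:
        print(pod.metadata.name, pod.status.phase)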

Gluster FS (Storage)

Labs Workbench uses a custom GlusterFS solution for shared storage. A single Gluster volume is provisioned (4 GFS servers) and mounted to each host. Containers access the shared volume via hostPath, as sketched below.
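
A sketch of what this hostPath pattern looks like, again using the Kubernetes Python client; the mount paths and image are assumptions, not taken from the actual deployment:

    from kubernetes import client

    # Assume the Gluster volume is mounted at /var/glfs on every host;
    # each container then reaches it through a hostPath volume.
    shared = client.V1Volume(
        name="shared",
        host_path=client.V1HostPathVolumeSource(path="/var/glfs/global"),
    )
    mount = client.V1VolumeMount(name="shared", mount_path="/data")
    container = client.V1Container(
        name="app",
        image="busybox",  # placeholder image
        volume_mounts=[mount],
    )
    spec = client.V1PodSpec(containers=[container], volumes=[shared])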

This approach was necessary due to the lack of support for persistent volume claims on OpenStack. For commercial cloud providers, we'll need to rethink this approach. We can have a single volume claim (one giant shared disk), a volume claim per user, or a volume claim per application. Each approach has benefits and weaknesses; for example, on a cloud provider you don't want a giant provisioned disk sitting mostly unused, so the per-account approach may be better.
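
As a concrete point of comparison, a per-user claim might look like the following sketch; the names, size, and availability of a dynamic provisioner (a default StorageClass) are all assumptions:

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # One claim per user account; the provider's default StorageClass
    # is assumed to dynamically provision the backing disk.
    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="workbench-home-demo"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Gi"}
            ),
        ),
    )
    core.create_namespaced_persistent_volume_claim(
        namespace="demo", body=claim
    )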

REST API Server

Web UI

Ingress Controller

Backup

Monitoring 

Application Catalog

Development/Analysis Environments

Phone home

  • CoreOS
  • Docker
  • Kubernetes
  • Gluster
  • Deploy tools/NDS Labs Startup
  • API Server
  • Web UI
  • Ingress controller
  • Backup
  • Monitoring/NRPE
  • Specs
  • Development environments
  • Docker registry cache