
Use cases

Labs Workbench Beta

The Labs Workbench beta instance deployed at SDSC is currently configured with 11 instances, 28 vCPUs, 196 GB RAM, and 1.2 TB of storage. This includes an ops instance, master, load balancer, LMA node, two compute nodes, four GFS servers, and monitoring and backup. Using UC internal pricing, this service is estimated to cost ~$2,100 per month (~$25,000 per year) to run at SDSC under the current configuration.

  • How can we minimize resource requirements and cost while maximizing elasticity and reliability?
  • Can we support deployment into other environments (e.g., AWS/GCE)?  
  • How can we support deploying labs on existing Kubernetes installations?
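
As a quick sanity check on the figures above, the short sketch below projects the annual cost from the ~$2,100/month estimate and includes a deliberately naive helper for guessing the cost of a scaled-down configuration. The linear-scaling-by-instance-count assumption is purely illustrative; a real estimate should be recomputed against UC internal pricing for the actual vCPU/RAM/storage mix.

# Rough cost projection for the Labs Workbench beta at SDSC.
# The ~$2,100/month figure is the estimate quoted above; the linear scaling
# by instance count below is a simplifying assumption, not how UC internal
# pricing actually works.

MONTHLY_COST_USD = 2100      # current estimate under UC internal pricing
CURRENT_INSTANCES = 11       # current configuration

def annual_cost(monthly_cost: float) -> float:
    """Project an annual cost from a monthly estimate."""
    return monthly_cost * 12

def scaled_monthly_cost(target_instances: int) -> float:
    """Hypothetical: assume cost scales linearly with instance count."""
    return MONTHLY_COST_USD * target_instances / CURRENT_INSTANCES

if __name__ == "__main__":
    print(f"Annual cost at current size: ~${annual_cost(MONTHLY_COST_USD):,.0f}")  # ~$25,200
    print(f"Monthly cost at 6 instances: ~${scaled_monthly_cost(6):,.0f}")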

TERRA-REF

The TERRA-REF project manages 200 TB of data on the ROGER system. Data is accessible via Globus, Clowder, a GPFS mount to ROGER batch compute resources, and an NFS mount to ROGER/Nebula OpenStack resources. The TERRA-REF project includes a "tool launcher" service that supports launching Docker containers to analyze data via the Clowder API or via directories mounted directly over NFS. The "tool launcher" is being replaced by Labs Workbench, which was also used to support the TERRA-REF Phenome 2017 workshop. On an average day, Labs Workbench will need to support ~5-10 active users; during workshops, it will need to support ~30 simultaneous users.

  • How can we minimize resource requirements and cost while maximizing elasticity and reliability?
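
For context, analysis tools typically reach TERRA-REF data either through the NFS/GPFS mounts or through Clowder's REST API. The sketch below lists datasets from a Clowder instance; the base URL and API key are placeholders, and the /api/datasets endpoint with a key query parameter follows Clowder's API conventions as we understand them.

# Minimal sketch of listing datasets through the Clowder REST API.
# CLOWDER_URL and CLOWDER_KEY are placeholders; the /api/datasets endpoint
# and "key" query parameter follow Clowder's API conventions.
import requests

CLOWDER_URL = "https://terraref.example.org/clowder"   # placeholder base URL
CLOWDER_KEY = "changeme"                               # placeholder API key

def list_datasets():
    resp = requests.get(f"{CLOWDER_URL}/api/datasets",
                        params={"key": CLOWDER_KEY},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for ds in list_datasets():
        print(ds.get("id"), ds.get("name"))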

Existing Kubernetes Deployments

Users have the option to deploy Kubernetes in a variety of environments: https://kubernetes.io/docs/getting-started-guides/

  • Minikube/Ubuntu LXD
  • Hosted solutions: 
    • GCE
    • AWS
    • Azure Container Service
  • Local solutions:
    • kubeadm
    • OpenStack Heat
  • Can/should we support any of these options? What changes would be required to do so?
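
To help answer whether a given target environment could be supported, one quick check is to inspect node metadata through the Kubernetes API. The sketch below uses the official Kubernetes Python client (an assumed tool, not anything Labs Workbench currently ships) to report the server version and each node's provider ID, which encodes the cloud provider (e.g., gce://, aws://) on hosted deployments and is typically empty on bare kubeadm clusters.

# Sketch: inspect a target cluster to see what it is running on.
# Uses the official Kubernetes Python client (pip install kubernetes);
# the cluster is whatever the local kubeconfig points at.
from kubernetes import client, config

def describe_cluster():
    config.load_kube_config()                  # use local kubeconfig credentials
    version = client.VersionApi().get_code()   # Kubernetes server version
    print(f"Server version: {version.git_version}")

    for node in client.CoreV1Api().list_node().items:
        # providerID is set by the cloud provider integration, e.g. "gce://..."
        # or "aws:///..."; it is usually unset on bare-metal/kubeadm clusters.
        provider = node.spec.provider_id or "none/bare-metal"
        print(f"{node.metadata.name}: provider={provider}")

if __name__ == "__main__":
    describe_cluster()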

Tasks

  1. Define a minimal cluster configuration for the current Labs Workbench deployment, with the ability to scale up/down to support the TERRA-REF use case (see the capacity sketch after this list).
  2. Explore deployment of Labs Workbench on GCE. What changes are required?
  3. Explore deployment of Labs Workbench on AWS. What changes are required?
  4. Explore deployment of Labs Workbench on a system deployed with kubeadm. What changes are required?
  5. Explore the latest options for deploying Kubernetes on OpenStack. What methods would we recommend? Explore deploying Labs Workbench in these environments. What changes would be required?
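
As a starting point for task 1, the sketch below sums allocatable CPU and memory across the nodes of a candidate cluster so the result can be compared against the current footprint (28 vCPUs, 196 GB RAM). It again assumes the Kubernetes Python client, and the quantity parsing is intentionally simplified.

# Sketch: total allocatable CPU/memory of a cluster, for comparing a minimal
# configuration against the current footprint (28 vCPUs, 196 GB RAM).
# Quantity parsing below is intentionally simplified (handles m, Ki, Mi, Gi, Ti).
from kubernetes import client, config

UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_cpu(q: str) -> float:
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_mem_gib(q: str) -> float:
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return float(q[:-len(suffix)]) * factor / 2**30
    return float(q) / 2**30   # plain bytes

def cluster_capacity():
    config.load_kube_config()
    cpus, mem_gib = 0.0, 0.0
    for node in client.CoreV1Api().list_node().items:
        alloc = node.status.allocatable
        cpus += parse_cpu(alloc["cpu"])
        mem_gib += parse_mem_gib(alloc["memory"])
    return cpus, mem_gib

if __name__ == "__main__":
    cpus, mem_gib = cluster_capacity()
    print(f"Allocatable: {cpus:.1f} vCPUs, {mem_gib:.1f} GiB RAM")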