Craig Willis
For the TERRA-REF project, I'm currently using Hyperkube because I want a minimal install with minimal OpenStack resources (cost/complexity). This is not just a developer instance; it will run as the primary instance for the project, replacing the toolserver. I think we can define a process for migrating from single-node to multi-node via etcd export and filesystem copy.
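The etcd-export step of that migration could be sketched roughly as follows. This is a hedged sketch, not a tested procedure: the endpoint, hostnames, and paths are assumptions, and a real migration would also need certificates and kubelet state carried over.

```shell
# On the single-node (Hyperkube) instance: snapshot the etcd keyspace.
# Endpoint and output path are illustrative assumptions.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /backup/etcd-snapshot.db

# Copy the snapshot, plus any locally stored volume data, to the new master
# ("new-master" is a hypothetical hostname).
scp /backup/etcd-snapshot.db new-master:/backup/
rsync -a /var/lib/kubelet/volumes/ new-master:/var/lib/kubelet/volumes/

# On the new multi-node master: restore into a fresh etcd data directory,
# then point the new etcd member at it before starting the control plane.
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored
```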
For a non-Hyperkube installation that scales, the minimal configuration is:
* 1 master/controller, 1 compute
I'd like to see us handle storage separately. If a project wants home directories, they can deploy NFS or Gluster, either on an existing node or by deploying new NFS/GFS instances.
David Raila
Minimal Gluster configuration: simple replication? (instead of 2x2 striped and replicated)
- 3-node system based on the latest Kubernetes autoscaler
- Key issue is scalability -- start as small as possible, grow and shrink as needed. For Workbench:
- Compute nodes (can be accomplished with cluster autoscaler)
- Storage (should be put in the cluster as a self-hosting implementation)
- Don't provision volumes when provisioning hardware
- Storage is a separate issue
- Basic process: provision servers:
- etcd/master; 2 workers (compute nodes)
- Why 3?
- kubeadm does 3-node etcd in cluster
- To scale down safely, you need the capacity to scale up elsewhere first
- Less about scaling, more about maintenance (you need to cordon/drain nodes)
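The maintenance point above is the standard Kubernetes cordon/drain cycle; a minimal sketch (the node name is hypothetical):

```shell
# Mark the node unschedulable so no new pods land on it.
kubectl cordon worker-1

# Evict the running pods to other nodes (DaemonSet pods stay behind).
kubectl drain worker-1 --ignore-daemonsets

# ... perform maintenance on the node ...

# Return the node to service.
kubectl uncordon worker-1
```

This is why a 1-master/2-worker minimum helps: with only one worker, drained pods have nowhere to go.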
- Change from 2x2 Gluster to 2 replicas
- Make the cluster first, then add the storage
- Storage self-hosting implementation
- Think about our gluster pods -- they assume someone has provisioned volumes
- You don't need to do that at provisioning time
- Get ReadWriteOnce volumes from the cloud provider
- "Brick pod" -- fires up, gets an OpenStack claim
- Run Heketi in the cluster -- it automates forming bricks into mountable volumes (acts as a PVC controller)
- Can ask for claims from Heketi
- Heketi may provide PVCs for Gluster on top of OpenStack
- As admin, "I want to create a big Gluster volume called global"
- Explore auto-scaler
- Special-purpose nodes not required?
- What about NFS?
- What about TERRA-REF / Hyperkube?
- Running in containers on local docker seems dangerous
- Docker updates break things; everything breaks on the same day the new Docker version is loaded
- VMs allow you to control the Docker version -- otherwise you don't have that control. Don't auto-update CoreOS
- Not well encapsulated -- Vagrant does this
- Can't just change from CoreOS to Fedora/Ubuntu
Mike Lambert
Option 1: Hyperkube
- Minimal configuration: single-node
- Optional one-way migration path to scalable multi-node cluster
- Recommended for single-tenant / small-scale testing
- Does not support scaling up
Use Cases supported:
- Platform Exploration / Customization / Extension
- Development / Testing of Labs Workbench Services
Option 2: Ansible
- Minimal configuration: 1 master + 1 compute
- Optional: dedicated load balancer node
- Optional: dedicated glfs nodes
- Optional: dedicated LMA node
- Recommended for multiple tenants / large-scale testing or production use
- Supports scaling up and down, as needed
Use cases supported:
- Workshops, hackathons, demos, etc.
- Production use, defined as: "One or more real users are consuming the system or API"