...
Minimal Gluster configuration: simple replication? (instead of 2x2 striped and replicated)
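The two layouts can be contrasted with the Gluster CLI. A minimal sketch; host names and brick paths are placeholders:

```shell
# Simple 2-replica volume (two bricks, each holding a full copy):
gluster volume create gv0 replica 2 \
  server1:/data/brick1 server2:/data/brick1

# vs. the current 2x2 striped + replicated layout (four bricks):
gluster volume create gv0 stripe 2 replica 2 \
  server1:/data/brick1 server2:/data/brick1 \
  server3:/data/brick1 server4:/data/brick1

gluster volume start gv0
```

The simple replica volume needs only two bricks and is easier to grow/shrink, which is the point of starting small.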
- 3-node system based on the latest K8s cluster autoscaler
- Key issue is scalability -- start as small as possible, grow and shrink as needed (for Workbench)
- Compute nodes (can be accomplished with cluster autoscaler)
- Storage (should be put in the cluster as a self-hosting implementation)
- Don't provision volumes when provisioning hardware
- Storage is a separate issue
- Basic process: provision servers:
- etcd/master; 2 workers (compute nodes)
- Why 3?
- kubeadm does 3-node etcd in cluster
- For scale up/scale down, you need at least two workers to move workloads between
- Less about scaling, more about maintenance (need to cordon/drain)
- Change from 2x2 striped/replicated Gluster to simple 2-replica
- Make the cluster first, then add the storage
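The provisioning and maintenance flow above might look like the following. A sketch only; the token, IP, and node name are placeholders, and exact kubeadm join flags vary by version:

```shell
# Bring up the master (kubeadm runs etcd on this node):
kubeadm init

# On each of the two workers, join with the token printed by init:
kubeadm join --token <token> <master-ip>:6443

# Maintenance on a worker (the cordon/drain case above):
kubectl cordon node-1                      # stop scheduling new pods here
kubectl drain node-1 --ignore-daemonsets   # evict running pods
# ...perform maintenance, then put the node back:
kubectl uncordon node-1
```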
- Storage self-hosting implementation
- Think about our gluster pods -- they assume someone has provisioned volumes
- You don't need to do that at provisioning time
- Get ReadWriteOnce volumes from the cloud provider
- "Brick pod" -- fires up, gets an OpenStack claim
- Run Heketi in cluster -- automates forming bricks into mountable volumes; acts as a PVC controller
- Can ask for claims from Heketi
- Heketi may provide PVC for Gluster over OpenStack
- As admin, "I want to create a big Gluster volume called global"
- Explore auto-scaler
- Special-purpose nodes not required?
- What about NFS?
- What about TERRA/HyperKube?
- Running in containers on local docker seems dangerous
- Docker updates break things; everything breaks on the same day a new Docker version is loaded
- VMs let you control the Docker version -- otherwise you don't have that control. Don't auto-update CoreOS
- Not well encapsulated -- Vagrant handles this
- Can't just change from CoreOS to Fedora/Ubuntu
Mike Lambert
Option 1: Hyperkube
...