5 minutes to a running multi-node cluster on your laptop: VirtualBox machines running CoreOS on host-local networking, not exposed to the internet. Notes on other options below.
Here is the recipe, adapted from the official instructions: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
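A sketch of the setup steps per the linked guide (the repo URL, directory layout, and config.rb.sample name are from that guide; verify against your checkout):

```shell
# Clone the CoreOS Kubernetes Vagrant setup and enter the multi-node directory
git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/multi-node/vagrant

# Copy the sample config, then edit it with the values shown below
cp config.rb.sample config.rb
```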
Settings for config.rb (one controller, two workers, one etcd node):
$update_channel="alpha"
$controller_count=1
$controller_vm_memory=512
$worker_count=2
$worker_vm_memory=1024
$etcd_count=1
$etcd_vm_memory=512
vagrant up --provider virtualbox
export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
Alt: Merge the config into an existing ~/.kube/config if you are working with multiple clusters and/or namespaces (kubectl config set-cluster, kubectl config set-context, etc.)
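A sketch of merging this cluster into ~/.kube/config by hand; the cluster/user/context names are hypothetical, and the cert paths assume the layout the Vagrant setup generates (check the generated kubeconfig for the real ones):

```shell
# Register the cluster endpoint and CA (IP matches the Vagrant setup's controller)
kubectl config set-cluster vagrant-multi \
  --server=https://172.17.4.101:443 \
  --certificate-authority="$(pwd)/ssl/ca.pem"

# Register client credentials (paths are assumptions; see your generated kubeconfig)
kubectl config set-credentials vagrant-admin \
  --client-certificate="$(pwd)/ssl/admin.pem" \
  --client-key="$(pwd)/ssl/admin-key.pem"

# Tie them together in a context and switch to it
kubectl config set-context vagrant-multi \
  --cluster=vagrant-multi --user=vagrant-admin
kubectl config use-context vagrant-multi
```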
The cluster should now be operational. Some useful commands (run from the multi-node/vagrant path):
vagrant ssh c1 (or the other node names) to SSH into a CoreOS node
vagrant status
vagrant suspend/resume
Once in a while the network doesn't come back after a resume, and kubectl says:
The connection to the server 172.17.4.101:443 was refused - did you specify the right host or port?
In this case, vagrant halt followed by vagrant up usually recovers it.
vagrant destroy (obliterates the cluster)
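A quick smoke test after vagrant up, using standard kubectl commands (node counts assume the one-controller/two-worker config above):

```shell
# All nodes should register and reach Ready
kubectl get nodes

# System pods (kube-dns, etc.) should be Running
kubectl get pods --all-namespaces

# Confirms the API server endpoint kubectl is talking to
kubectl cluster-info
```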
Trade-offs between Vagrant and local container options:
DevSystem | Advantage | Impact | Notes
---|---|---|---
Hyperkube | Single dependency | Low | Docker install vs. Vagrant and VirtualBox.
Hyperkube | Efficiency | Low for development-class systems | Container-based is more efficient in terms of resources; VMs are always overprovisioned.
Vagrant | Familiarity with process | Moderate | Use of VMs by developers is a common, widely understood development model. A known, familiar model is easy to adopt and faster to get started with, and more self-serving w.r.t. problems.
Vagrant | Isolation between instances and from the developer system: better stability, reproducibility, easier support | High | No entwining with the local Docker environment, which avoids widespread Docker troubles/inconsistencies/breaking updates; easier cleanup/reinit of environments without local Docker cleanups; overloading Docker will not kill the developer's system (an observed problem in NDS); avoids potential collision of WB images with other local Docker images unrelated to the WB devenv.
Vagrant | More accurate w.r.t. hosted and self-hosted production environments | Low (app development), High (systems development) | True emulation of multi-machine deployment; true multi-network capabilities with routing; true ingress without hostport collision; true operational emulation of machine failure/upgrade/maintenance; ability to work with node device-layer storage.
Vagrant | Flexibility | Low (app development), High (systems development) | Change-out capabilities at the network, Docker, and OS layers; can experiment with different OSes, or multiple OSes on different machines in a single cluster; complex developments can be exported/shared and disseminated for evaluation and/or group development, which accelerates development compared to the starting-from-ground-zero/replay model and allows more involved experimentation with complex environments; can support additional non-cluster VMs with tooling/ops environments (like the gcloud shell); supports low-level OS and networking debug tools that are unavailable in a container-only setup.
Vagrant | Supportability | Very high, but a potential non-value-add timesink | Uniform support through a VM image gives full, intentional control of configurations and upgrades; this is a huge ongoing issue in the k8s community for a reason (not simply an issue of opinionation). Docker change has been an enormous support timesink, and NDS has direct experience with this on multiple occasions, so avoid a Docker-update dependency if possible. Devs having trouble can self-test: stop the troubled env, start a new one to test basic functionality, and A/B test to discover where things went wrong. The state of a devenv is transferable: checkpoint the VMs and share the state with support to reproduce troublesome/important issues. Yes, these are semi-large files (but compare to a Jupyter pull), and easy in comparison to trying to share a hyperkube environment.
Can be used for volume/FS work by adding VBox disks to nodes. Kubernetes doesn't have a VirtualBox/Vagrant volume driver (yet), so you get raw disk on the nodes, like we have in OpenStack.
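A sketch of attaching an extra disk to a worker with the VirtualBox CLI. The VM name, controller name, disk filename, and port are all assumptions; halt the VM first, and check the real names with the commands shown:

```shell
# Find the actual VM name VirtualBox assigned to the worker
VBoxManage list vms

# Inspect its storage controllers (the controller name varies by box)
VBoxManage showvminfo "vagrant_w1"

# Create a 10 GB disk image (filename is an example)
VBoxManage createmedium disk --filename w1-extra.vdi --size 10240

# Attach it to the worker VM on a free port of its controller
VBoxManage storageattach "vagrant_w1" \
  --storagectl "SATA" --port 1 --device 0 \
  --type hdd --medium w1-extra.vdi
```

After a vagrant up, the disk shows up as a raw block device inside the node (e.g. via lsblk), which is what the volume/FS work above needs.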
Can be used for ingress/network work by adding an additional interface to the node that will act as the LB.
It can be left running, or vagrant suspend/resume can freeze/thaw the whole cluster in a few seconds if you want it idling in the background. You can also vagrant halt/vagrant up to shut down and reboot, and cluster state (etcd) is preserved.