...
- Creates an auto-scaling Kubernetes cluster using OpenStack Heat
Usage
docker run -it -v /home/core/private:/private bodom0015/k8-openstack-cli:NDS-767
source /private/NDSLabsDev-openrc.sh
Enter your OpenStack password when prompted
ssh-keygen -t rsa
Accept all defaults
export KUBERNETES_PROVIDER='openstack-heat'; curl -sS https://get.k8s.io | bash && export PATH=$PATH:/kubernetes/client/bin/
- Apply these changes locally before provisioning: https://github.com/kubernetes/kubernetes/commit/40e4e0e4b4da2bca43ec59357d0b2c67caa922d6
export EXTERNAL_NETWORK=ext-net FIXED_NETWORK_CIDR=192.168.100.0/24
export MASTER_FLAVOR=m1.medium MINION_FLAVOR=m1.large
export STACK_NAME=kubernetes
export KUBERNETES_PROVIDER=openstack-heat && cd /kubernetes && ./cluster/kube-up.sh
This will create a stack named kubernetes, visible in the Horizon dashboard under Orchestration → Stacks. There you can check the progress of the creation task by selecting the stack and viewing its Resources tab.
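If the Horizon dashboard isn't handy, the same progress can be watched from the CLI container instead. A sketch, assuming the openstack client checked by verify-prereqs above and the default STACK_NAME=kubernetes:

```shell
# Show the overall state of the Heat stack created by kube-up.sh
openstack stack show kubernetes -c stack_status

# List the individual resources and their states (mirrors Horizon's Resources tab)
openstack stack resource list kubernetes

# Recent events are usually the fastest way to see what a CREATE_FAILED got stuck on
openstack stack event list kubernetes
```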
...
```
[root@34268c72ab27 kubernetes]# export KUBERNETES_PROVIDER=openstack-heat && cd /kubernetes && ./cluster/kube-up.sh
... Starting cluster using provider: openstack-heat
... calling verify-prereqs
swift client installed
glance client installed
nova client installed
heat client installed
openstack client installed
... calling verify-kube-binaries
... calling kube-up
kube-up for provider openstack-heat
[INFO] Execute commands to create Kubernetes cluster
[INFO] Uploading kubernetes-server-linux-amd64.tar.gz kubernetes-server.tar.gz
[INFO] Uploading kubernetes-salt.tar.gz kubernetes-salt.tar.gz
[INFO] Image CentOS7 already exists
ERROR (CommandError): No keypair with a name or ID of 'kubernetes_keypair' exists.
[INFO] Key pair already exists
Stack not found: kubernetes
[INFO] Retrieve new image ID
[INFO] Image Id 682f4eb3-97e3-4ebd-88fa-bf9b563bbd2d
[INFO] Create stack kubernetes
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                                                   |
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------+
| id                  | cc7b544e-2758-440c-9dee-b42afba536c4                                                                                                    |
| stack_name          | kubernetes                                                                                                                              |
| description         | Kubernetes cluster with one master and one or more worker nodes (as specified by the number_of_minions parameter, which defaults to 3). |
| creation_time       | 2017-03-15T17:27:34Z                                                                                                                    |
| updated_time        | None                                                                                                                                    |
| stack_status        | CREATE_IN_PROGRESS                                                                                                                      |
| stack_status_reason |                                                                                                                                         |
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------+
...
... calling validate-cluster
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
...
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_FAILED
Cluster not created. Please check stack logs to find the problem
Done, listing cluster services:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@34268c72ab27 kubernetes]#
```
Given the output above, it appears the deployment is failing because kubectl is checking localhost:8080 instead of the master's IP.
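If that diagnosis is right, one workaround worth trying is to aim kubectl at the master's address explicitly rather than the default localhost:8080. A sketch only: MASTER_IP is a placeholder for whatever floating IP Heat assigned the master, and the insecure port 8080 is an assumption about how this stack exposes the API server:

```shell
# Placeholder: substitute the master's floating IP (see Horizon, or `openstack server list`)
MASTER_IP=203.0.113.10

# One-off: override the API server address for a single command
kubectl --server="http://${MASTER_IP}:8080" get nodes

# Or persist it in the kubeconfig so later kubectl calls use the master
kubectl config set-cluster openstack-heat --server="http://${MASTER_IP}:8080"
```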
Deploying via kargo
See: https://kubernetes.io/docs/getting-started-guides/kargo/
More details: https://github.com/kubernetes-incubator/kargo
Features:
- Can be deployed on AWS, GCE, Azure, OpenStack, or bare metal
- Highly available cluster
- Composable (choice of network plugin, for instance)
- Supports most popular Linux distributions
- Continuous integration tests
Requirements:
- Ansible v2.2 (or newer) and python-netaddr are installed on the machine that will run Ansible commands
- Jinja 2.8 (or newer) is required to run the Ansible Playbooks
- The target servers must have access to the Internet in order to pull docker images.
- The target servers are configured to allow IPv4 forwarding.
- Your SSH key must be copied to all of the servers in your inventory.
- Firewalls are not managed; you'll need to implement your own rules as you normally would. To avoid issues during deployment, you should disable your firewall.
- The target servers must already be provisioned (this can be accomplished using kargo-cli)
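The requirements above can be sanity-checked up front. A minimal sketch (version thresholds taken from the list; run the first group on the Ansible machine and the rest on each target, and note that `<target-host>` is a placeholder):

```shell
# On the Ansible machine: confirm Ansible >= 2.2, Jinja2 >= 2.8, and netaddr are present
ansible --version | head -n1
python -c 'import jinja2; print(jinja2.__version__)'
python -c 'import netaddr; print("netaddr ok")'

# On each target server: IPv4 forwarding must be enabled (prints 1 when on)
sysctl -n net.ipv4.ip_forward

# On each target: confirm your SSH key is accepted without a password prompt
ssh -o BatchMode=yes <target-host> true && echo "key ok"
```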
First impressions:
- Looks like a very involved process; it is only partially automated (creating instances, networks, etc. is left to the operator)
- Specific requirements that we may not meet
Setup:
docker run -it -v /home/smana/kargoconf:/etc/kargo quay.io/smana/k8s-kargocli:latest /bin/bash
Usage:
docker run -it -v /home/core/private:/root/SAVED_AND_SENSITIVE_VOLUME ndslabs/deploy-tools:latest
pip install netaddr
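Since kargo expects already-provisioned servers, the remaining prerequisite is an Ansible inventory describing them. A minimal sketch: every hostname and IP below is hypothetical, and the group names follow the kargo sample inventory:

```shell
# Write a minimal kargo inventory; all hosts/IPs below are placeholders
cat > inventory.cfg <<'EOF'
[all]
node1 ansible_host=10.0.0.11 ip=10.0.0.11
node2 ansible_host=10.0.0.12 ip=10.0.0.12
node3 ansible_host=10.0.0.13 ip=10.0.0.13

[kube-master]
node1

[etcd]
node1
node2
node3

[kube-node]
node2
node3

[k8s-cluster:children]
kube-master
kube-node
EOF

echo "wrote $(wc -l < inventory.cfg) lines to inventory.cfg"
```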