Deploying via kubernetes/contrib/ansible

See: https://github.com/nds-org/ndslabs-deploy-tools

More details: https://opensource.ncsa.illinois.edu/jira/browse/NDS-312

We've forked contrib/ansible and modified it slightly before wrapping it in our own set of playbooks for deploying Labs.

These Ansible scripts are currently specific to NCSA / SDSC, but with a little manual effort they can be extended to accommodate OpenStack installations at other sites.

Deploying via kube-up.sh

See: https://kubernetes.io/docs/getting-started-guides/openstack-heat/

Usage:

  1. docker run -it -v /home/core/private:/private bodom0015/k8-openstack-cli:NDS-767
  2. source /private/NDSLabsDev-openrc.sh
    1. Enter password
  3. ssh-keygen -t rsa
    1. Accept all defaults
  4. export KUBERNETES_PROVIDER='openstack-heat'; curl -sS https://get.k8s.io | bash && export PATH=$PATH:/kubernetes/client/bin/
  5. Apply these changes locally before provisioning: https://github.com/kubernetes/kubernetes/commit/40e4e0e4b4da2bca43ec59357d0b2c67caa922d6
  6. export EXTERNAL_NETWORK=ext-net FIXED_NETWORK_CIDR=192.168.100.0/24 
  7. export MASTER_FLAVOR=m1.medium MINION_FLAVOR=m1.large
  8. export STACK_NAME=kubernetes
  9. export KUBERNETES_PROVIDER=openstack-heat && cd /kubernetes && ./cluster/kube-up.sh

This will create a stack named kubernetes under the Horizon dashboard's Orchestration → Stacks view. Here, you can check the progress of the creation task by choosing a Stack and viewing its Resources tab.

The whole process takes about 10 minutes to finish. You should see instances being spun up in the project's Compute → Instances view.
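
The same progress can also be checked from the CLI inside the deploy container (a sketch, assuming the openrc file from step 2 has been sourced):

openstack stack resource list kubernetes
openstack stack event list kubernetes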

You should see the following output:

[root@34268c72ab27 kubernetes]# export KUBERNETES_PROVIDER=openstack-heat && cd /kubernetes && ./cluster/kube-up.sh
... Starting cluster using provider: openstack-heat
... calling verify-prereqs
swift client installed
glance client installed
nova client installed
heat client installed
openstack client installed
... calling verify-kube-binaries
... calling kube-up
kube-up for provider openstack-heat
[INFO] Execute commands to create Kubernetes cluster
[INFO] Uploading kubernetes-server-linux-amd64.tar.gz
kubernetes-server.tar.gz
[INFO] Uploading kubernetes-salt.tar.gz
kubernetes-salt.tar.gz
[INFO] Image CentOS7 already exists
[INFO] Key pair already exists
Stack not found: kubernetes
[INFO] Retrieve new image ID
[INFO] Image Id 682f4eb3-97e3-4ebd-88fa-bf9b563bbd2d
[INFO] Create stack kubernetes
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                                                   |
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------+
| id                  | cc7b544e-2758-440c-9dee-b42afba536c4                                                                                                    |
| stack_name          | kubernetes                                                                                                                              |
| description         | Kubernetes cluster with one master and one or more worker nodes (as specified by the number_of_minions parameter, which defaults to 3). |
|                     |                                                                                                                                         |
| creation_time       | 2017-03-15T17:27:34Z                                                                                                                    |
| updated_time        | None                                                                                                                                    |
| stack_status        | CREATE_IN_PROGRESS                                                                                                                      |
| stack_status_reason |                                                                                                                                         |
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------+
... calling validate-cluster
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
... (the CREATE_IN_PROGRESS line repeats for roughly ten minutes) ...
Cluster status CREATE_FAILED
Cluster not created. Please check stack logs to find the problem
Done, listing cluster services:

The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@34268c72ab27 kubernetes]# 

Given the output above, it looks as if validation is failing because kubectl is checking localhost instead of the master's IP.
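
A quick way to dig further (a sketch; the master's floating IP is a placeholder, and the insecure API port 8080 is an assumption):

openstack stack show kubernetes -c stack_status_reason      # why the stack failed
kubectl -s http://<master-floating-ip>:8080 get nodes       # point kubectl at the master instead of localhost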

Deploying via kargo

See: https://kubernetes.io/docs/getting-started-guides/kargo/

More details: https://github.com/kubernetes-incubator/kargo

Features:

  • Can be deployed on AWS, GCE, Azure, OpenStack or bare metal
  • Highly available cluster
  • Composable (choice of the network plugin, for instance)
  • Supports most popular Linux distributions
  • Continuous integration tests

Requirements:

  1. Ansible v2.2 (or newer) and python-netaddr are installed on the machine that will run Ansible commands (see the sketch after this list)
  2. Jinja 2.8 (or newer) is required to run the Ansible playbooks
  3. The target servers must have access to the Internet in order to pull Docker images.
  4. The target servers are configured to allow IPv4 forwarding.
  5. Your SSH key must be copied to all the servers that are part of your inventory.
  6. Firewalls are not managed; you'll need to implement your own rules as you normally would. To avoid issues during deployment, you should disable your firewall.
  7. The target servers must already be provisioned (this can be accomplished using kargo-cli)
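
A minimal sketch of satisfying requirements 1, 2, 4, and 5 (version pins and the fedora login are assumptions based on the rest of these notes):

# On the machine that will run Ansible (requirements 1 and 2):
pip install "ansible>=2.2" "jinja2>=2.8" netaddr

# On each target server (requirement 4):
sudo sysctl -w net.ipv4.ip_forward=1

# Copy your ssh key to every server in the inventory (requirement 5):
ssh-copy-id fedora@<node-ip>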

First impressions:

  1. Looks like a very involved process, despite being partially automated (i.e., creating instances, networks, etc. is not automated)
  2. Specific requirements that we may not meet (both for OpenStack and for the deployment machine)

Setup:

  1. docker run -it -v /home/smana/kargoconf:/etc/kargo quay.io/smana/k8s-kargocli:latest /bin/bash

Usage:

  1. docker run -it -v /home/core/private:/root/SAVED_AND_SENSITIVE_VOLUME ndslabs/deploy-tools:latest
  2. pip install python-netaddr    # This did not work for me (see note below)
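
Note: the PyPI package is just netaddr (python-netaddr is the apt/yum package name), which is likely why the command above failed; the following should work instead:

pip install netaddr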

 

Deploying via kargo (redux)

I had reasonable success deploying with Kargo from my dev VM on Nebula.

  • The default Docker container didn't contain the Python OpenStack client and dependencies, so I created my own
  • I tried deploying with Fedora 25 cloud and CoreOS, but apparently Kargo has a hard requirement that the VM image have both python and docker installed. I ended up going with Fedora and manually installing python and docker-ce on each node, which could be automated or added to the base image (a sketch follows this list)
  • docker run -it -v `pwd`/kargoconf:/etc/kargo craigwillis/kargo bash
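
A sketch of that manual per-node bootstrap, assuming Fedora 25 and Docker's upstream repository (exact package names are my assumption and may vary):

sudo dnf install -y python2 python2-dnf libselinux-python        # python bits Ansible needs on the node
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install -y docker-ce                                     # docker-ce from the upstream repo
sudo systemctl enable docker && sudo systemctl start docker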

 

kargoconf/kargo.yml (these can also be passed as command-line flags)

# Common options
# ---------------
kargo_git_repo: "https://github.com/kubespray/kargo.git"
loglevel: "info"

# OpenStack options
# ---
os_auth_url: "http://nebula.ncsa.illinois.edu:5000/v2.0"
os_username: "me"
os_password: "my-password"
os_project_name: "NDSLabsDev"
masters_flavor: "m1.medium"
nodes_flavor: "m1.medium"
etcds_flavor: "m1.medium"
image: "fedora-25-cloud"
network: "NDSLabsDev"
sshkey: "my-key"

 

Then it just required two commands (both produce tons o' ansible output):

kargo openstack --nodes 3
kargo deploy -k your.pem -u <os user i.e., fedora>
 

At this point, I have a 3-node kubernetes cluster with 2 masters and 2 etcds. I manually assigned a public IP to one node, manually added a label (ndslabs-public-ip: true) to that node and a matching selector to the loadbalancer. Deployed via ndslabs-startup and all is good.
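
For reference, the node-labeling step looks roughly like this (the node name is a placeholder; the label key is the one our loadbalancer's selector expects):

kubectl label nodes <node-name> ndslabs-public-ip=true
kubectl get nodes --show-labels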

 

Pros:

  • It works and is maintained by someone else
  • It's almost identical to our ansible deploy process.

Cons:

  • No /etc/hosts entries in the container, so I must ssh by IP, but this could easily be fixed (see the sketch after this list)
  • kubectl doesn't work from the kargo container, so I must ssh to a node, but this could easily be fixed
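
For the /etc/hosts point, the fix is just adding entries inside the kargo container; the IPs and hostnames below are placeholders:

echo "10.0.0.11 node1" >> /etc/hosts
echo "10.0.0.12 node2" >> /etc/hosts
echo "10.0.0.13 node3" >> /etc/hosts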

 
