Overview

See NDS Labs Deploy Tools

Ansible configuration is held in deploy-tools/etc/ansible/ansible.cfg

/root contains most of the interesting bits:

  • Playbooks - declare the order and assignment of tasks / roles, which are defined in /usr/local/lib/ndslabs/ansible/roles (a sketch follows this list)
    • Tasks / Roles - declare the commands executed by each task / role
  • Inventories - declarative files describing the desired cluster configuration
    • Servers - names and quantity of each node type
      • etcd: dedicated nodes for the Kubernetes / NDS Labs etcd key-value store
      • glfs: GlusterFS storage nodes
      • master: Kubernetes master node
      • k8compute: compute / worker nodes
      • loadbal: load balancer node (requires a public IP)
    • Groups
      • cluster1 - groups nodes and their quantities
      • openstack - groups all OpenStack targets together (currently all nodes)
      • coreos - groups all CoreOS instances together (currently all nodes)
      • publicip - groups all machines requiring a public IP (currently only the load balancer)
    • Configuration
      • GlusterFS bricks / OpenStack volumes
      • OpenStack instance SSH key / image / flavor
  • Group Vars - configuration options for each group
    • openstack
    • coreos
    • publicip
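
The playbook sketch referenced above might look roughly like the following. It is illustrative only: the real kubernetes-infrastructure.yml ships inside the deploy-tools image, and the role names shown here are hypothetical.

# Illustrative sketch only, not the actual kubernetes-infrastructure.yml.
# Host groups match the inventory groups below; role names are hypothetical.
- hosts: openstack
  roles:
    - openstack-provision   # e.g. create Nebula instances and volumes
- hosts: coreos
  roles:
    - coreos-setup          # e.g. configure CoreOS on the new instances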

Running Ansible

OpenRC File

This file provides the credentials and environment variables that Ansible needs to deploy virtual machine instances to OpenStack.

  • Log into nebula interface
  • On the left side click "Access & Security"
  • On the top tab bar, click "API Access"
  • On the top right, click "Download OpenStack RC File"

Map the directory containing this file into the container with -v path/to/openrc/dir:/root/SAVED_AND_SENSITIVE_VOLUME
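
The downloaded file looks roughly like the following. All values here are placeholders; your PROJECT-openrc.sh will contain your project's real endpoint and credentials.

#!/bin/bash
# Placeholder values; the real file is generated by the Nebula interface for your project.
export OS_AUTH_URL=https://nebula.example.edu:5000/v2.0
export OS_TENANT_ID=<tenant-id>
export OS_TENANT_NAME="PROJECT"
export OS_USERNAME="your-username"
# The file prompts for your password rather than storing it:
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT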

Run deploy-tools

Execute the following command to run the deploy-tools container, substituting in the location of your OpenRC file:

docker run -it --rm --net=host -v path/to/openrc/dir:/root/SAVED_AND_SENSITIVE_VOLUME ndslabs/deploy-tools:243 usage docker

This container has ansible, nova, cinder, and other tools necessary to programmatically create and delete machine instances on Nebula.
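
Once inside the container, you can quickly confirm the tools are on the PATH:

ansible --version
nova --version
cinder --version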

Source openrc.sh

Now that you're in the container, source the openrc file mapped into /root/SAVED_AND_SENSITIVE_VOLUME:

source /root/SAVED_AND_SENSITIVE_VOLUME/PROJECT-openrc.sh
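
With the credentials loaded, a quick query against Nebula confirms they work:

nova list       # instances visible to your project
cinder list     # volumes owned by your project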

Build a Cluster Inventory

Copy and modify the example cluster inventory to your liking:

cp inventory/example inventory/new-cluster
vi inventory/new-cluster

Example

An example inventory is explained in detail below:

#
# Servers 
#
# These are the target instance names on Nebula.
# The [x:y] notation defines a range, allowing you to specify quantities for each machine type.
#
[etcd]
c1_etcd[1:3]

[glfs]
c1_gfs[1:4]

[master]
c1_master[1:2]

[k8compute]
c1_node[1:3]

[loadbal]
c1_loadbal

#
# Groups - assigned according to roles
#
# The server groups defined above are then further grouped into "cluster1".
# "cluster1" is then assigned the roles/tasks for "openstack" and "coreos"
#
[cluster1:children]
etcd
glfs
master
k8compute
loadbal

# These groups are configured in /root/group_vars
# The groups are assigned roles in /root/playbooks/kubernetes-infrastructure.yml
# The roles themselves are defined in /usr/local/lib/ndslabs/ansible/roles
[openstack:children]
cluster1

[coreos:children]
cluster1

[publicip:children]
loadbal

#
# Cluster-Specific Group Variables/overrides
#
# Configure GlusterFS volumes for desired bricks
# This includes the volume sizes and mount paths necessary for Gluster
# 
# Configure "cluster1" group to create or reuse the SSH key called "cluster1" 
# Default the "cluster1" group to m1.medium CoreOSAlpha instances
#
[glfs:vars]
vol_size=200
vol_name=brick0
mount_path=/media/brick0
service_name=media-brick0.mount

[cluster1:vars]
key_name=cluster1
# Default image/flavor
image=CoreOSAlpha
flavor=m1.medium
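
Before provisioning anything, you can ask Ansible to expand this inventory and verify that the [x:y] ranges resolve to the host names you expect:

ansible -i inventory/new-cluster all --list-hosts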

Run ansible-playbook

Run the Ansible playbook kubernetes-infrastructure.yml with your modified inventory file.

ansible-playbook -i inventory/new-cluster playbooks/kubernetes-infrastructure.yml

Ansible should then spin up your desired configuration within a few minutes.
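
If you'd like to preview the run first, ansible-playbook can print the tasks it would execute without performing them:

ansible-playbook -i inventory/new-cluster playbooks/kubernetes-infrastructure.yml --list-tasks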
