
Cluster Boot Requirements

Multi-node Requirements

Related JIRA issue: NDS-191

  • OpenStack Deploy
    • Do these tasks already cover deployment of each required node type?
      • etcd
      • k8s master
      • k8s worker
      • gluster
  • Kubernetes Deploy
    • This should be ready for multi-node, right?
  • Gluster Deploy
    • Which node types, and how many of each?
  • NDS Labs Deploy
    • Which node types, and how many of each?
      • A single node for both the GUI and API server?
      • One node each for the GUI and API server?


Ansible Overview

See https://github.com/nds-org/ndslabs-deploy-tools/tree/master/FILES.deploy-tools/usr/local/lib/ndslabs/ansible/roles for the role definitions. The deployment breaks down into the following pieces (a sketch of where they live inside the container follows the list):

  • Playbooks
    • Tasks
  • Inventories
    • Roles
    • Groups
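
For orientation, the commands below show roughly where these pieces live inside the deploy-tools container. The paths are taken from the repository link above and the inventory comments later on this page; the relative inventory/ path assumes the container drops you into /root, so treat this as a sketch rather than an exhaustive map.

# Run these inside the deploy-tools container (started under "OpenStack Deployment" below)
ls /root/playbooks                        # playbooks, e.g. kubernetes-infrastructure.yml
ls /root/group_vars                       # group variable files referenced by the inventory
ls inventory/                             # inventory files (the example plus your clusters)
ls /usr/local/lib/ndslabs/ansible/roles   # role definitions (the GitHub link above)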

Configuration

Ansible configuration is held in FILES.deploy-tools/etc/ansible in the deploy-tools repository.
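
If you just want to browse that configuration without running the container, you can check out the repository and look at the directory directly (assuming git is available locally):

git clone https://github.com/nds-org/ndslabs-deploy-tools.git
ls ndslabs-deploy-tools/FILES.deploy-tools/etc/ansible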

Running Ansible

OpenRC File

This file provides the credentials and environment variables that Ansible needs to deploy virtual machine instances to OpenStack.

  • Log into the Nebula web interface
  • On the left side, click "Access & Security"
  • In the top tab bar, click "API Access"
  • At the top right, click "Download OpenStack RC File"
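
For reference, the downloaded OpenStack RC file is just a shell script that exports your project's credentials and endpoint, roughly along these lines (placeholder values; the exact variables depend on the OpenStack release backing Nebula):

# PROJECT-openrc.sh (placeholder values)
export OS_AUTH_URL=https://<nebula-endpoint>:5000/v2.0
export OS_TENANT_ID=<tenant-id>
export OS_TENANT_NAME="PROJECT"
export OS_USERNAME="<your-username>"
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT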

Map the directory containing this file into the container with -v path/to/openrc/dir:/root/SAVED_AND_SENSITIVE_VOLUME (see the docker run command below).

OpenStack Deployment

Execute the following command to run the deploy-tools container, substituting in the location of the directory containing your OpenRC file:

docker run -it --rm --net=host -v path/to/openrc/dir:/root/SAVED_AND_SENSITIVE_VOLUME ndslabs/deploy-tools:243 usage docker

This container includes Ansible, the nova and cinder clients, and the other tools needed to programmatically create and delete machine instances on Nebula.

Now that you're in the container, source the OpenRC file mapped into /root/SAVED_AND_SENSITIVE_VOLUME:

source /root/SAVED_AND_SENSITIVE_VOLUME/PROJECT-openrc.sh
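
At this point it is worth a quick sanity check that the credentials work and that the image and flavor names used in the inventory below actually exist in your project. Assuming the nova client bundled in the container, something like:

nova image-list | grep CoreOS
nova flavor-list | grep m1.medium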

Copy the example cluster inventory and modify it to your liking:

cp inventory/example inventory/new-cluster
vi inventory/new-cluster

An example inventory is given below:

#
# Servers 
#
# These are the target instance names on Nebula.
# The [x:y] notation defines a range, allowing you to specify quantities for each machine type.
#
[etcd]
c1_etcd[1:3]

[glfs]
c1_gfs[1:4]

[master]
c1_master[1:2]

[k8compute]
c1_node[1:3]

[loadbal]
c1_loadbal

#
# Groups - assigned according to roles
#
# The server groups defined above are then further grouped into "cluster1".
# "cluster1" is then assigned the roles/tasks for "openstack" and "coreos"
#
[cluster1:children]
etcd
glfs
master
k8compute
loadbal

# These groups are configured in /root/group_vars
# The groups are assigned roles in /root/playbooks/kubernetes-infrastructure.yml
# The roles themselves are defined in /usr/local/lib/ndslabs/ansible/roles
[openstack:children]
cluster1

[coreos:children]
cluster1

[publicip:children]
loadbal

#
# Cluster-Specific Group Variables/overrides
#
# Configure GlusterFS volumes for desired bricks
# This includes the volume sizes and mount paths necessary for Gluster
# 
# Configure "cluster1" group to create or reuse the SSH key called "cluster1" 
# Default the "cluster1" group to m1.medium CoreOSAlpha instances
#
[glfs:vars]
vol_size=200
vol_name=brick0
mount_path=/media/brick0
service_name=media-brick0.mount

[cluster1:vars]
key_name=cluster1
# Default image/flavor
image=CoreOSAlpha
flavor=m1.medium

Run the Ansible playbook kubernetes-infrastructure.yml with your modified inventory file.

ansible-playbook -i inventory/new-cluster playbooks/kubernetes-infrastructure.yml

Ansible should then spin up your desired configuration within a few minutes.
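
Once the playbook finishes, you can confirm that the instances were created from inside the same container, again assuming the nova client (a quick sanity check, not part of the playbook itself):

nova list | grep c1_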

