Cluster Boot Requirements
- OpenStack Deploy: https://github.com/nds-org/ndslabs-deploy-tools
- Kubernetes Deploy (including optional tools - logging, monitoring, etc.)
  - new ticket: NDS-235
  - new ticket: NDS-236
- Gluster Deploy
  - NDS-209
  - NDS-227
  - new ticket: NDS-237
  - See https://github.com/nds-org/gluster for processes
- NDS Labs Deploy
  - new ticket: NDS-238
  - See https://github.com/nds-org/ndslabs-system-shell for processes
- new ticket:
Multi-node Requirements
- OpenStack Deploy
  - Do these tasks already include deployment of each type of node necessary?
    - etcd
    - k8 master
    - k8 worker
    - gluster
- Kubernetes Deploy
  - This should be ready for multi-node, right?
- Gluster Deploy
  - What types / how many of each type of nodes?
- NDS Labs Deploy
  - What types / how many of each type of nodes?
    - single node for gui + apiserver?
    - one node each for gui / apiserver?
Ansible Overview
- Playbooks
- Tasks
- Inventories
- Roles
- Groups
Configuration
...
Overview
Ansible configuration is held in deploy-tools/etc/ansible/ansible.cfg
/root contains most of the interesting bits:
- Playbooks - declare order and assignment of tasks / roles, defined elsewhere in /usr/local/lib/ndslabs/ansible/roles
- Tasks / Roles - declare the commands executed by each task / role
- Inventories - declarative file containing the desired configuration
  - Servers - names and quantity of each node type
    - etcd: kubernetes / ndslabs key-value store dedicated node
    - glfs: glusterfs storage nodes
    - master: kubernetes master node
    - k8compute: compute / worker nodes
    - loadbal: load balancer node (requires public ip)
  - Groups
    - cluster1 - groups nodes and quantities
    - openstack - groups all openstack targets together (all nodes for now)
    - coreos - groups all coreos instances together (all nodes for now)
    - publicip - groups together all machines requiring a public ip (only load balancer for now)
  - Configuration
    - GlusterFS bricks / OpenStack volumes
    - OpenStack instance SSH key / image / flavor
- Group Vars - configuration options for each group
  - openstack
  - coreos
  - publicip
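Putting these pieces together, the layout inside the deploy-tools container looks roughly as follows. This is a sketch reconstructed from the paths mentioned above; exact contents may vary:

```
/root/
  inventory/                            # cluster inventories: servers, groups, configuration
  group_vars/                           # per-group variables: openstack, coreos, publicip
  playbooks/
    kubernetes-infrastructure.yml       # assigns roles to groups
/usr/local/lib/ndslabs/ansible/roles/   # role / task definitions
```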
Running Ansible
OpenRC File
...
Map this file with -v path/to/openrc/file:/root/SAVED_AND_SENSITIVE_VOLUME
...
Run deploy-tools
Execute the following command to run the deploy-tools container, substituting in the location of your OpenRC file:
...
This container has ansible, nova, cinder, and other tools necessary to programmatically create and delete machine instances on Nebula.
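As a rough sketch, the invocation has this shape. The image name below is an assumption (not confirmed by this page); use the actual deploy-tools image and substitute your own OpenRC path:

```
docker run -it \
    -v /path/to/openrc:/root/SAVED_AND_SENSITIVE_VOLUME \
    ndslabs/deploy-tools
```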
Source openrc.sh
Now that you're in the container, source the openrc file mapped into /root/SAVED_AND_SENSITIVE_VOLUME
```
source /root/SAVED_AND_SENSITIVE_VOLUME/PROJECT-openrc.sh
```
Build a Cluster Inventory
Copy and modify the example cluster inventory to your liking:
```
cp inventory/example inventory/new-cluster
vi inventory/new-cluster
```
Example
An example inventory is shown and explained in detail below:
```
#
# Servers
#
# These are the target instance names on Nebula.
# The [x:y] notation defines a range, allowing you to specify quantities for each machine type.
#
[etcd]
c1_etcd[1:3]

[glfs]
c1_gfs[1:4]

[master]
c1_master[1:2]

[k8compute]
c1_node[1:3]

[loadbal]
c1_loadbal

#
# Groups - assigned according to roles
#
# The server groups defined above are then further grouped into "cluster1".
# "cluster1" is then assigned the roles/tasks for "openstack" and "coreos"
#
[cluster1:children]
etcd
glfs
master
k8compute
loadbal

# These groups are configured in /root/group_vars
# The groups are assigned roles in /root/playbooks/kubernetes-infrastructure.yml
# The roles themselves are defined in /usr/local/lib/ndslabs/ansible/roles
[openstack:children]
cluster1

[coreos:children]
cluster1

[publicip:children]
loadbal

#
# Cluster-Specific Group Variables / overrides
#
# Configure GlusterFS volumes for desired bricks
# This includes the volume sizes and mount paths necessary for Gluster
#
# Configure "cluster1" group to create or reuse the SSH key called "cluster1"
# Default the "cluster1" group to m1.medium CoreOSAlpha instances
#
[glfs:vars]
vol_size=200
vol_name=brick0
mount_path=/media/brick0
service_name=media-brick0.mount

[cluster1:vars]
key_name=cluster1
# Default image/flavor
image=CoreOSAlpha
flavor=m1.medium
```
Run ansible-playbook
Run the Ansible playbook kubernetes-infrastructure.yml with your modified inventory file.
...
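A typical invocation looks like the following sketch. The inventory name comes from the copy step above; the playbook path is taken from the comments in the example inventory and may differ in your container:

```
ansible-playbook -i inventory/new-cluster /root/playbooks/kubernetes-infrastructure.yml
```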