Clone the GlusterFS repo containing the necessary Kubernetes specs:
git clone https://github.com/nds-org/gluster.git
cd gluster/
Create the gluster-server DaemonSet using kubectl:
kubectl create -f kubernetes/gluster-server-ds.yaml
This spec runs the ndslabs/gluster container in "server mode" on Kubernetes nodes labeled with ndslabs-role=storage.
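If your storage nodes do not yet carry that label, the DaemonSet will not schedule any pods on them. A minimal sketch of applying the label (the node names below are this example cluster's storage IPs; the echo leaves the loop as a dry run that only prints the commands):

```shell
# Hypothetical list of nodes that should host Gluster bricks; substitute your own.
STORAGE_NODES="192.168.100.89 192.168.100.156"
for node in $STORAGE_NODES; do
  # "echo" prints the command for illustration; drop it to actually apply the label.
  echo kubectl label nodes "$node" ndslabs-role=storage
done
```
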
Once all of the server containers are up, we must tell them to cooperate with each other using the gluster CLI.
The steps below only need to be performed from inside a single glusterfs-server container.
docker run --net=host --pid=host --privileged -v /dev:/dev -v /mnt/dataset:/var/glfs -v /run:/run -v /:/media/host -it -d gluster:local
Using kubectl, exec into one of the GlusterFS servers:
core@willis8-k8-test-1 ~ $ kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       NODE
coffee-rc-4u3pb              1/1       Running   0          12d       192.168.100.65
coffee-rc-5m4t6              1/1       Running   0          12d       192.168.100.65
default-http-backend-y98iw   1/1       Running   0          22h       192.168.100.64
glusterfs-server-hh5rm       1/1       Running   0          5d        192.168.100.156
glusterfs-server-zoefs       1/1       Running   0          5d        192.168.100.89
ndslabs-apiserver-zqgj8      1/1       Running   0          1d        192.168.100.66
ndslabs-gui-p0hjh            1/1       Running   0          23h       192.168.100.66
nginx-ilb-rc-x853y           1/1       Running   0          6d        192.168.100.64
tea-rc-8saiu                 1/1       Running   0          12d       192.168.100.65
tea-rc-t403k                 1/1       Running   0          12d       192.168.100.65
core@willis8-k8-test-1 ~ $ kubectl exec -it glusterfs-server-zoefs bash
Take note of all node IPs that are running glusterfs-server pods. You will need these IPs to finish configuring GlusterFS.
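One possible way to collect those IPs automatically is to filter the pod listing by name and keep the last column. The sample text below stands in for real `kubectl get pods -o wide` output so the filter can be shown on its own; on the master you would pipe the live output through the same awk expression:

```shell
# Sample of the relevant rows from "kubectl get pods -o wide";
# the last column is the node IP.
sample='glusterfs-server-hh5rm 1/1 Running 0 5d 192.168.100.156
glusterfs-server-zoefs 1/1 Running 0 5d 192.168.100.89'

# Keep only glusterfs-server rows and print the node IP column.
server_ips=$(echo "$sample" | awk '/^glusterfs-server/ {print $NF}')
echo "$server_ips"

# On the master: kubectl get pods -o wide | awk '/^glusterfs-server/ {print $NF}'
```
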
Once inside of the gluster server container, perform a peer probe on all other gluster nodes.
Do not probe the host's own IP.
For example, since we are executing from 192.168.100.89, we must probe our other storage node:
root@willis-k8-test-gluster:/# gluster peer probe 192.168.100.156
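With more than two storage nodes, the probes can be scripted. A sketch, using this example cluster's IPs (the echo leaves it as a dry run that only prints the probe commands, so the self-probe skip is visible):

```shell
MY_IP=192.168.100.89                      # the node we are executing from
ALL_IPS="192.168.100.89 192.168.100.156"  # every storage node IP

for ip in $ALL_IPS; do
  # Skip our own IP (a node must not probe itself); probe everyone else.
  [ "$ip" = "$MY_IP" ] && continue
  echo gluster peer probe "$ip"
done
```
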
Ansible has already created the placeholder directories for the bricks, so we just need to create and start a Gluster volume pointing to the brick directories on each node.
This is done using gluster volume create, as outlined below:
root@willis-k8-test-gluster:/# gluster volume create ndslabs transport tcp 192.168.100.89:/var/glfs/brick0 192.168.100.156:/var/glfs/ndslabs/brick0
NOTE: Our Ansible playbook mounts GlusterFS bricks at /media/brick0. We will need to update this in the future to be consistent throughout.
To be sure the volume was created successfully, you can run the following commands and see your new volume:
root@willis-k8-test-gluster:/# gluster volume list
ndslabs
root@willis-k8-test-gluster:/# gluster volume status
Volume ndslabs is not started
If volume creation fails because a brick is already part of a volume that is no longer accessible, simply add force to the end of your volume create command to make GlusterFS reuse it:
root@willis-k8-test-gluster:/# gluster volume create ndslabs transport tcp 192.168.100.89:/media/brick0/brick/ndslabs 192.168.100.156:/media/brick0/brick/ndslabs
volume create: ndslabs: failed: /media/brick0/brick/ndslabs is already part of a volume
root@willis-k8-test-gluster:/# gluster volume create ndslabs transport tcp 192.168.100.89:/media/brick0/brick/ndslabs 192.168.100.156:/media/brick0/brick/ndslabs force
volume create: ndslabs: success: please start the volume to access data
The alternative solution would be to delete and recreate the brick's mount point:
root@willis-k8-test-gluster:/# rm -rf /path/to/brick0
root@willis-k8-test-gluster:/# mkdir -p /path/to/brick0
Now that we have created our volume, we must start it in order for clients to mount it:
root@willis-k8-test-gluster:/# gluster volume start ndslabs
volume start: ndslabs: success
Our volume is now being served out to the cluster over NFS, and we are ready for our clients to mount the volume.
Create the gluster-client DaemonSet using kubectl:
kubectl create -f kubernetes/gluster-client-ds.yaml
This spec runs the ndslabs/gluster container in "client mode" on Kubernetes nodes labeled with ndslabs-role=compute.
Once each client container starts, it will mount the GlusterFS volume to each compute host using NFS.
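A quick way to confirm the mount on a compute host is to look for the volume in /proc/mounts. The sample line below stands in for a real entry so the check can be shown on its own (the mount source and options are illustrative):

```shell
# Sample /proc/mounts line for an NFS-mounted Gluster volume (illustrative values).
sample='192.168.100.89:/ndslabs /var/glfs nfs rw,vers=3,addr=192.168.100.89 0 0'

# On a real compute node you would run: grep ' /var/glfs nfs ' /proc/mounts
if echo "$sample" | grep -q ' /var/glfs nfs '; then
  echo "ndslabs volume is mounted at /var/glfs"
fi
```
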
Once the clients are online, we can run a simple test of GlusterFS to ensure that it is correctly serving and synchronizing the volume.
From the Kubernetes master, run the following command to see which nodes are running the glusterfs-client containers:
core@willis8-k8-test-1 ~ $ kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       NODE
coffee-rc-4u3pb              1/1       Running   0          12d       192.168.100.65
coffee-rc-5m4t6              1/1       Running   0          12d       192.168.100.65
default-http-backend-y98iw   1/1       Running   0          23h       192.168.100.64
glusterfs-client-4hm9y       1/1       Running   0          5d        192.168.100.65
glusterfs-client-6c12y       1/1       Running   0          5d        192.168.100.66
glusterfs-server-hh5rm       1/1       Running   0          5d        192.168.100.156
glusterfs-server-zoefs       1/1       Running   0          5d        192.168.100.89
ndslabs-apiserver-zqgj8      1/1       Running   0          1d        192.168.100.66
ndslabs-gui-p0hjh            1/1       Running   0          23h       192.168.100.66
nginx-ilb-rc-x853y           1/1       Running   0          6d        192.168.100.64
tea-rc-8saiu                 1/1       Running   0          12d       192.168.100.65
tea-rc-t403k                 1/1       Running   0          12d       192.168.100.65
Create two SSH sessions - one into each compute node (in this case, 192.168.100.65 and 192.168.100.66).
In one SSH session, run a BusyBox image mounted with our shared volume:
docker run -v /var/glfs:/var/glfs --rm -it busybox
Inside of the BusyBox container, create a test file:
echo "testing!" > /var/glfs/ndslabs/test.file
On the other machine, map the same directory into BusyBox to verify that the change made on the first host is visible:
docker run -v /var/glfs:/var/glfs --rm -it busybox
Running an ls on /var/glfs/ndslabs/ should show the test file created on the other node:
ls -al /var/glfs/ndslabs
This proves that we can mount via NFS onto each node, map the NFS mount into containers, and allow those containers to ingest or modify the data from the NFS mount.