Getting Started
Clone the GlusterFS repo containing the necessary Kubernetes specs:
```
git clone https://github.com/nds-org/gluster.git
cd gluster/
```
Server Setup
Create the gluster-server DaemonSet using kubectl:
...
The steps below only need to be performed from inside a single glusterfs-server container.
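The elided commands above might look like the following sketch. The spec filename and label selector are assumptions, not the repo's actual names, so adjust them to match the specs in the cloned repo.

```shell
# Create the server DaemonSet (filename is an assumption --
# use the actual spec file from the cloned repo):
kubectl create -f gluster-server-ds.yaml

# Verify that one glusterfs-server pod is scheduled per node
# (the label is hypothetical -- check the spec for the real one):
kubectl get pods -o wide -l name=glusterfs-server
```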
Alternative: Raw Docker
```
docker run --name=gfs --net=host --pid=host --privileged \
    -v /dev:/dev \
    -v <ABSOLUTE_PATH_TO_SHARED_DATA>:/var/glfs \
    -v /run:/run \
    -v /:/media/host \
    -it -d gluster:local
```
Getting into a Server Container
Using kubectl, exec into one of the GlusterFS servers:
...
Take note of all node IPs that are running glusterfs-server pods. You will need these IPs to finish configuring GlusterFS.
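As a sketch, the node IPs can be collected and a shell opened in one server pod as follows; the pod name is a placeholder to substitute from your own output.

```shell
# List glusterfs-server pods along with the IP of the node each runs on;
# note these IPs for the peer probe step:
kubectl get pods -o wide

# Exec into one of the server pods (substitute a real pod name):
kubectl exec -it <glusterfs-server-pod-name> -- /bin/bash
```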
Peer Probe
Once inside the gluster server container, perform a peer probe on all of the other gluster nodes.
...
```
root@willis-k8-test-gluster:/# gluster peer probe 192.168.100.156
```
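With several server nodes, the probes can be scripted. The IP list below is a placeholder for the node IPs you noted earlier.

```shell
# Placeholder list -- substitute the IPs of the nodes running
# glusterfs-server pods:
PEER_IPS="192.168.100.156 192.168.100.157"

# Probing is idempotent, so re-running this loop is harmless:
for ip in $PEER_IPS; do
    gluster peer probe "$ip"
done

# Confirm that all peers show as connected:
gluster peer status
```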
Create Volume
Ansible has already created the placeholder directories for the bricks; we just need to create and start a Gluster volume pointing to the brick directories on each node.
...
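The create command for a two-node replicated volume might look like the following sketch. The volume name matches the `ndslabs` volume used in this walkthrough, but the node IPs and brick paths are assumptions to replace with your own.

```shell
# Assumed IPs and brick paths -- point these at the brick
# directories Ansible created on each node:
gluster volume create ndslabs replica 2 \
    192.168.100.156:/path/to/brick0 \
    192.168.100.157:/path/to/brick0
```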
```
root@willis-k8-test-gluster:/# gluster volume list
ndslabs
root@willis-k8-test-gluster:/# gluster volume status
Volume ndslabs is not started
```
Reusing a Volume
Simply add `force` to the end of your `volume create` command to force GlusterFS to reuse the bricks of a volume that is no longer accessible:
...
```
root@willis-k8-test-gluster:/# rm -rf /path/to/brick0
root@willis-k8-test-gluster:/# mkdir -p /path/to/brick0
```
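After wiping and recreating the brick directory as above, the forced re-create might look like this sketch (same assumed IPs and paths as before):

```shell
# `force` tells GlusterFS to reuse brick directories that were
# previously part of a volume:
gluster volume create ndslabs replica 2 \
    192.168.100.156:/path/to/brick0 \
    192.168.100.157:/path/to/brick0 \
    force
```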
Start Volume
Now that we have created our volume, we must start it in order for clients to mount it:
...
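The elided step is the standard start command; a sketch for the `ndslabs` volume created above:

```shell
# Start the volume, then confirm that its bricks are online:
gluster volume start ndslabs
gluster volume status ndslabs
```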
Our volume is now being served out to the cluster over NFS, and we are ready for our clients to mount the volume.
Adding a Brick
Suppose we have a simple replicated gluster volume with 2 bricks and are running low on space, so we want to expand the storage it contains:
...
```
core@workshop1-node1 ~ $ df
Filesystem              1K-blocks      Used Available Use% Mounted on
devtmpfs                 16460056         0  16460056   0% /dev
tmpfs                    16476132         0  16476132   0% /dev/shm
tmpfs                    16476132      1792  16474340   1% /run
tmpfs                    16476132         0  16476132   0% /sys/fs/cgroup
/dev/vda9                38216204    256736  36301120   1% /
/dev/mapper/usr           1007760    639352    316392  67% /usr
tmpfs                    16476132     17140  16458992   1% /tmp
tmpfs                    16476132         0  16476132   0% /media
/dev/vda1                  130798     39292     91506  31% /boot
/dev/vda6                  110576        64    101340   1% /usr/share/oem
/dev/vdb                 41922560   6023732  35898828  15% /var/lib/docker
/dev/vdc                 10475520    626360   9849160   6% /media/storage
/dev/vdd                104806400  49157820  55648580  47% /media/brick0
192.168.100.122:global  314419200  49191424 265227776  16% /var/glfs/global
tmpfs                     3295224         0   3295224   0% /run/user/500
/dev/vde                209612800     33088 209579712   1% /media/brick1
root@workshop1-node1:/# gluster volume info global

Volume Name: global
Type: Distributed-Replicate
Volume ID: ca59a98e-c959-454e-8ac3-9082b0ed2856
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.100.122:/media/brick0/brick
Brick2: 192.168.100.116:/media/brick0/brick
Brick3: 192.168.100.122:/media/brick1/brick
Brick4: 192.168.100.116:/media/brick1/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
```
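With the new /dev/vde disks mounted at /media/brick1 on both nodes, the expansion reflected in the `volume info` output above can be sketched as an `add-brick` followed by a rebalance:

```shell
# Add a second replica pair on the new disks (paths match the
# Brick3/Brick4 entries in the volume info above):
gluster volume add-brick global replica 2 \
    192.168.100.122:/media/brick1/brick \
    192.168.100.116:/media/brick1/brick

# Spread existing data across the new bricks and watch progress:
gluster volume rebalance global start
gluster volume rebalance global status
```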
Client Setup
Create the gluster-client DaemonSet using kubectl:
...
Once each client container starts, it will mount the GlusterFS volume on each compute host using NFS.
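To spot-check a client, you can look for the NFS mount on a compute host; the mount point below matches the `/var/glfs/global` entry from the earlier `df` output, but your volume and path may differ.

```shell
# On a compute host, confirm the GlusterFS volume is mounted:
mount | grep /var/glfs
df -h /var/glfs/global
```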
Testing
Once the clients are online, we can run a simple test of GlusterFS to ensure that it is correctly serving and synchronizing the volume.
...
Create two SSH sessions - one into each compute node (in this case, 192.168.100.65 and 192.168.100.66).
First Session
In one SSH session, run a BusyBox image mounted with our shared volume:
...
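The elided `docker run` can be sketched as follows; the host path is the shared mount used throughout this walkthrough, and the BusyBox image is the stock one.

```shell
# Map the host's GlusterFS mount into a throwaway BusyBox container
# and open a shell in it:
docker run --rm -it -v /var/glfs:/var/glfs busybox sh
```

The write below is then run from inside this container.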
```
echo "testing!" > /var/glfs/ndslabs/test.file
```
Second Session
On the other machine, map the same directory into BusyBox and verify that the changes made on the first host are visible:
...
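A sketch of the verification on the second host; if replication is working, this prints the contents written from the first session.

```shell
# On the second compute node, map the same path into BusyBox and
# read back the file written from the first host:
docker run --rm -v /var/glfs:/var/glfs busybox cat /var/glfs/ndslabs/test.file
```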