...

Our volume is now being served out to the cluster over NFS, and we are ready for our clients to mount the volume.

Adding a Brick

Suppose we have a simple replicated gluster volume with 2 bricks, and we want to expand the storage it contains:

Code Block
root@workshop1-node1:/# gluster volume info global
 
Volume Name: global
Type: Replicate
Volume ID: ca59a98e-c959-454e-8ac3-9082b0ed2856
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.100.122:/media/brick0/brick
Brick2: 192.168.100.116:/media/brick0/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on


Provision and attach a new OpenStack volume to your existing instance.
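
If you are provisioning the volume from the command line, a sketch using the OpenStack CLI might look like the following (the volume name, size, and instance name are placeholders for your environment):

Code Block
# Create a new volume and attach it to the instance (name and size are examples)
$ openstack volume create --size 200 brick1-volume
$ openstack server add volume workshop1-node1 brick1-volume

Once the new device appears on the instance (here it shows up as /dev/vde), format it with XFS: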

Code Block
core@workshop1-node1 ~ $ sudo mkfs -t xfs /dev/vde
meta-data=/dev/vde               isize=256    agcount=4, agsize=13107200 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=52428800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=25600, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

You will then need to build a systemd *.mount unit file as below. Note that systemd requires the unit file's name to match its mount path, so the unit for /media/brick1 must be named media-brick1.mount:

Code Block
$ vi media-brick1.mount
[Unit]
Description=Mount OS_DEVICE on MOUNT_PATH
After=local-fs.target

[Mount]
What=OS_DEVICE
Where=MOUNT_PATH
Type=FS_TYPE
Options=noatime

[Install]
WantedBy=multi-user.target

where:

  • OS_DEVICE is the device node in /dev for your new raw volume (e.g. /dev/vde)
  • MOUNT_PATH is the target path where the volume should be mounted (e.g. /media/brick1)
  • FS_TYPE is the filesystem type the volume was formatted with (e.g. xfs)
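
For example, with the values used in this walkthrough the completed unit file would look like this:

Code Block
[Unit]
Description=Mount /dev/vde on /media/brick1
After=local-fs.target

[Mount]
What=/dev/vde
Where=/media/brick1
Type=xfs
Options=noatime

[Install]
WantedBy=multi-user.target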

Place this file in /etc/systemd/system/

Finally, start and enable the mount unit so the volume is mounted on CoreOS now and remounted automatically after a restart:

Code Block
sudo mv media-brick1.mount /etc/systemd/system/media-brick1.mount
sudo systemctl daemon-reload
sudo systemctl start media-brick1.mount
sudo systemctl enable media-brick1.mount
sudo systemctl unmask media-brick1.mount
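
To confirm the brick is mounted before moving on, you can check the unit and the mount point (a quick sanity check; the unit and path names match the example above):

Code Block
sudo systemctl status media-brick1.mount
df -h /media/brick1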


You will need to perform the above steps on each of your GLFS servers before continuing.

Now you'll need to exec into one of the GLFS server pods.
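
A sketch of getting a shell in a server pod with kubectl (the pod name is a placeholder; substitute one of your GlusterFS server pods):

Code Block
kubectl exec -it <glusterfs-server-pod> -- /bin/bash

From inside the pod, perform the following: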

Code Block
# Peer probe the other IP in the cluster (gluster service IP also seems to work)
$ gluster peer probe 10.254.202.236 
peer probe: success. Host 192.168.100.1 port 24007 already in peer list


# This one fails because we did not include our new brick's second replica
$ gluster volume add-brick global 192.168.100.122:/media/brick1                                                                                             
volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2


# This one fails because we need a sub-directory of the mount point
$ gluster volume add-brick global 192.168.100.122:/media/brick1 192.168.100.116:/media/brick1 
volume add-brick: failed: The brick 192.168.100.116:/media/brick1 is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.


# This one works! :D
$ gluster volume add-brick global 192.168.100.122:/media/brick1/brick 192.168.100.116:/media/brick1/brick
volume add-brick: success

And now we can see that the new bricks have been added as a second replica pair, converting the volume from Replicate to Distributed-Replicate:

Code Block
root@workshop1-node1:/# gluster volume info global
 
Volume Name: global
Type: Distributed-Replicate
Volume ID: ca59a98e-c959-454e-8ac3-9082b0ed2856
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.100.122:/media/brick0/brick
Brick2: 192.168.100.116:/media/brick0/brick
Brick3: 192.168.100.122:/media/brick1/brick
Brick4: 192.168.100.116:/media/brick1/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on


Client Setup

Create the gluster-client DaemonSet using kubectl:

...