This guide walks you through setting up GlusterFS backed by Kubernetes Persistent Volume Claims.
Table of Contents

- Get Started on GCE
- More details: Labs Workbench on GKE
...
- git clone https://github.com/bodom0015/gluster -b NDS-785 && cd gluster/templates/
- kubectl create -f claim1.json -f claim2.json
- kubectl get pvc,pv
- kubectl create -f gluster.svc.json -f gluster.rc.yaml
- export ENDPOINT_1=$(kubectl get ep glusterfs-cluster | grep -v ENDPOINTS | awk '{print $2}' | awk 'BEGIN { FS=","; }{print $1}' | awk 'BEGIN { FS=":"; }{print $1}')
- export ENDPOINT_2=$(kubectl get ep glusterfs-cluster | grep -v ENDPOINTS | awk '{print $2}' | awk 'BEGIN { FS=","; }{print $2}' | awk 'BEGIN { FS=":"; }{print $1}')
- kubectl exec -it $(kubectl get pod | grep glfs-server-1 | awk '{print $1}') -- sh
- gluster peer probe <endpoint 2 IP from above>
- gluster volume create $VOLNAME $VOLSPEC <endpoint 1 IP from above>:/media/brick0 <endpoint 2 IP from above>:/media/brick0 (the pod spec below assumes $VOLNAME is `global`)
- gluster volume start global
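The `awk` chain used to export `ENDPOINT_1` and `ENDPOINT_2` above can be sanity-checked without a cluster. The sketch below feeds the same pipeline a hypothetical `kubectl get ep glusterfs-cluster` output (the IPs `10.0.0.5`/`10.0.0.6` and port are made up for illustration):

```shell
# Hypothetical `kubectl get ep glusterfs-cluster` output
EP_OUTPUT='NAME                ENDPOINTS               AGE
glusterfs-cluster   10.0.0.5:1,10.0.0.6:1   2m'

# Same pipeline as above, fed from the sample instead of kubectl:
# drop the header, take the ENDPOINTS column, split on "," to pick
# an endpoint, then split on ":" to strip the port
ENDPOINT_1=$(echo "$EP_OUTPUT" | grep -v ENDPOINTS | awk '{print $2}' \
  | awk 'BEGIN { FS=","; }{print $1}' | awk 'BEGIN { FS=":"; }{print $1}')
ENDPOINT_2=$(echo "$EP_OUTPUT" | grep -v ENDPOINTS | awk '{print $2}' \
  | awk 'BEGIN { FS=","; }{print $2}' | awk 'BEGIN { FS=":"; }{print $1}')

echo "$ENDPOINT_1"   # -> 10.0.0.5
echo "$ENDPOINT_2"   # -> 10.0.0.6
```

These are the bare IPs of the two GlusterFS server pods, which is exactly the form `gluster peer probe` and the brick arguments to `gluster volume create` expect.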
...
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: "glfs"
      mountPath: "/var/glfs/global"
    imagePullPolicy: IfNotPresent
    name: busybox
  volumes:
  - name: glfs
    glusterfs:
      endpoints: glusterfs-cluster
      path: global
  restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: "glfs"
      mountPath: "/var/glfs/global"
    imagePullPolicy: IfNotPresent
    name: busybox
  volumes:
  - name: glfs
    glusterfs:
      endpoints: glusterfs-cluster
      path: global
  restartPolicy: Always
```
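To confirm the two pods really share the GlusterFS volume, a quick smoke test is to write a file through one pod and read it back through the other. This is a sketch, not a verified session: the manifest filename `busybox-pods.yaml` and the test file `/var/glfs/global/hello.txt` are arbitrary names chosen for illustration.

```shell
# Create both pods (assumes the manifest above was saved as busybox-pods.yaml)
kubectl create -f busybox-pods.yaml

# Wait until both pods report Running
kubectl get pod busybox1 busybox2

# Write a file through busybox1...
kubectl exec busybox1 -- sh -c 'echo hello > /var/glfs/global/hello.txt'

# ...and read it back through busybox2; both pods mount the same
# GlusterFS volume ("global"), so the file should appear here too
kubectl exec busybox2 -- cat /var/glfs/global/hello.txt
```

If the `cat` prints the file written by the first pod, the shared volume is working end to end.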
...