
This page documents the preliminary requirements and design for volume management in the NDS Labs Workbench system.

See also https://github.com/nds-org/gluster

Requirements

  • Kubernetes support: Since the workbench is based on Kubernetes, storage options must be supported (or supportable) by Kubernetes; in practice, this means the storage can be mounted into a Pod/container (see the Pod volume sketch after this list).
  • Distributed: Storage solution must support distribution across multiple nodes.
  • Quotas: Storage solution must support per-project quotas.
  • Scalability: Storage solution must be scalable.
  • Reliability: Storage solution must be reliable, or support a balance in the reliability/performance trade-off.
  • Performance: Storage solution must be performant, or support a balance in the performance/reliability trade-off.
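
As a concrete illustration of the Kubernetes requirement, the sketch below mounts an existing GlusterFS volume into a Pod using Kubernetes' built-in glusterfs volume plugin. The Endpoints object name (glusterfs-cluster), Gluster volume name (ndslabs-vol), and mount path are placeholders, not the actual Workbench configuration.

# Sketch: mount an existing GlusterFS volume into a Pod.
# Names (glusterfs-cluster, ndslabs-vol) are placeholders.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gluster-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: glusterfsvol
      mountPath: /mnt/glusterfs
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing the GlusterFS server IPs
      path: ndslabs-vol              # name of an existing Gluster volume
      readOnly: false
EOF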

Storage options

We have explored the following storage options:

Storage          | Pros                                                                                   | Cons
Local storage    | Supported by Kubernetes; simple; useful for single-node or development configurations | Not distributed; not scalable; does not support quotas
NFS              | Supported by Kubernetes                                                                |
OpenStack Cinder | Supportable by Kubernetes (with changes)                                               | Requires a change to Kubernetes
GlusterFS        | Supported by Kubernetes                                                                |

Creating a GlusterFS cluster

The basic steps for creating a GlusterFS cluster on Nebula (sketched in the shell outline after this list) include:

  1. Allocate GlusterFS server VMs
  2. Allocate OpenStack volumes, one volume per GFS node (e.g., 500GB per node)
  3. For each VM/volume
    1. Initialize physical volume
    2. Create volume group, logical volumes, and bricks.
    3. Mount bricks to server
  4. Start the gluster server container
    1. Initialize gluster FS servers
    2. Create gluster volumes
  5. Mount volumes to GFS VM
  6. Configure storage: physical volumes, volume group, logical volumes, bricks
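
A minimal shell sketch of steps 2–4 for a single GlusterFS node follows. The device path (/dev/vdb), volume group, brick, and volume names (vg_gluster, brick1, ndslabs-vol), sizes, and peer hostnames are illustrative assumptions rather than the actual Nebula configuration.

# Step 3: initialize the attached OpenStack volume and prepare a brick
pvcreate /dev/vdb                           # physical volume on the attached 500GB OpenStack volume
vgcreate vg_gluster /dev/vdb                # volume group
lvcreate -L 450G -n brick1 vg_gluster       # logical volume for the brick
mkfs.xfs /dev/vg_gluster/brick1             # format the brick
mkdir -p /data/brick1
mount /dev/vg_gluster/brick1 /data/brick1   # mount the brick on the server

# Step 4: form the trusted pool and create/start a replicated Gluster volume
gluster peer probe gfs-node2                # repeat for each additional GlusterFS server
gluster volume create ndslabs-vol replica 2 \
    gfs-node1:/data/brick1/vol gfs-node2:/data/brick1/vol
gluster volume start ndslabs-vol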

Managing volumes for projects:

Currently, we are planning to allocate one volume per project. Volumes will be expandable and will be deleted when the project is deleted (see the command sketch after this list).

  • Create volume:
    • gluster volume create
    • gluster volume start
  • Delete volume:
    • gluster volume stop
    • gluster volume delete
  • Expand volume:
    • ?
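
A minimal sketch of this per-project lifecycle using the gluster CLI is shown below. The project volume name (projectA-vol) and brick paths are placeholders, and the expansion step (add-brick followed by rebalance) is one possible answer to the open question above rather than a settled design decision.

# Create and start a project volume (placeholder names)
gluster volume create projectA-vol replica 2 \
    gfs-node1:/data/brick1/projectA gfs-node2:/data/brick1/projectA
gluster volume start projectA-vol

# Optionally enforce the per-project quota requirement
gluster volume quota projectA-vol enable
gluster volume quota projectA-vol limit-usage / 500GB

# Delete the volume when the project is deleted
gluster volume stop projectA-vol
gluster volume delete projectA-vol

# Expand the volume (one option): add a replica pair of bricks, then rebalance
gluster volume add-brick projectA-vol \
    gfs-node3:/data/brick1/projectA gfs-node4:/data/brick1/projectA
gluster volume rebalance projectA-vol start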

Expanding cluster-wide storage:

Cluster-wide storage is expanded by allocating, formatting, and configuring additional physical volumes.
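
As a rough sketch (assuming OpenStack CLI access to Nebula and the LVM layout used above), adding capacity on an existing GlusterFS node might look like the following; volume names, sizes, and the device path (/dev/vdc) are illustrative assumptions.

# Allocate and attach an additional OpenStack volume to a GlusterFS VM
openstack volume create --size 500 gfs-node1-vol2
openstack server add volume gfs-node1 gfs-node1-vol2

# Initialize the new device and fold it into the existing volume group
pvcreate /dev/vdc
vgextend vg_gluster /dev/vdc

# Either grow an existing brick (repeat on each replica)...
lvextend -L +450G /dev/vg_gluster/brick1
xfs_growfs /data/brick1
# ...or create an additional brick for new Gluster volumes
# lvcreate -L 450G -n brick2 vg_gluster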

Use cases:

  • Site administrator can deploy (and re-deploy) cluster storage system
  • Cluster administrator can create, expand, and delete per-project volumes
  • Project administrators can deploy services that use project storage
  • Project administrators can view project storage quotas and usage
  • Project administrators can request storage increases