Once again, we're talking about the storage model in Workbench (NDS-775). As of today, we're still running a very old Kubernetes (1.5.x), deployed via our obscure deploy-tools Ansible process, with a home-grown, in-cluster GlusterFS mounted to worker nodes and pods via hostPath. A long-standing feature/requirement for Workbench has been a shared home directory across multiple pods/nodes for each user, which requires the ability to mount the filesystem read/write in many containers. The Kubernetes storage architecture, particularly persistent volume claim support, is generally built around ReadWriteOnce volumes.
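To make the distinction concrete: a shared home directory needs a ReadWriteMany claim, which is just an accessModes setting on the PVC, but whether it actually binds depends on what the underlying provisioner supports. A minimal sketch using the Kubernetes Python client, assuming a hypothetical storage class named "rook-shared" and a "workbench" namespace (both placeholders, not values from our deployment):

from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster code would use
# config.load_incluster_config() instead).
config.load_kube_config()

# A PVC requesting shared (ReadWriteMany) access. Most provisioners only
# honor ReadWriteOnce; the storage class below is a placeholder.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-home"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="rook-shared",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="workbench", body=pvc
)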

With the advent of the Zonca method for deploying JupyterHub over Rook, we revisited Rook (starting with 0.6). At first this looked quite promising: a straightforward Helm-based install provided near-immediate support for PVCs (for the single-use volume scenario). Shared filesystem support proved less exciting. We spent cycles troubleshooting what amounts to an unclear setting in the examples (the metadataPool replication size needs to match the number of storage nodes). There's also the 4.7 kernel requirement, which is a good thing, but multiple shared filesystems are still experimental. After this initial investigation we're back to the question of whether these complex methods are really any better than an NFS fileserver.
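For reference, the setting that tripped us up lives in the Rook shared filesystem custom resource. A rough sketch of creating one through the Kubernetes Python client follows; the group/version/kind and field names are taken from the Rook 0.x examples and may differ between releases, and the replication size of 3 is just an assumption standing in for a three-node storage cluster.

from kubernetes import client, config

config.load_kube_config()

# Rook 0.x-style shared filesystem definition. The metadataPool replicated
# size is the value that has to line up with the number of storage nodes.
# The group/version/plural below follow the old rook.io/v1alpha1 examples
# and should be checked against the CRDs installed by your Rook release.
filesystem = {
    "apiVersion": "rook.io/v1alpha1",
    "kind": "Filesystem",
    "metadata": {"name": "sharedfs", "namespace": "rook"},
    "spec": {
        "metadataPool": {"replicated": {"size": 3}},   # assumes 3 storage nodes
        "dataPools": [{"replicated": {"size": 3}}],
        "metadataServer": {"activeCount": 1, "activeStandby": True},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="rook.io", version="v1alpha1", namespace="rook",
    plural="filesystems", body=filesystem,
)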

Here's what we know


A few notes:

  • We have Rook working via the Zonca method effectively for JupyterHub
  • We now have Rook shared storage via flexVolume, thanks to Mike's work and the Rook flexVolume plugin (see the sketch after this list)
    • But this caused some hair pulling, and the realization that under the covers Ceph is perhaps more complex than Gluster
  • We know that AWS EFS supports NFSv4 (and probably SMB)
    • EFS appears to be the only ReadWriteMany storage option with PVC support
  • We know that GCE file servers support NFS, SMB, and Gluster
  • We know that Azure Files (azureFile volumes) supports SMB
  • We've discussed two broad options
    • Try to get Rook running everywhere (OpenStack, AWS, GCE, Azure)
    • Replace the current GlusterFS with NFS and rely on existing file server implementations (see the NFS sketch after this list)
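For the flexVolume bullet above, the shared filesystem gets mounted into a pod through a flexVolume source that names the Rook driver. A sketch with the Python client is below; the driver string and the "fsName" / "clusterNamespace" option keys follow the Rook 0.x flexVolume examples, and the image name is a placeholder, so treat all of these as assumptions rather than our exact settings.

from kubernetes import client

# A volume definition referencing the Rook flexVolume driver. The driver
# name and option keys come from the Rook 0.x docs and may differ in other
# releases.
shared_home = client.V1Volume(
    name="shared-home",
    flex_volume=client.V1FlexVolumeSource(
        driver="rook.io/rook",
        fs_type="ceph",
        options={"fsName": "sharedfs", "clusterNamespace": "rook"},
    ),
)

mount = client.V1VolumeMount(name="shared-home", mount_path="/home")

container = client.V1Container(
    name="workbench",
    image="ndslabs/workbench:latest",   # placeholder image name
    volume_mounts=[mount],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="workbench-example"),
    spec=client.V1PodSpec(containers=[container], volumes=[shared_home]),
)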
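The NFS option in the second bullet would look roughly like the following: a statically provisioned ReadWriteMany PersistentVolume pointing at an existing file server. The server name and export path here are placeholders.

from kubernetes import client, config

config.load_kube_config()

# A statically provisioned NFS volume. NFS supports ReadWriteMany natively,
# so an existing file server can back shared home directories without a
# clustered filesystem. Server and path below are placeholders.
nfs_pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="workbench-home"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteMany"],
        persistent_volume_reclaim_policy="Retain",
        nfs=client.V1NFSVolumeSource(
            server="fileserver.example.org", path="/export/home"
        ),
    ),
)

client.CoreV1Api().create_persistent_volume(body=nfs_pv)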


Getting Rook running everywhere

...
