Once again, we're talking about the storage model in Workbench. As of today, we're still running a very old Kubernetes (1.5.x) deployed via our obscure deploy-tools Ansible process, with a home-grown, in-cluster GlusterFS mounted to worker nodes and pods via hostPath. A long-standing feature/requirement for Workbench has been the idea of a shared home directory across multiple pods/nodes for each user, which requires the ability to mount the filesystem read/write in many containers. The Kubernetes storage architecture, particularly its support for persistent volume claims, is geared toward read-write-once volumes; read-write-many access depends on what the underlying volume plugin can provide.
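In Kubernetes terms, the shared-home requirement translates to a claim with the ReadWriteMany access mode, which only some storage backends can satisfy. A minimal sketch (the claim name and size are hypothetical):

```yaml
# Hypothetical PVC for one user's shared home directory.
# ReadWriteMany is the crux: the backing storage (GlusterFS,
# CephFS, NFS, ...) must allow read/write mounts from many
# nodes at once, which block-backed volumes generally don't.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-home-alice
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```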

With the advent of the Zonca method for deploying JupyterHub over Rook, we revisited Rook (starting with 0.6). At first this looked quite promising – a straightforward Helm-based install provided near-immediate support for PVCs (for the single-use volume scenario). Shared filesystem support proved less exciting. We spent cycles troubleshooting what amounts to an unclear setting in the examples: the metadataPool replication count needs to match the number of storage nodes. There's also the 4.7 kernel requirement – reasonable in itself – but multiple shared filesystems are still experimental. After this initial investigation, we're back to the question of whether these complex methods are really any better than a plain NFS fileserver.
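For reference, here's roughly what a Rook 0.6 shared filesystem definition looks like, with the setting that tripped us up called out. This is a sketch based on the upstream examples; the name is a placeholder, and the replication size of 3 assumes a three-storage-node cluster:

```yaml
# Rook 0.6 shared filesystem (illustrative sketch).
apiVersion: rook.io/v1alpha1
kind: Filesystem
metadata:
  name: sharedfs        # hypothetical name
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3           # needs to match the number of storage nodes
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
```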

Here's what we know:


Getting Rook running everywhere:
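(A sketch of the install path, per the Rook 0.6-era Helm chart docs – the chart repo URL may differ for other versions, and the release name and namespace here are our own choices. The chart installs the operator, which then watches for cluster and filesystem definitions like the one above.)

```sh
# Add the alpha-era Rook chart repo and install the operator.
helm repo add rook-alpha https://charts.rook.io/alpha
helm install rook-alpha/rook --name rook --namespace rook-system
```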


Using NFS everywhere:
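For comparison, the NFS route needs nothing cluster-side beyond a PersistentVolume pointing at the fileserver, and ReadWriteMany is supported out of the box. A sketch with a hypothetical server and export path:

```yaml
# NFS-backed PV/PVC pair; server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-home
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.org
    path: /export/home
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-home
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```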