
Overview

Rook is an open-source storage orchestrator that runs distributed storage (in this case Ceph) on top of Kubernetes, and is only supported on Kubernetes v1.7 or higher.

This document serves as a high-level introduction to the basic concepts and patterns surrounding Rook.

All of this is explained in much greater detail in the official Rook documentation: https://rook.github.io/docs/rook/master/

Source code is also available here: https://github.com/rook/rook

Prerequisites

Minimum Version: Kubernetes v1.7 or higher is supported by Rook.

You will need to choose a hostPath for the dataDirHostPath (ensure that this has at least 5GB of free space).

You will also need to set up RBAC, and ensure that the Flex volume plugin has been configured.

Set the dataDirHostPath

If you are using dataDirHostPath to persist Rook data on Kubernetes hosts, make sure your host has at least 5GB of space available on the specified path.
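For example, assuming the /var/lib/rook path used later in this document, a quick check on each host would be:

# Verify there is at least 5GB free on the filesystem backing the dataDirHostPath
df -h /var/lib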

Setting up RBAC

On Kubernetes 1.7+, you will need to configure Rook to use RBAC appropriately.

See https://rook.github.io/docs/rook/master/rbac.html

Flex Volume Configuration

The Rook agent requires setup as a Flex volume plugin to manage the storage attachments in your cluster. See the Flex Volume Configuration topic to configure your Kubernetes deployment to load the Rook volume plugin.
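As a point of reference, Flexvolume drivers are installed under a vendor~driver subdirectory of the kubelet's plugin directory. Assuming the default plugin path (some distributions override it), a quick way to confirm the Rook driver landed on a node is:

# Default kubelet Flexvolume plugin directory (an assumption; kubelet's --volume-plugin-dir may override it)
ls /usr/libexec/kubernetes/kubelet-plugins/volume/exec/rook.io~rook/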

Getting Started

Let's see what we can do with a whole cluster; each of the individual pieces is examined in more detail under Components further below.

For the quickest quick start, check out the Rook QuickStart guide: https://rook.github.io/docs/rook/master/quickstart.html

Getting Started without an Existing Kubernetes cluster

The easiest way to deploy a new Kubernetes cluster with Rook support on OpenStack (Nebula / SDSC) is to use the https://github.com/nds-org/kubeadm-terraform repository.

This may work for other cloud providers as well, but has not yet been thoroughly tested.

Getting Started on an Existing Kubernetes cluster

If you’re feeling lucky, a simple Rook cluster can be created with the following kubectl commands. For the more detailed install, skip to the next section to deploy the Rook operator.

kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yaml

After the cluster is running, you can create block, object, or file storage to be consumed by other applications in your cluster.

For a more detailed look at the deployment process, see below.

Deploy the Rook Operator

The first step is to deploy the Rook system components, which include the Rook agent running on each node in your cluster as well as the Rook operator pod.

kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
# verify the rook-operator and rook-agents pods are in the `Running` state before proceeding
kubectl -n rook-system get pod

You can also deploy the operator with the Rook Helm Chart.
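For Helm, the install looks roughly like the following (a sketch; the chart repository URL and release channel are assumptions and should be checked against the Rook Helm chart documentation for your Rook version):

helm repo add rook-alpha https://charts.rook.io/alpha
helm install rook-alpha/rook --namespace rook-system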

Restart Kubelet (Kubernetes 1.7.x only)

For versions of Kubernetes prior to 1.8, the Kubelet process on all nodes will require a restart after the Rook operator and Rook agents have been deployed. As part of their initial setup, the Rook agents deploy and configure a Flexvolume plugin in order to integrate with Kubernetes’ volume controller framework. In Kubernetes v1.8+, the dynamic Flexvolume plugin discovery will find and initialize our plugin, but in older versions of Kubernetes a manual restart of the Kubelet will be required.
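Assuming a systemd-managed kubelet (as on the kubeadm-based cluster shown later in this page), the restart on each node is simply:

# Kubernetes 1.7.x only: restart the kubelet after the Rook operator and agents are deployed
sudo systemctl restart kubelet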

Create a Rook Cluster

Now that the Rook operator and agent pods are running, we can create the Rook cluster. For the cluster to survive reboots, make sure you set the dataDirHostPath property. For more settings, see the documentation on configuring the cluster.

Save the cluster spec as rook-cluster.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: rook
---
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    useAllDevices: false
    storeConfig:
      storeType: bluestore
      databaseSizeMB: 1024
      journalSizeMB: 1024

Create the cluster:

kubectl create -f rook-cluster.yaml


Use kubectl to list pods in the rook namespace. You should be able to see the following pods once they are all running:

$ kubectl -n rook get pod
NAME                              READY     STATUS    RESTARTS   AGE
rook-ceph-mgr0-1279756402-wc4vt   1/1       Running   0          5m
rook-ceph-mon0-jflt5              1/1       Running   0          6m
rook-ceph-mon1-wkc8p              1/1       Running   0          6m
rook-ceph-mon2-p31dj              1/1       Running   0          6m
rook-ceph-osd-0h6nb               1/1       Running   0          5m

Monitoring Your Rook Cluster

A glimpse into setting up Prometheus for monitoring Rook: https://rook.github.io/docs/rook/master/monitoring.html

Advanced Configuration

Advanced Configuration options are also documented here: https://rook.github.io/docs/rook/master/advanced-configuration.html

Debugging

For common issues, see https://github.com/rook/rook/blob/master/Documentation/common-issues.md

For more help debugging, see https://github.com/rook/rook/blob/master/Documentation/toolbox.md

Cluster Teardown

See https://rook.github.io/docs/rook/master/teardown.html for thorough steps on destroying / cleaning up your Rook cluster.

Components

Rook is composed of a number of smaller services that run on different nodes in your Kubernetes cluster:

  • The Rook Operator + API
  • Ceph Managers / Monitors / OSDs
  • Rook Agents

The Rook Operator

The Rook operator is a simple container that has all that is needed to bootstrap and monitor the storage cluster.

The operator manages custom resource definitions (CRDs) for pools, object stores (S3/Swift), and file systems by initializing the pods and other artifacts necessary to run those services.

It also watches for desired state changes requested by the API service and applies those changes.

Finally, the operator creates the Rook agents as a daemonset, which runs a pod on each node.

Ceph Managers / Monitors / OSDs

The operator starts and monitors the Ceph monitor pods and a daemonset for the OSDs, which together provide the basic Reliable Autonomic Distributed Object Store (RADOS) storage.

The operator will monitor the storage daemons to ensure the cluster is healthy.

Ceph monitors (aka "Ceph mons") will be started or failed over when necessary, and other adjustments are made as the cluster grows or shrinks. 

Rook Agents

Each agent is a pod deployed on a different Kubernetes node, which configures a Flexvolume plugin that integrates with Kubernetes’ volume controller framework.

The agent handles all storage operations required on the node, such as attaching network storage devices, mounting volumes, and formatting the filesystem.

Storage

Rook provides three types of storage to the Kubernetes cluster:

  • Block Storage: Mount storage to a single pod
  • Object Storage: Expose an S3 API to the storage cluster for applications to put and get data that is accessible from inside or outside the Kubernetes cluster
  • Shared File System: Mount a file system that can be shared across multiple pods

Custom Resource Definitions

Rook also allows you to create and manage your storage cluster through custom resource definitions (CRDs). Each type of resource has its own CRD defined.

  • Cluster: A Rook cluster provides the basis of the storage platform to serve block, object stores, and shared file systems.
  • Pool: A pool manages the backing store for a block store. Pools are also used internally by object and file stores.
  • Object Store: An object store exposes storage with an S3-compatible interface.
  • File System: A file system provides shared storage for multiple Kubernetes pods.
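For illustration, a minimal sketch of a block-storage Pool plus a matching StorageClass under the rook.io/v1alpha1 API used elsewhere on this page might look like the following (resource and parameter names, including whether the storage class takes clusterNamespace or clusterName, vary between Rook versions and should be checked against the official block storage docs):

apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block
parameters:
  pool: replicapool
  clusterNamespace: rook

A PersistentVolumeClaim that references the rook-block storage class would then be dynamically provisioned as a Ceph block device by Rook.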

Shared Storage Example

Shamelessly stolen from https://rook.github.io/docs/rook/master/filesystem.html

Prerequisites

This guide assumes you have created a Rook cluster as explained in the main Kubernetes guide (see the sections above).

Multiple File Systems Not Supported

By default only one shared file system can be created with Rook. Multiple file system support in Ceph is still considered experimental and can be enabled with the environment variable ROOK_ALLOW_MULTIPLE_FILESYSTEMS defined in rook-operator.yaml.

Please refer to the CephFS experimental features page for more information.
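For example, this is just an extra environment variable on the operator container (a minimal sketch showing only the relevant env entry):

# In rook-operator.yaml, under the rook-operator container:
env:
- name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS
  value: "true"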

Create the File System

Create the file system by specifying the desired settings for the metadata pool, data pools, and metadata server in the Filesystem CRD. In this example we create the metadata pool with replication of three and a single data pool with erasure coding. For more options, see the documentation on creating shared file systems.

Save this shared file system definition as rook-filesystem.yaml:

apiVersion: rook.io/v1alpha1
kind: Filesystem
metadata:
  name: myfs
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - erasureCoded:
       dataChunks: 2
       codingChunks: 1
  metadataServer:
    activeCount: 1
    activeStandby: true

The Rook operator will create all the pools and other resources necessary to start the service. This may take a minute to complete.

# Create the file system
$ kubectl create -f rook-filesystem.yaml


# To confirm the file system is configured, wait for the mds pods to start
$ kubectl -n rook get pod -l app=rook-ceph-mds
NAME                                      READY     STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-7d59fdfcf4-h8kw9       1/1       Running   0          12s
rook-ceph-mds-myfs-7d59fdfcf4-kgkjp       1/1       Running   0          12s

In this example, there is one active MDS instance up, with a second MDS instance in standby-replay mode in case of failover.

To see detailed status of the file system, start and connect to the Rook toolbox (see the sketch below) and run ceph status; a new line will be shown for the mds service.
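One way to get a shell with the ceph CLI is the Rook toolbox pod (a sketch; the manifest path and pod name are assumptions based on the upstream toolbox documentation linked under Debugging above, and should be verified there):

kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-tools.yaml
kubectl -n rook exec -it rook-tools -- bash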

$ ceph status                                                                                                                                              
  ...
  services:
    mds: myfs-1/1/1 up {[myfs:0]=mzw58b=up:active}, 1 up:standby-replay

Consume the Shared File System: K8s Registry Sample

As an example (adapted from the upstream kube-registry sample), we will start a demo deployment whose pods use the shared file system as their backing store. Save the following spec as kube-registry.yaml:

apiVersion: v1
kind: Service
metadata:
  name: deployment-demo
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    demo: deployment
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  selector:
    matchLabels:
      demo: deployment
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        demo: deployment
        version: v1
    spec:
      containers:
      - name: busybox
        image: busybox
        command: [ "sh", "-c", "while true; do echo $(hostname) v1 > /data/index.html; sleep 60; done" ]
        volumeMounts:
        - name: content
          mountPath: /data
      - name: nginx
        image: nginx
        volumeMounts:
          - name: content
            mountPath: /usr/share/nginx/html
            readOnly: true
      volumes:
      - name: content
        flexVolume:
          driver: rook.io/rook
          fsType: ceph
          options:
            fsName: myfs # name of the filesystem specified in the filesystem CRD.
            clusterNamespace: rook # namespace where the Rook cluster is deployed
            clusterName: rook # name of the Rook cluster (see the note below)
            # by default the path is /, but you can override and mount a specific path of the filesystem by using the path attribute
            # path: /some/path/inside/cephfs

NOTE: I had to explicitly specify clusterName in the YAML above... newer versions of Rook will fall back to clusterNamespace
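Create the service and deployment in the usual way:

# Create the demo service and deployment backed by the shared filesystem
kubectl create -f kube-registry.yaml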

Kernel Version Requirement

If the Rook cluster has more than one filesystem and the application pod is scheduled to a node with kernel version older than 4.7, inconsistent results may arise since kernels older than 4.7 do not support specifying filesystem namespaces.
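A quick way to check this on a given node is to look at its kernel release:

# On each node (e.g. over ssh), verify the kernel is at least 4.7 before relying on multiple filesystems
uname -r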

Testing Shared Storage

After creating our above example, we should now have 2 pods each with 2 containers running on 2 separate nodes:

ubuntu@mldev-master:~$ kubectl get pods -o wide
NAME                              READY     STATUS    RESTARTS   AGE       IP            NODE
deployment-demo-c59b896c8-8jph6   2/2       Running   0          33m       10.244.1.6    mldev-storage0
deployment-demo-c59b896c8-fnp49   2/2       Running   0          33m       10.244.3.7    mldev-worker0
rook-agent-8c74k                  1/1       Running   0          4h        192.168.0.3   mldev-storage0
rook-agent-bl5sr                  1/1       Running   0          4h        192.168.0.4   mldev-worker0
rook-agent-clxfl                  1/1       Running   0          4h        192.168.0.5   mldev-storage1
rook-agent-gll69                  1/1       Running   0          4h        192.168.0.6   mldev-master
rook-operator-7db5d7b9b8-svmfk    1/1       Running   0          4h        10.244.0.5    mldev-master

The nginx containers mount the shared filesystem read-only into /usr/share/nginx/html/

The busybox containers mount the shared filesystem read-write into /data/

To test that everything is working, we can exec into one busybox container and modify a file:

# Exec into one of our busybox containers in a pod
ubuntu@mldev-master:~$ kubectl exec -it deployment-demo-c59b896c8-fnp49 -c busybox -- sh


# Create a new file
/ # ls -al /data/
total 4
drwxr-xr-x    1 root     root             0 May 22 21:28 .
drwxr-xr-x    1 root     root          4096 May 22 21:26 ..
-rw-r--r--    1 root     root            35 May 22 22:03 index.html
/ # echo '<div>Hello, World!</div>' > /data/testing.html


# Ensure that the file was created successfully
/ # cat /data/testing.html 
<div>Hello, World!</div>

Make sure that this appeared in the nginx container within the same pod:

# Exec into one of the same pod's nginx container (as a sanity check)
ubuntu@mldev-master:~$ kubectl exec -it deployment-demo-c59b896c8-fnp49 -c nginx -- sh


# Ensure that the file we created previously exists
/ # ls -al /usr/share/nginx/html/
total 5
drwxr-xr-x    1 root     root             0 May 22 21:28 .
drwxr-xr-x    1 root     root          4096 May 22 21:26 ..
-rw-r--r--    1 root     root            35 May 22 22:03 index.html
-rw-r--r--    1 root     root            25 May 22 21:28 testing.html


# Ensure that the file has the correct contents
/ # cat /usr/share/nginx/html/testing.html 
<div>Hello, World!</div>

Perform the same steps in both containers in the other pod:

# Verify the same file contents in both containers of the other pod (on another node)
ubuntu@mldev-master:~$ kubectl exec -it deployment-demo-c59b896c8-8jph6 -c busybox -- cat /data/testing.html 
<div>Hello, World!</div>
ubuntu@mldev-master:~$ kubectl exec -it deployment-demo-c59b896c8-8jph6 -c nginx -- cat /usr/share/nginx/html/testing.html 
<div>Hello, World!</div>

If all of the file contents match, then congratulations!

You have just set up your first shared filesystem under Rook!

Under the Hood

For more information on the low-level processes involved in the above example, see https://github.com/rook/rook/blob/master/design/filesystem.md

After running our above example, we can SSH into the storage0 and worker0 nodes to get a better sense of where Rook stores its data.

Our current confusion seems to come from some hard-coded values in Zonca's deployment of rook-cluster.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: rook
---
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  versionTag: v0.6.2
  dataDirHostPath: /var/lib/rook    <------ this seems to be important
  storage:
    useAllNodes: true
    useAllDevices: false
    storeConfig:
      storeType: bluestore
      databaseSizeMB: 1024
      journalSizeMB: 1024
    directories:
    - path: "/vol_b"

The dataDirHostPath above is the same one mentioned in the quick start and at the top of this document - this seems to tell Rook where on the host it should persist its configuration.

The directories section is supposed to list the paths that will be included in the storage cluster. (Note that using two directories on the same physical device can cause a negative performance impact.)

Investigating Storage directories

Checking the logs for one of the Rook agents, we can see a success message that shows us where the data really lives:

# Check logs of rook-agent running on mldev-storage0
ubuntu@mldev-master:~$ kubectl logs -f rook-agent-8c74k
2018-05-22 17:37:46.340923 I | rook: starting Rook v0.6.2 with arguments '/usr/local/bin/rook agent'
2018-05-22 17:37:46.340990 I | rook: flag values: --help=false, --log-level=INFO
2018-05-22 17:37:46.341634 I | rook: starting rook agent
2018-05-22 17:37:47.963857 I | exec: Running command: modinfo -F parm rbd
2018-05-22 17:37:48.451205 I | exec: Running command: modprobe rbd single_major=Y
2018-05-22 17:37:48.945393 I | rook-flexvolume: Rook Flexvolume configured
2018-05-22 17:37:48.945572 I | rook-flexvolume: Listening on unix socket for Kubernetes volume attach commands.
2018-05-22 17:37:48.947679 I | opkit: start watching cluster resource in all namespaces at v1alpha1
2018-05-22 21:03:02.504533 I | rook-flexdriver: mounting ceph filesystem myfs on /var/lib/kubelet/pods/83ad3107-5e03-11e8-b20b-fa163e9f32d5/volumes/rook.io~rook/content
2018-05-22 21:03:02.589501 I | rook-flexdriver: mounting ceph filesystem myfs on /var/lib/kubelet/pods/83b0ed53-5e03-11e8-b20b-fa163e9f32d5/volumes/rook.io~rook/content
2018-05-22 21:03:03.084507 I | rook-flexdriver: mounting ceph filesystem myfs on /var/lib/kubelet/pods/83ad3107-5e03-11e8-b20b-fa163e9f32d5/volumes/rook.io~rook/content
2018-05-22 21:03:03.196658 I | rook-flexdriver: mounting ceph filesystem myfs on /var/lib/kubelet/pods/83b0ed53-5e03-11e8-b20b-fa163e9f32d5/volumes/rook.io~rook/content
2018-05-22 21:03:04.188182 I | rook-flexdriver: mounting ceph filesystem myfs on /var/lib/kubelet/pods/83ad3107-5e03-11e8-b20b-fa163e9f32d5/volumes/rook.io~rook/content
...   ...   ...   ...   ...   ...   ...   ...   ...   ...
2018-05-22 21:26:46.887549 I | cephmon: parsing mon endpoints: rook-ceph-mon0=10.101.163.226:6790,rook-ceph-mon1=10.107.75.148:6790,rook-ceph-mon2=10.101.87.159:6790
2018-05-22 21:26:46.887596 I | op-mon: loaded: maxMonID=2, mons=map[rook-ceph-mon0:0xc42028e980 rook-ceph-mon1:0xc42028e9c0 rook-ceph-mon2:0xc42028ea20]
2018-05-22 21:26:46.890128 I | rook-flexdriver: WARNING: The node kernel version is 4.4.0-31-generic, which do not support multiple ceph filesystems. The kernel version has to be at least 4.7. If you have multiple ceph filesystems, the result could be inconsistent
2018-05-22 21:26:46.890236 I | rook-flexdriver: mounting ceph filesystem myfs on 10.101.163.226:6790,10.107.75.148:6790,10.101.87.159:6790:/ to /var/lib/kubelet/pods/d40d5673-5e06-11e8-b20b-fa163e9f32d5/volumes/rook.io~rook/content
2018-05-22 21:26:49.446669 I | rook-flexdriver: 
2018-05-22 21:26:49.446832 I | rook-flexdriver: ceph filesystem myfs has been attached and mounted


# SSH into mldev-storage0, elevate privileges, and check the specified sub-folder
ubuntu@mldev-master:~$ ssh ubuntu@192.168.0.3
ubuntu@mldev-storage0:~$ sudo su
root@mldev-storage0:/home/ubuntu# ls -al /var/lib/kubelet/pods/d40d5673-5e06-11e8-b20b-fa163e9f32d5/volumes/rook.io~rook/content
total 5
drwxr-xr-x 1 root root    0 May 22 21:28 .
drwxr-x--- 3 root root 4096 May 22 21:26 ..
-rw-r--r-- 1 root root   35 May 22 22:33 index.html
-rw-r--r-- 1 root root   25 May 22 21:28 testing.html

Obviously this is not where we want the shared filesystem data stored long-term, so I'll need to figure out why these files are persisted into /var/lib/kubelet and not into the directories specified in the Cluster configuration.

Digging Deeper into dataDirHostPath

Checking the /var/lib/rook directory, we see a few sub-directories:

root@mldev-master:/var/lib/rook# ls -al
total 20
drwxr-xr-x  5 root root 4096 May 22 17:40 .
drwxr-xr-x 46 root root 4096 May 22 17:38 ..
drwxr--r--  3 root root 4096 May 22 17:41 osd1
drwxr--r--  2 root root 4096 May 22 17:40 rook
drwxr--r--  3 root root 4096 May 22 17:38 rook-ceph-mon0


root@mldev-master:/var/lib/rook# ls -al rook
total 16
drwxr--r-- 2 root root 4096 May 22 17:40 .
drwxr-xr-x 5 root root 4096 May 22 17:40 ..
-rw-r--r-- 1 root root  168 May 22 17:40 client.admin.keyring
-rw-r--r-- 1 root root 1123 May 22 17:40 rook.config

root@mldev-master:/var/lib/rook# ls -al rook-ceph-mon0/
total 24
drwxr--r-- 3 root root 4096 May 22 17:38 .
drwxr-xr-x 5 root root 4096 May 22 17:40 ..
drwxr--r-- 3 root root 4096 May 22 17:38 data
-rw-r--r-- 1 root root  248 May 22 17:38 keyring
-rw-r--r-- 1 root root  210 May 22 17:38 monmap
-rw-r--r-- 1 root root 1072 May 22 17:38 rook.config
srwxr-xr-x 1 root root    0 May 22 17:38 rook-mon.rook-ceph-mon0.asok


root@mldev-master:/var/lib/rook# ls -al osd1
total 7896
drwxr--r-- 3 root root        4096 May 22 17:41 .
drwxr-xr-x 5 root root        4096 May 22 17:40 ..
lrwxrwxrwx 1 root root          34 May 22 17:40 block -> /var/lib/rook/osd1/bluestore-block
lrwxrwxrwx 1 root root          31 May 22 17:40 block.db -> /var/lib/rook/osd1/bluestore-db
lrwxrwxrwx 1 root root          32 May 22 17:40 block.wal -> /var/lib/rook/osd1/bluestore-wal
-rw-r--r-- 1 root root           2 May 22 17:40 bluefs
-rw-r--r-- 1 root root 74883726950 May 22 22:13 bluestore-block
-rw-r--r-- 1 root root  1073741824 May 22 17:41 bluestore-db
-rw-r--r-- 1 root root   603979776 May 22 22:14 bluestore-wal
-rw-r--r-- 1 root root          37 May 22 17:41 ceph_fsid
-rw-r--r-- 1 root root          37 May 22 17:40 fsid
-rw-r--r-- 1 root root          56 May 22 17:40 keyring
-rw-r--r-- 1 root root           8 May 22 17:40 kv_backend
-rw-r--r-- 1 root root          21 May 22 17:41 magic
-rw-r--r-- 1 root root           4 May 22 17:40 mkfs_done
-rw-r--r-- 1 root root           6 May 22 17:41 ready
-rw-r--r-- 1 root root        1693 May 22 17:40 rook.config
srwxr-xr-x 1 root root           0 May 22 17:41 rook-osd.1.asok
drwxr--r-- 2 root root        4096 May 22 17:40 tmp
-rw-r--r-- 1 root root          10 May 22 17:40 type
-rw-r--r-- 1 root root           2 May 22 17:41 whoami

As you can see, these data and metadata files are not human-readable on disk and would likely need to be un-mangled by Rook to properly perform a backup.

Checking the kubelet logs...

Digging into the systemd logs for the kubelet, we can see it's complaining about the volume configuration:

ubuntu@mldev-master:~$ ssh ubuntu@192.168.0.4
ubuntu@mldev-worker0:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2018-05-22 17:36:36 UTC; 21h ago
     Docs: http://kubernetes.io/docs/
 Main PID: 4607 (kubelet)
    Tasks: 18
   Memory: 50.8M
      CPU: 27min 25.718s
   CGroup: /system.slice/kubelet.service
           └─4607 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin

May 23 15:00:57 mldev-worker0 kubelet[4607]: E0523 15:00:57.308626    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:03:57 mldev-worker0 kubelet[4607]: E0523 15:03:57.344832    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:06:57 mldev-worker0 kubelet[4607]: E0523 15:06:57.376002    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:09:57 mldev-worker0 kubelet[4607]: E0523 15:09:57.424020    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:12:57 mldev-worker0 kubelet[4607]: E0523 15:12:57.476398    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:15:57 mldev-worker0 kubelet[4607]: E0523 15:15:57.511746    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:18:57 mldev-worker0 kubelet[4607]: E0523 15:18:57.545539    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:21:57 mldev-worker0 kubelet[4607]: E0523 15:21:57.602083    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:24:57 mldev-worker0 kubelet[4607]: E0523 15:24:57.660774    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched
May 23 15:27:57 mldev-worker0 kubelet[4607]: E0523 15:27:57.717357    4607 reconciler.go:376] Could not construct volume information: no volume plugin matched

This is likely a reference to the Flex Volume Configuration steps that I breezed over, assuming they had been handled already by Terraform.

Examining our kubelet configuration on the worker node, we see that there is no reference to the --volume-plugin-dir mentioned in the Rook prerequisites:

root@mldev-worker0:/etc/systemd/system/kubelet.service.d# cat 10-kubeadm.conf 
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

Several upstream GitHub issues appear to confirm that the error message above stems from a misconfigured --volume-plugin-dir.
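As a sketch of the kind of fix the Flex Volume Configuration docs describe (the plugin directory below is an assumption and must match whatever directory the Rook agent installs its rook.io~rook driver into on your nodes), an extra kubelet argument can be added to the drop-in above, followed by a kubelet restart on every node:

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
# Add/adjust the extra args so kubelet searches the Flexvolume directory used by the Rook agent
Environment="KUBELET_EXTRA_ARGS=--volume-plugin-dir=/var/lib/kubelet/volumeplugins"

# Then, on each node:
sudo systemctl daemon-reload
sudo systemctl restart kubelet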
