Overview
Rook is an open-source storage orchestrator that runs distributed storage systems (such as Ceph) on top of Kubernetes, and is only supported on Kubernetes 1.7 or higher.
This document serves as a high-level overview or introduction to some basic concepts and patterns surrounding Rook.
All of this is explained in much greater detail in the official documentation: https://rook.github.io/docs/rook/master/
Source code is also available here: https://github.com/rook/rook
Prerequisites
Minimum Version: Rook supports Kubernetes v1.7 or higher.
If you are using dataDirHostPath to persist Rook data on Kubernetes hosts, make sure each host has at least 5GB of space available on the specified path.
You will also need to set up RBAC, and ensure that the Flex volume plugin has been configured.
Setting up RBAC
On Kubernetes 1.7+, you will need to configure Rook to use RBAC appropriately.
See https://rook.github.io/docs/rook/master/rbac.html
Flexvolume Configuration
The Rook agent requires setup as a Flex volume plugin to manage the storage attachments in your cluster. See the Flex Volume Configuration topic to configure your Kubernetes deployment to load the Rook volume plugin.
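By convention, Flexvolume drivers live under a well-known directory on each node, named `<vendor>~<driver>`; the path below is the upstream Kubernetes default, and may differ on your distribution (some managed distributions use a different, writable directory). If the kubelet uses a non-default path, it is set with the `--volume-plugin-dir` flag:

```
# Default Flexvolume plugin directory (upstream Kubernetes default; verify for your distro).
# The Rook agent installs its driver here, e.g. under rook.io~rook/.
--volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/
```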
Getting Started
Now that we've examined each of the pieces, let's zoom out and see what we can do with the whole cluster.
For the quickest quick start, check out the Rook QuickStart guide: https://rook.github.io/docs/rook/master/quickstart.html
Getting Started without an Existing Kubernetes cluster
The easiest way to deploy a new Kubernetes cluster with Rook support on OpenStack (Nebula / SDSC) is to use the https://github.com/nds-org/kubeadm-terraform repository.
This may work for other cloud providers as well, but has not yet been thoroughly tested.
Getting Started on an Existing Kubernetes cluster
If you’re feeling lucky, a simple Rook cluster can be created with the following kubectl commands. For the more detailed install, skip to the next section to deploy the Rook operator.
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yaml
After the cluster is running, you can create block, object, or file storage to be consumed by other applications in your cluster.
For a more detailed look at the deployment process, see below.
Deploy the Rook Operator
The first step is to deploy the Rook system components, which include the Rook agent running on each node in your cluster as well as the Rook operator pod.
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
# verify the rook-operator and rook-agents pods are in the `Running` state before proceeding
kubectl -n rook-system get pod
You can also deploy the operator with the Rook Helm Chart.
Restart Kubelet (Kubernetes 1.7.x only)
For versions of Kubernetes prior to 1.8, the Kubelet process on all nodes will require a restart after the Rook operator and Rook agents have been deployed. As part of their initial setup, the Rook agents deploy and configure a Flexvolume plugin in order to integrate with Kubernetes’ volume controller framework. In Kubernetes v1.8+, the dynamic Flexvolume plugin discovery will find and initialize our plugin, but in older versions of Kubernetes a manual restart of the Kubelet will be required.
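On most nodes the kubelet is managed by systemd, so the restart described above is typically (the exact command depends on how your distribution runs the kubelet):

```
# Run on every node after the Rook operator and agents are deployed
sudo systemctl restart kubelet
```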
Create a Rook Cluster
Now that the Rook operator and agent pods are running, we can create the Rook cluster. For the cluster to survive reboots, make sure you set the dataDirHostPath
property. For more settings, see the documentation on configuring the cluster.
Save the cluster spec as rook-cluster.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: rook
---
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    useAllDevices: false
    storeConfig:
      storeType: bluestore
      databaseSizeMB: 1024
      journalSizeMB: 1024
Create the cluster:
kubectl create -f rook-cluster.yaml
Use kubectl to list pods in the rook namespace. You should be able to see the following pods once they are all running:
$ kubectl -n rook get pod
NAME                              READY   STATUS    RESTARTS   AGE
rook-ceph-mgr0-1279756402-wc4vt   1/1     Running   0          5m
rook-ceph-mon0-jflt5              1/1     Running   0          6m
rook-ceph-mon1-wkc8p              1/1     Running   0          6m
rook-ceph-mon2-p31dj              1/1     Running   0          6m
rook-ceph-osd-0h6nb               1/1     Running   0          5m
Monitoring Your Rook Cluster
A glimpse into setting up Prometheus for monitoring Rook: https://rook.github.io/docs/rook/master/monitoring.html
Advanced Configuration
Advanced Configuration options are also documented here: https://rook.github.io/docs/rook/master/advanced-configuration.html
- Log Collection
- OSD Information
- Separate Storage Groups
- Configuring Pools
- Custom ceph.conf Settings
- OSD CRUSH Settings
- Phantom OSD Removal
Cluster Teardown
See https://rook.github.io/docs/rook/master/teardown.html for thorough steps on destroying / cleaning up your Rook cluster.
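As a rough sketch only (the linked teardown docs give the authoritative order of operations, and the resource names below assume the example deployment from this guide), cleanup generally means deleting the cluster and operator, then removing the persisted state from each host:

```
# delete the Rook cluster resource, then the operator and its artifacts
kubectl delete -f rook-cluster.yaml
kubectl delete -f rook-operator.yaml
# on each host, remove the persisted state (the path set in dataDirHostPath)
sudo rm -rf /var/lib/rook
```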
Components
Rook runs a number of smaller microservices that run on different nodes in your Kubernetes cluster:
- The Rook Operator + API
- Ceph Managers / Monitors / OSDs
- Rook Agents
The Rook Operator
The Rook operator is a simple container that has all that is needed to bootstrap and monitor the storage cluster.
The operator will start and monitor Ceph monitor pods and a daemonset for the Object Storage Devices (OSDs).
The operator manages custom resource definitions (CRDs) for pools, object stores (S3/Swift), and file systems by initializing the pods and other artifacts necessary to run the services.
The operator will monitor the storage daemons to ensure the cluster is healthy.
The operator will also watch for desired state changes requested by the api service and apply the changes.
The Rook operator also creates the Rook agents as a daemonset, which runs a pod on each node.
Ceph Managers / Monitors / OSDs
The operator will start and monitor Ceph monitor pods and a daemonset for the OSDs, which provides basic Reliable Autonomic Distributed Object Store (RADOS) storage.
The operator will monitor the storage daemons to ensure the cluster is healthy.
Ceph monitors (aka "Ceph mons") will be started or failed over when necessary, and other adjustments are made as the cluster grows or shrinks.
Rook Agents
Each agent is a pod deployed on a different Kubernetes node, which configures a Flexvolume plugin that integrates with Kubernetes’ volume controller framework.
The agent handles all storage operations required on the node, such as attaching network storage devices, mounting volumes, and formatting the filesystem.
Storage
Rook provides three types of storage to the Kubernetes cluster:
- Block Storage: Mount storage to a single pod
- Object Storage: Expose an S3 API to the storage cluster for applications to put and get data that is accessible from inside or outside the Kubernetes cluster
- Shared File System: Mount a file system that can be shared across multiple pods
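To make the block storage type concrete, here is a sketch of a Pool CRD plus a Kubernetes StorageClass that provisions volumes from it (the names replicapool and rook-block are illustrative; see the Rook block storage documentation for the authoritative spec):

```
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block
parameters:
  pool: replicapool
  clusterNamespace: rook
```

A pod can then consume block storage by creating a PersistentVolumeClaim with storageClassName: rook-block.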
Custom Resource Definitions
Rook also allows you to create and manage your storage cluster through custom resource definitions (CRDs). Each type of resource has its own CRD defined.
- Cluster: A Rook cluster provides the basis of the storage platform to serve block, object stores, and shared file systems.
- Pool: A pool manages the backing store for a block store. Pools are also used internally by object and file stores.
- Object Store: An object store exposes storage with an S3-compatible interface.
- File System: A file system provides shared storage for multiple Kubernetes pods.
Shared Storage Example
Shamelessly stolen from https://rook.github.io/docs/rook/master/filesystem.html
Prerequisites
This guide assumes you have created a Rook cluster as explained in the main Kubernetes guide
Multiple File Systems Not Supported
By default, only one shared file system can be created with Rook. Multiple file system support in Ceph is still considered experimental, and can be enabled with the environment variable ROOK_ALLOW_MULTIPLE_FILESYSTEMS defined in rook-operator.yaml.
Please refer to the CephFS experimental features page for more information.
Create the File System
Create the file system by specifying the desired settings for the metadata pool, data pools, and metadata server in the Filesystem CRD. In this example we create the metadata pool with replication of three and a single data pool with erasure coding. For more options, see the documentation on creating shared file systems.
Save this shared file system definition as rook-filesystem.yaml:
apiVersion: rook.io/v1alpha1
kind: Filesystem
metadata:
  name: myfs
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - erasureCoded:
        dataChunks: 2
        codingChunks: 1
  metadataServer:
    activeCount: 1
    activeStandby: true
The Rook operator will create all the pools and other resources necessary to start the service. This may take a minute to complete.
# Create the file system
$ kubectl create -f rook-filesystem.yaml

# To confirm the file system is configured, wait for the mds pods to start
$ kubectl -n rook get pod -l app=rook-ceph-mds
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-7d59fdfcf4-h8kw9   1/1     Running   0          12s
rook-ceph-mds-myfs-7d59fdfcf4-kgkjp   1/1     Running   0          12s
In this example, there is one active MDS instance up, with one MDS instance in standby-replay mode in case of failover. To see detailed status of the file system, start and connect to the Rook toolbox. A new line will be shown with ceph status for the mds service.
$ ceph status
  ...
  services:
    mds: myfs-1/1/1 up {[myfs:0]=mzw58b=up:active}, 1 up:standby-replay
Consume the Shared File System: K8s Registry Sample
As an example, we will start the kube-registry pod with the shared file system as the backing store. Save the following spec as kube-registry.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    k8s-app: kube-registry
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        flexVolume:
          driver: rook.io/rook
          fsType: ceph
          options:
            fsName: myfs # name of the filesystem specified in the filesystem CRD
            clusterNamespace: rook # namespace where the Rook cluster is deployed
            # by default the path is /, but you can override and mount a specific
            # path of the filesystem by using the path attribute
            # path: /some/path/inside/cephfs
You now have a highly available Docker registry with persistent storage.
Kernel Version Requirement
If the Rook cluster has more than one filesystem and the application pod is scheduled to a node with kernel version older than 4.7, inconsistent results may arise since kernels older than 4.7 do not support specifying filesystem namespaces.
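Since this constraint depends on each node's kernel, a quick check can tell you whether a node is safe for multiple filesystems. This is a minimal shell sketch; the version parsing assumes a conventional `x.y.z-…` style `uname -r` output:

```shell
# Print the running kernel version and check whether it is at least 4.7,
# the minimum for specifying CephFS filesystem namespaces.
kernel="$(uname -r)"
echo "Kernel: ${kernel}"
major="${kernel%%.*}"
rest="${kernel#*.}"
minor="${rest%%.*}"
# strip any non-numeric suffix from the minor version (e.g. "4.15.0-20-generic")
minor="${minor%%[!0-9]*}"
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 7 ]; }; then
  echo "OK: kernel supports multiple Rook filesystems"
else
  echo "WARN: kernel older than 4.7; avoid multiple Rook filesystems on this node"
fi
```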