The usual patterns should work here: open ports 2049 (nfs), 20048 (mountd), and 111 (rpcbind) in the firewall using OpenStack security groups. Pods can then consume the export with an `nfs` volume:

```yaml
volumes:
  - name: nfs
    nfs:
      server: <NFS_SERVER_IP>
      path: /
```
Following this example, I was able to easily get an NFS server pod running within a Kubernetes 1.9 cluster.
The original example was intended for use with GCE and included some nice features like backing your NFS server with PVCs.
The StorageClasses are specific to GCE, so we do not need to include those. Instead, we rely on our Terraform process mounting an external volume to the storage nodes.
Modifying the above example slightly to work with our Terraform/hostPath process, we have the following set of YAMLs:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      nodeSelector:
        external-storage: "true"
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: nfs-export-fast
      volumes:
        - name: nfs-export-fast
          hostPath:
            path: /data/nfs
```
The following example Pod consumes our in-cluster NFS export. Note that the `server` field is populated with the Kubernetes Service's cluster IP (e.g. from `kubectl get svc`):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        # name must match the volume name below
        - name: nfs
          mountPath: "/usr/share/nginx/html/"
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
  volumes:
    - name: nfs
      nfs:
        # FIXME: use the right name
        #server: nfs-server.default.kube.local
        server: "10.101.9.169"
        path: "/"
        readOnly: false
```
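Rather than hard-coding the Service IP into every Pod spec, the export can also be wrapped in a PersistentVolume and consumed through a PersistentVolumeClaim. The sketch below is not part of the original example: the names (`nfs-pv`, `nfs-pvc`), the 10Gi capacity, and the reuse of the Service IP above are assumptions to adjust for your cluster.

```yaml
# Hypothetical PV/PVC pair wrapping the in-cluster NFS export.
# Names, capacity, and the server IP are illustrative assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: "10.101.9.169"   # the nfs-server Service cluster IP
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the manually created PV, not a dynamic provisioner
  resources:
    requests:
      storage: 10Gi
```

Pods then reference the claim with a `persistentVolumeClaim` volume (`claimName: nfs-pvc`) instead of repeating the raw NFS server address.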