This page is a placeholder for information and processes surrounding Kubernetes (K8s).
A quick-start guide can be found here: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/getting-started-guides/docker.md
The following set of commands can be used to install kubectl on your machine. You may need to change the version number below:
mkdir /home/core/kubectl
cd /home/core/kubectl
curl -L "https://storage.googleapis.com/kubernetes-release/release/v1.1.3/bin/linux/amd64/kubectl" > kubectl
chmod +x kubectl
export PATH=$PATH:`pwd`
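Since the version number in the URL above changes over time, it can help to build the download URL from the release tag. A minimal sketch, assuming the storage.googleapis.com path layout shown above stays stable across versions (`kubectl_url` is a hypothetical helper, not part of any Kubernetes tooling):

```shell
# Hypothetical helper: print the kubectl download URL for a given release tag.
# Assumption: the kubernetes-release bucket layout used above is stable.
kubectl_url() {
  printf 'https://storage.googleapis.com/kubernetes-release/release/%s/bin/linux/amd64/kubectl\n' "$1"
}
```

Usage would then look like `curl -L "$(kubectl_url v1.1.3)" > kubectl`, so only the tag needs to change when upgrading.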
Execute the following commands to obtain the NDS Labs startup scripts:
git clone https://github.com/craig-willis/ndslabs-startup.git
cd ndslabs-startup/
Execute kube-up.sh to start Kubernetes on your single-node setup:
./kube-up.sh
Modify ndslabs/apiserver.yaml and ndslabs/gui.yaml so that they reflect the addresses in your cluster.
Once Kubernetes is up and running, you can run ./ndslabs-up.sh to bring up the GUI and API server:
./ndslabs-up.sh
Run the Ansible deployment playbook(s) to provision all necessary machines and volume(s).
Sometimes, worker nodes will enter a "NotReady" state, which will prevent Kubernetes from scheduling pods on the node.
core@lambert-master1 ~ $ kubectl get nodes
NAME              STATUS     AGE
lambert-gfs1      Ready      2d
lambert-loadbal   Ready      2d
lambert-node1     NotReady   2d
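On a larger cluster it can be handy to pull just the NotReady node names out of that table. A small sketch (`not_ready_nodes` is a hypothetical helper; it only parses the table on stdin, so it can be tried without a live cluster):

```shell
# Hypothetical helper: print the NAME of every node whose STATUS column is
# NotReady. Pipe `kubectl get nodes` into it, e.g.:
#   kubectl get nodes | not_ready_nodes
not_ready_nodes() {
  # NR > 1 skips the header row; $1 is NAME, $2 is STATUS
  awk 'NR > 1 && $2 == "NotReady" { print $1 }'
}
```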
To discover the reason behind the error, you can use kubectl describe:
core@lambert-master1 ~ $ kubectl describe node lambert-node1
Name:              lambert-node1
Labels:            kubernetes.io/hostname=lambert-node1,ndslabs-node-role=compute
CreationTimestamp: Fri, 27 May 2016 20:31:53 +0000
Phase:
Conditions:
  Type       Status   LastHeartbeatTime                LastTransitionTime               Reason             Message
  ----       ------   -----------------                ------------------               ------             -------
  OutOfDisk  Unknown  Sat, 28 May 2016 12:56:01 +0000  Sat, 28 May 2016 12:56:41 +0000  NodeStatusUnknown  Kubelet stopped posting node status.
  Ready      Unknown  Sat, 28 May 2016 12:56:01 +0000  Sat, 28 May 2016 12:56:41 +0000  NodeStatusUnknown  Kubelet stopped posting node status.
Addresses:         192.168.100.253,192.168.100.253
Capacity:
  cpu:    2
  memory: 4051772Ki
  pods:   110
System Info:
  Machine ID:                ef45bc8b101a4ac987d295ac7886e358
  System UUID:               EF45BC8B-101A-4AC9-87D2-95AC7886E358
  Boot ID:                   6a95eea9-99c2-42f8-abe1-0f57545eee7a
  Kernel Version:            4.3.6-coreos
  OS Image:                  CoreOS 899.11.0
  Container Runtime Version: docker://1.9.1
  Kubelet Version:           v1.2.4
  Kube-Proxy Version:        v1.2.4
ExternalID:        lambert-node1
Non-terminated Pods: (1 in total)
  Namespace  Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------  ----                    ------------  ----------  ---------------  -------------
  default    glusterfs-client-8ets6  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type    Reason           Message
  ---------  --------  -----  ----                  -------------  ------  -------          -------
  2d         7s        19526  {controllermanager }                 Normal  DeletingAllPods  Node lambert-node1 event: Deleting all Pods from Node lambert-node1.
We can see from the verbose output above (under "Conditions") that the node seems to have run out of disk space.
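To confirm the diagnosis on the node itself, a quick check of filesystem usage helps. A sketch (`disk_usage_pct` is a hypothetical helper, and `/var/lib/docker` is an assumption about where Docker stores images on these CoreOS nodes):

```shell
# Hypothetical helper: print the Use% (as a bare number) of the filesystem
# reported on the last line of POSIX `df -P` output, e.g.:
#   df -P /var/lib/docker | disk_usage_pct
disk_usage_pct() {
  # In awk's END block the fields of the last record are still set;
  # $5 is the Capacity/Use% column in `df -P` output.
  awk 'END { sub(/%/, "", $5); print $5 }'
}
```

A value near 100 on the Docker filesystem would match the OutOfDisk condition shown above.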
In this case, I cleared out several large Docker images on the node to free up disk space.
After freeing up some disk space, we can SSH into the worker node and restart the kubelet there to bring it back into the cluster:
core@lambert-deploy-tools ~ $ sudo ssh -i ~/private/lambert-test.pem core@192.168.100.253
core@lambert-node1 ~ $ sudo systemctl restart kubelet
core@lambert-node1 ~ $ exit
Back on the master, you should see shortly that the node has righted itself:
core@lambert-master1 ~ $ kubectl describe node lambert-node1
Name:              lambert-node1
Labels:            kubernetes.io/hostname=lambert-node1,ndslabs-node-role=compute
CreationTimestamp: Fri, 27 May 2016 20:31:53 +0000
Phase:
Conditions:
  Type       Status  LastHeartbeatTime                LastTransitionTime               Reason                    Message
  ----       ------  -----------------                ------------------               ------                    -------
  OutOfDisk  False   Mon, 30 May 2016 19:20:45 +0000  Mon, 30 May 2016 19:20:35 +0000  KubeletHasSufficientDisk  kubelet has sufficient disk space available
  Ready      True    Mon, 30 May 2016 19:20:45 +0000  Mon, 30 May 2016 19:20:45 +0000  KubeletReady              kubelet is posting ready status
Addresses:         192.168.100.253,192.168.100.253
Capacity:
  cpu:    2
  memory: 4051772Ki
  pods:   110
System Info:
  Machine ID:                ef45bc8b101a4ac987d295ac7886e358
  System UUID:               EF45BC8B-101A-4AC9-87D2-95AC7886E358
  Boot ID:                   6a95eea9-99c2-42f8-abe1-0f57545eee7a
  Kernel Version:            4.3.6-coreos
  OS Image:                  CoreOS 899.11.0
  Container Runtime Version: docker://1.9.1
  Kubelet Version:           v1.2.4
  Kube-Proxy Version:        v1.2.4
ExternalID:        lambert-node1
Non-terminated Pods: (2 in total)
  Namespace    Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                 ------------  ----------  ---------------  -------------
  default      glusterfs-client-8ets6               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  fluentd-elasticsearch-lambert-node1  100m (5%)     0 (0%)      200Mi (5%)       200Mi (5%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  100m (5%)     0 (0%)      200Mi (5%)       200Mi (5%)
Events:
  FirstSeen  LastSeen  Count  From                     SubobjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                     -------------  ------  ------                 -------
  15s        15s       1      {kubelet lambert-node1}                 Normal  Starting               Starting kubelet.
  15s        14s       2      {kubelet lambert-node1}                 Normal  NodeHasSufficientDisk  Node lambert-node1 status is now: NodeHasSufficientDisk
  14s        14s       1      {kubelet lambert-node1}                 Normal  NodeNotReady           Node lambert-node1 status is now: NodeNotReady
  2d         13s       19579  {controllermanager }                    Normal  DeletingAllPods        Node lambert-node1 event: Deleting all Pods from Node lambert-node1.
  4s         4s        1      {kubelet lambert-node1}                 Normal  NodeReady              Node lambert-node1 status is now: NodeReady

core@lambert-master1 ~ $ kubectl get nodes
NAME              STATUS    AGE
lambert-gfs1      Ready     2d
lambert-loadbal   Ready     2d
lambert-node1     Ready     2d
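Rather than re-running `kubectl get nodes` by hand after the restart, the recovery can be polled. A sketch (`node_ready` is a hypothetical helper; it reads the `kubectl get nodes` table on stdin, so the parsing can be exercised without a live cluster):

```shell
# Hypothetical helper: succeed (exit 0) only if the named node shows Ready
# in `kubectl get nodes` output read from stdin, e.g.:
#   kubectl get nodes | node_ready lambert-node1
node_ready() {
  # $1 is NAME, $2 is STATUS; NR > 1 skips the header row
  awk -v n="$1" 'NR > 1 && $1 == n && $2 == "Ready" { found = 1 } END { exit !found }'
}
```

On the master, a polling loop might then look like `until kubectl get nodes | node_ready lambert-node1; do sleep 5; done`.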