Summary
Overview
This course session provides a hands-on, step-by-step guide to setting up a Kubernetes cluster from scratch: installing the container runtime (containerd) and the Kubernetes tooling (kubeadm, kubelet, kubectl), configuring master and worker nodes, joining workers to the cluster, applying a CNI network plugin (Weave Net), and managing resources with namespaces and resource quotas. The session includes extensive troubleshooting of common setup errors, use of shell aliases for command efficiency, and practical exercises on creating and inspecting pods and namespaces. The goal is to equip learners with end-to-end cluster-deployment skills and foundational Kubernetes resource management.
Topic (Timeline)
1. Initial Setup and Node Access [00:00:01 - 00:02:09]
- Learners are guided to exit current SSH sessions and log into a larger machine via SSH using a key file (`ssh -i ip.pem username@ip`).
- Navigates to the `/home/student/dm_details` directory to access lab materials.
- Emphasizes correct SSH connection to the target machine with the proper key and IP.
2. Master Node Preparation: Package Installation and Script Setup [00:02:30 - 00:09:24]
- Instructions to run `sudo su -` to elevate privileges.
- Learners are directed to Module 2 Labs to access Lab 1 documentation.
- System updates (`apt-get update`) and installation of required utilities (e.g., `curl`, `gnupg`, `lsb-release`) to enable Kubernetes repository access.
- Creation of a shell script (`*.sh`) to automate installation of containerd, kubeadm, kubelet, and kubectl.
- Script content includes: disabling swap, adding Docker's GPG key, configuring the Kubernetes APT repo, and installing packages.
- Script permissions are set using `chmod 700 script.sh` to allow execution.
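The script's contents can be sketched as follows. This is a minimal sketch only: the `setup.sh` name, the `sed` edit of `/etc/fstab`, and the exact package list are assumptions, and the GPG key and repository URLs from the session's script are deliberately omitted here.

```shell
# Minimal sketch of the lab's install script; setup.sh is a stand-in name.
cat > setup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# kubelet refuses to run while swap is active
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# utilities needed to add signed APT repositories
apt-get update
apt-get install -y curl gnupg lsb-release

# (Docker GPG key + Kubernetes APT repo configuration goes here; URLs omitted)

# container runtime and Kubernetes components
apt-get install -y containerd kubeadm kubelet kubectl
EOF
chmod 700 setup.sh   # same permission step as in the session
```

Writing the commands into a file and running `chmod 700` mirrors the session's workflow: the same script is then copied to each worker node so all nodes end up with identical versions.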
3. Component Installation and Version Verification [00:09:28 - 00:13:15]
- Script execution (`./script.sh`) initiates installation of containerd, kubeadm, and kubelet.
- Post-installation, versions are verified using `containerd --version`, `kubeadm version`, and `kubectl version`.
- Learners are reminded that kubectl is not installed on worker nodes.
- Hostname configuration using `hostnamectl set-hostname master` (on the master) and `worker-one`, `worker-two` (on the workers).
4. Worker Node Configuration and Synchronization [00:13:18 - 00:22:32]
- Replication of master node setup on worker nodes: SSH into each worker, set hostname, run the same script, and verify component versions.
- Troubleshooting of failed joins due to incorrect hostname settings or residual state.
- Emphasis on consistency: all nodes must have correct hostnames and identical software versions.
5. Cluster Initialization and Network Plugin Application [00:23:10 - 00:29:45]
- On the master node, `kubeadm init` is executed to initialize the control plane.
- Output includes a join token and command for worker nodes — learners are instructed to copy the last two lines to a notepad.
- Post-init, learners run three setup commands to configure kubectl for the user: `mkdir -p $HOME/.kube`, `sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`, and `sudo chown $(id -u):$(id -g) $HOME/.kube/config`.
- The Weave Net CNI plugin is applied using `kubectl apply -f https://.../weave-net.yaml`.
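The post-init steps can be gathered into one small script for reference. A sketch, not the session's file: `post-init.sh` is a stand-in name, and `WEAVE_URL` is a placeholder that must be set to the real Weave Net manifest URL from the lab document.

```shell
# Post-init steps from the session, collected into one file.
cat > post-init.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# let the regular user drive kubectl with the admin kubeconfig
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# apply the Weave Net CNI manifest; WEAVE_URL must hold the real URL
kubectl apply -f "$WEAVE_URL"
EOF
chmod 700 post-init.sh
```

Until the kubeconfig is copied this way, `kubectl` on the master fails with a connection-refused error, which is one of the most common post-init stumbles.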
6. Worker Node Joining and Cluster Validation [00:30:02 - 00:31:32]
- Worker nodes execute the join command using the token from the master.
- `kubectl get nodes` on the master confirms all nodes are in the "Ready" state.
- Troubleshooting of failed joins due to prior misconfigurations, including resetting nodes with `kubeadm reset --force`.
7. Troubleshooting and Cluster Reset [00:31:32 - 00:55:35]
- Repeated failures on worker nodes (e.g., CrashLoopBackOff) lead to a full cluster reset: `kubeadm reset --force` on the master and all workers.
- Re-running `kubeadm init` on the master.
- Re-applying Weave Net.
- Re-joining workers with a new token.
- Learners are warned against running `kubeadm join` on the master node.
- Use of `kubectl get nodes` and `kubectl get pods -A` to validate cluster health.
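The recovery sequence can be written down as an annotated checklist script. This is a sketch only: the join-token values are placeholders, and the file is meant to be read step by step rather than executed end-to-end on one machine, since it mixes master and worker steps.

```shell
cat > reset-checklist.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# 1. On the master AND every worker: wipe prior cluster state
sudo kubeadm reset --force

# 2. On the master only (never run `kubeadm join` here): re-initialize
sudo kubeadm init
#    ...then redo the $HOME/.kube/config setup and re-apply Weave Net

# 3. On each worker: paste the fresh join command printed by init, e.g.
#      sudo kubeadm join <master-ip>:6443 --token <token> \
#          --discovery-token-ca-cert-hash sha256:<hash>

# 4. Back on the master: confirm health
kubectl get nodes
kubectl get pods -A
EOF
chmod 700 reset-checklist.sh
```

The key point the session stresses is ordering: reset everywhere first, re-init the master, then join workers with the new token, never the stale one.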
8. Pod Creation and Basic kubectl Operations [00:55:35 - 01:04:41]
- Creation of test pods using `kubectl run example --image=nginx`.
- Use of `kubectl get pods -o wide` to view pod status, IP, and node assignment.
- Debugging with `kubectl describe pod <pod-name>` to identify issues (e.g., image pull errors, resource constraints).
- Introduction of `alias k='kubectl'` to simplify command usage.
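The alias works in any Bash session; the only subtlety (an addition here, not covered in the session) is that scripts need `shopt -s expand_aliases` before aliases expand, while interactive shells expand them by default.

```shell
# Define the shorthand used throughout the session;
# interactive shells expand aliases by default,
# scripts additionally need: shopt -s expand_aliases
alias k='kubectl'

# inspect the definition
alias k
```

Adding the line to `~/.bashrc` makes the shortcut persist across sessions, so `k get pods -o wide` always expands to `kubectl get pods -o wide`.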
9. Kubernetes Objects and API Exploration [01:04:41 - 01:11:34]
- Exploration of Kubernetes API resources using `kubectl api-resources`.
- Use of `kubectl explain <resource>` (e.g., `kubectl explain pod`) to understand object structure (apiVersion, kind, metadata, spec).
- Viewing pod YAML definitions via `kubectl get pod <name> -o yaml` to learn declarative syntax.
10. Namespaces: Purpose and Management [01:11:34 - 01:18:10]
- Explanation of namespaces as logical partitions for isolating resources (e.g., team/project separation).
- Default namespaces: `default`, `kube-system`, `kube-public`.
- Creating namespaces imperatively (`kubectl create ns <name>`) or declaratively via YAML.
- YAML structure for a namespace: `apiVersion: v1`, `kind: Namespace`, `metadata.name: test`.
- Viewing resources per namespace: `kubectl get pods -n <namespace>`.
- Deleting all pods in a namespace: `kubectl delete pods --all -n <namespace>`.
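Laid out with proper indentation, the namespace manifest described above looks like this (the `ns.yaml` filename is a stand-in):

```yaml
# Declarative equivalent of `kubectl create ns test`
apiVersion: v1
kind: Namespace
metadata:
  name: test
```

Apply it with `kubectl apply -f ns.yaml` and confirm with `kubectl get ns`.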
11. Resource Quotas: Limiting Cluster Usage [01:18:15 - 01:26:45]
- Introduction to ResourceQuota to limit CPU, memory, and object counts per namespace.
- Example: Limiting a namespace to 400Mi memory and 2 pods.
- Creating a ResourceQuota YAML with `apiVersion: v1`, `kind: ResourceQuota`, `metadata.name: compute-quota`, `metadata.namespace: test`, and a `spec.hard` block setting `requests.memory: "400Mi"`, `limits.memory: "400Mi"`, and `pods: "2"`.
- Applying with `kubectl apply -f quota.yaml -n test`.
- Testing quota enforcement: attempting to create a third pod triggers an error.
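Reindented as a manifest, the quota definition from the session reads (`quota.yaml` being the filename passed to `kubectl apply`):

```yaml
# Cap the test namespace at 400Mi of requested/limited memory
# and at most 2 pods
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: test
spec:
  hard:
    requests.memory: "400Mi"
    limits.memory: "400Mi"
    pods: "2"
```

Note that once a memory quota is active, every new pod in the namespace must declare memory requests and limits, or the API server rejects it outright; the third pod attempt fails with an "exceeded quota" error.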
12. Pod Architecture and Multi-Container Design [01:40:58 - 01:44:42]
- Explanation of pods as the smallest deployable unit, containing one or more containers.
- Use cases for multi-container pods:
- Sidecar: log collector, monitoring agent, or proxy (e.g., nginx as reverse proxy).
- Adapter: transforming data between containers.
- Ambassador: external service proxy.
- Emphasis: Do not co-locate database and web app in the same pod — violates separation of concerns and access control.
- Containers in a pod share network namespace (same IP) and storage volumes.
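As an illustration of the sidecar pattern, here is a hypothetical two-container pod; the names, images, and mount paths are illustrative rather than taken from the session. Both containers share the pod's IP and the `emptyDir` volume.

```yaml
# Hypothetical sidecar pod: nginx writes logs to a shared volume,
# a busybox sidecar tails them
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-tailer          # sidecar: reads what the web container writes
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
```

A database would never belong in this pod: it has a different lifecycle, scaling profile, and access-control boundary, which is exactly the separation-of-concerns point made above.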
13. Pod Lab: Creation, Inspection, and ID Extraction [01:44:42 - 01:48:15]
- Learners create pods using different images (nginx, httpd).
- Extract pod details using `kubectl get pods -o yaml` and `kubectl get pod <name> -o jsonpath='{.metadata.uid}'` to retrieve the unique pod ID.
- Practice modifying pod names and inspecting YAML output to understand structure.
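The inspection commands can be kept as a tiny helper script; a sketch only — the `pod-uid.sh` name and its positional argument are assumptions, not part of the lab.

```shell
cat > pod-uid.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Usage: ./pod-uid.sh <pod-name>
# Prints the pod's immutable UID, then its full YAML definition.
kubectl get pod "$1" -o jsonpath='{.metadata.uid}{"\n"}'
kubectl get pod "$1" -o yaml
EOF
chmod 700 pod-uid.sh
```

The UID stays constant for the pod's lifetime, so it is a reliable handle even when pods are renamed or recreated with the same name.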
14. Cleanup and Final Validation [01:48:15 - 01:55:37]
- Deletion of test pods and namespaces to free resources and avoid conflicts.
- Verification that all pods in `kube-system` are running (e.g., CoreDNS, kube-proxy).
- Final check: `kubectl get nodes -o wide` and `kubectl get pods -A` to confirm cluster stability.
- Break announced at 01:48:15, with return expected at 13:30.
Appendix
Key Principles
- Single Responsibility: Each pod should ideally host one primary application; auxiliary functions (logging, proxying) use sidecar containers.
- Namespace Isolation: Use namespaces to separate teams, environments, or projects to prevent resource conflicts.
- Resource Quotas: Enforce limits on CPU, memory, and object counts in production to prevent resource exhaustion.
- Immutable Infrastructure: Avoid manual changes on nodes; use declarative YAML and automation scripts.
Tools Used
- `kubeadm`: cluster bootstrapping tool.
- `kubectl`: command-line interface for Kubernetes.
- `containerd`: container runtime.
- Weave Net: CNI plugin for pod networking.
- `hostnamectl`: system tool to set hostnames.
- `vi`/`nano`: text editors for creating YAML manifests and scripts.
- `alias`: shell shortcut mapping `kubectl` to `k`.
Common Pitfalls
- Running `kubeadm join` on the master node.
- Not disabling swap before installation (`swapoff -a`, and comment out the swap entry in `/etc/fstab`).
- Incorrect hostname format (e.g., using underscores or special characters).
- Forgetting to copy `admin.conf` to the user's `.kube/config` after `kubeadm init`.
- Confusing YAML syntax with flag syntax: a manifest uses `image: nginx`, while `kubectl run` uses `--image=nginx`.
- Not cleaning up old pods/namespaces before re-running labs, leading to quota or IP conflicts.
Practice Suggestions
- Rebuild the cluster from scratch 3 times to internalize the flow.
- Create a custom YAML for a pod with 2 containers (e.g., nginx + logspout).
- Write a script to automate node setup (hostname, install, join).
- Experiment with ResourceQuota limits and observe rejection messages.
- Use `kubectl explain` extensively to explore any resource type (e.g., `Service`, `Deployment`, `ConfigMap`).