Summary

Overview

This course module provides a hands-on lab for setting up a Kubernetes cluster from scratch, covering node configuration, installation of core components (containerd, kubeadm, kubelet), cluster initialization, worker node joining, namespace management, and resource quotas. The session includes live troubleshooting of common setup errors, use of aliases for command efficiency, and exploration of pod structure and YAML-based resource definitions. The goal is to equip learners with practical skills to deploy, manage, and secure Kubernetes environments using imperative and declarative methods.

Topic (Timeline)

1. Cluster Setup and Node Configuration [00:00:15 - 00:08:18]

  • Instructor guides learners through initializing a 3-node Kubernetes cluster (1 master, 2 workers).
  • Steps include setting hostnames (hostnamectl set-hostname master/worker1/worker2), disabling swap, and installing required utilities.
  • A shell script is created to automate installation of containerd, kubeadm, and kubelet.
  • Script permissions are set using chmod 700, then executed to install and configure components.
  • Version verification is performed for containerd, kubeadm, and kubelet using --version flags.
  • Learners are instructed to confirm successful installation before proceeding.
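  • A minimal sketch of the node preparation and of what such an installation script might contain, assuming an Ubuntu node and the pkgs.k8s.io package repository; the repository URL, the v1.29 version pin, and the script name install.sh are placeholders, and the lab's own script may differ:

    # On every node: unique hostname, swap off (kubeadm refuses to run with swap enabled)
    sudo hostnamectl set-hostname master        # worker1 / worker2 on the other nodes
    sudo swapoff -a

    # install.sh -- containerd, kubeadm, kubelet (run with: chmod 700 install.sh && ./install.sh)
    sudo apt-get update
    sudo apt-get install -y containerd apt-transport-https ca-certificates curl gpg
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | \
      sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" | \
      sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubeadm kubelet kubectl

    # Verify the components before proceeding
    containerd --version
    kubelet --version
    kubeadm version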

2. Master Node Initialization and Token Generation [00:08:21 - 00:20:27]

  • kubeadm init is executed on the master node to initialize the control plane.
  • Output includes a kubeadm join token and certificate key, which must be copied to a notepad for later use on worker nodes.
  • Post-init commands are run to configure kubectl: mkdir -p $HOME/.kube, sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config, and sudo chown $(id -u):$(id -g) $HOME/.kube/config.
  • Learners verify master readiness using kubectl get nodes (expected status: NotReady until CNI is applied).
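  • Collected in order, the initialization and kubectl-configuration steps above look like this on the master:

    # Master node only
    sudo kubeadm init

    # Configure kubectl for the current user (kubeadm init prints these lines)
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Master shows NotReady until a CNI plugin is applied
    kubectl get nodes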

3. Worker Node Joining and Troubleshooting [00:20:34 - 00:32:29]

  • Learners execute the kubeadm join command on worker nodes using the token from the master.
  • Common errors are addressed: incorrect hostname configuration, residual cluster state, and failed script execution.
  • Troubleshooting steps include:
    • Running kubeadm reset --force on both master and workers to clear corrupted state.
    • Re-running the installation script and re-joining with a fresh token.
    • Verifying containerd and kubelet versions after reset.
  • Instructor emphasizes correct syntax: hostnamectl set-hostname worker1 (set-hostname as one word, not set -hostname), and warns against running kubeadm join on the master.
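  • Put together, the join and recovery commands look roughly like the sketch below; the address, token, and hash are placeholders for the values copied from the master's kubeadm init output, and install.sh refers to the installation script sketched in section 1:

    # On each worker node (never on the master)
    sudo kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # Recovering from a corrupted state: wipe, reinstall, rejoin
    sudo kubeadm reset --force
    ./install.sh
    containerd --version && kubelet --version

    # If the original token has expired, generate a fresh join command on the master
    kubeadm token create --print-join-command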

4. CNI Network Plugin Application and Pod Creation [00:32:31 - 00:42:49]

  • The Weave Net CNI plugin is applied using kubectl apply -f https://.../weave-net.yaml.
  • Master node status changes to Ready after CNI is applied.
  • Learners create a test pod using kubectl run example --image=nginx and verify with kubectl get pods -o wide.
  • Pod creation fails for some learners due to network misconfigurations; instructor re-applies CNI and resets nodes.
  • Successful pod creation confirms cluster functionality.
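  • The sequence for this step, with the Weave Net manifest URL left as a placeholder because it is abbreviated in the notes above (take it from the Weave Net documentation or the lab handout):

    # Apply the CNI plugin from the master
    kubectl apply -f <weave-net-manifest-url>

    # Nodes move from NotReady to Ready once the Weave pods are running
    kubectl get nodes

    # Smoke test: one nginx pod, then check which node and IP it was scheduled to
    kubectl run example --image=nginx
    kubectl get pods -o wide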

5. Aliasing and Command Efficiency [00:40:06 - 00:41:02]

  • Learners are taught to create a shell alias: alias k='kubectl' to reduce typing and avoid errors.
  • Example: k get pods instead of kubectl get pods.
  • Alias is demonstrated as a best practice for daily Kubernetes operations.
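  • To make the alias persistent across sessions (assuming a bash shell), it can be appended to the shell startup file:

    echo "alias k='kubectl'" >> ~/.bashrc
    source ~/.bashrc
    k get pods        # same as: kubectl get pods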

6. Namespace Fundamentals and Management [00:45:11 - 00:53:28]

  • Kubernetes default namespaces (default, kube-system, kube-public) are introduced.
  • Learners create custom namespaces using:
    • Imperative: kubectl create namespace test
    • Declarative: YAML file with apiVersion: v1, kind: Namespace, metadata.name: test, applied via kubectl apply -f
  • Namespace isolation is explained: resources in one namespace are invisible to others.
  • Commands for viewing namespaces: kubectl get namespaces, kubectl describe namespace <name>.
  • Pod creation defaults to default namespace unless explicitly specified with --namespace.
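  • Both styles side by side, using the test namespace from the session; the heredoc is just one convenient way to write the manifest, and the pod name web is illustrative:

    # Imperative
    kubectl create namespace test

    # Declarative: the same object expressed as a manifest (either style alone is enough)
    cat <<'EOF' > namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: test
    EOF
    kubectl apply -f namespace.yaml

    # Inspect namespaces
    kubectl get namespaces
    kubectl describe namespace test

    # Pods land in "default" unless a namespace is given explicitly
    kubectl run web --image=nginx --namespace test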

7. Resource Quotas and Limit Enforcement [00:53:30 - 01:23:10]

  • Resource quotas are introduced to limit resource consumption per namespace (CPU, memory, pod count).
  • A ResourceQuota object is created via YAML or CLI to enforce limits (e.g., 2 pods, 400Mi memory).
  • Example: Attempting to create a third pod exceeds quota → error message returned.
  • Quotas are applied to specific namespaces using --namespace flag.
  • Use case: Preventing one team from consuming all cluster resources in multi-tenant environments.
  • Learners test quota enforcement by creating pods within and beyond limits.
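  • A sketch of the quota from the example (2 pods, 400Mi of memory), applied to the test namespace; the object name test-quota is illustrative:

    cat <<'EOF' > quota.yaml
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: test-quota
    spec:
      hard:
        pods: "2"
        memory: 400Mi
    EOF
    kubectl apply -f quota.yaml --namespace test

    # Show current usage against the limits
    kubectl describe resourcequota test-quota --namespace test

  • Once memory is part of a quota, new pods in that namespace must declare memory requests or the API server rejects them; a pod that would push the namespace past the limit fails with an "exceeded quota" error.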

8. Pod Structure, Multi-Container Design, and YAML Inspection [01:23:15 - 01:44:30]

  • Pods are defined as the smallest deployable unit, containing one or more containers sharing network and storage.
  • Multi-container pod use cases:
    • Sidecar: Log collector or proxy (e.g., nginx + logspout).
    • Adapter: Data transformation (e.g., image downloader + web server).
  • Instructor warns against mixing application and database in a single pod (security and separation of concerns).
  • Learners inspect pod YAML using kubectl get pod <name> -o yaml to understand structure: apiVersion, kind, metadata, spec.
  • Pod ID extraction using JSONPath: kubectl get pod <name> -o jsonpath='{.metadata.uid}'.
  • Lab: Create pods with custom names and images (e.g., nginx, httpd), then retrieve and modify YAML output.
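  • A small, illustrative two-container pod of the sidecar flavour described above (the names, images, and stand-in command are assumptions, not the instructor's exact example), followed by the inspection commands from the lab:

    cat <<'EOF' > sidecar-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      containers:
        - name: web
          image: nginx
        - name: log-sidecar
          image: busybox
          command: ["sh", "-c", "sleep 3600"]   # stand-in for a real log collector
    EOF
    kubectl apply -f sidecar-pod.yaml

    # Full structure: apiVersion, kind, metadata, spec (plus status filled in by the cluster)
    kubectl get pod web-with-sidecar -o yaml

    # Extract a single field with JSONPath
    kubectl get pod web-with-sidecar -o jsonpath='{.metadata.uid}'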

9. Cleanup, Break, and Next Steps [01:44:30 - 01:52:20]

  • Learners delete test pods and namespaces: kubectl delete pod --all --namespace <ns>, kubectl delete namespace <name>.
  • Instructor confirms cluster health: all master components running, worker nodes joined.
  • A break is announced (1:30 PM) for lunch and hands-on practice.
  • Final reminder: Use aliases, inspect YAML, avoid mixing services in pods, and always clean up test resources.
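  • Grouped for reference, the cleanup and health-check commands (the test namespace stands in for whichever lab namespaces were created):

    # Delete the lab pods, then the namespace itself
    kubectl delete pod --all --namespace test
    kubectl delete namespace test

    # Health check: nodes joined and control-plane pods running
    kubectl get nodes
    kubectl get pods --namespace kube-system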

Appendix

Key Principles

  • Isolation: Use namespaces to logically separate teams, projects, or environments.
  • Least Privilege: Never run database and application in the same pod; separate concerns.
  • Idempotency: Use kubeadm reset --force to recover from misconfigurations.
  • Automation: Scripts reduce human error in node setup; use chmod 700 and ./script.sh for execution.
  • Efficiency: Create alias k='kubectl' to reduce typing and syntax errors.

Tools Used

  • hostnamectl – Set system hostname
  • kubeadm – Cluster bootstrapping tool
  • kubelet – Node agent
  • containerd – Container runtime
  • kubectl – CLI for cluster management
  • vi / nano – Edit scripts and YAML files
  • chmod, ls -l – File permission management
  • kubectl get, describe, apply, delete – Core resource operations

Common Pitfalls

  • Incorrect hostname syntax: hostnamectl set-hostname worker1 (not set -hostname or sethostname).
  • Running kubeadm join on master: Causes cluster corruption.
  • Not disabling swap: kubeadm init fails if swap is enabled.
  • Missing CNI plugin: Nodes remain NotReady even after kubeadm init.
  • Quota misconfiguration: Forgetting --namespace when applying ResourceQuota.
  • YAML syntax errors: Fields use a colon, as in image: nginx; writing image=nginx is invalid YAML.

Practice Suggestions

  • Rebuild the cluster from scratch using a VM (Ubuntu 20.04+).
  • Create 3 custom namespaces: dev, staging, prod, each with a ResourceQuota.
  • Write a YAML file to create a multi-container pod: one nginx, one busybox for logging.
  • Use kubectl explain pod to explore field definitions.
  • Practice aliasing and test with k get nodes, k get pods -A.
  • Simulate a quota violation and interpret the error message.