Summary

Overview

This course session provides a comprehensive, hands-on deep dive into Kubernetes networking, storage, and configuration management. It begins with foundational commands for managing pods, jobs, and cron jobs, then progresses through service types (ClusterIP, NodePort, LoadBalancer), deployment scaling, and the role of labels in dynamic service mapping. The session transitions into persistent storage concepts, covering ephemeral emptyDir, hostPath, and persistent volumes (PVs) with persistent volume claims (PVCs), including static and dynamic provisioning. It concludes with configuration management using ConfigMaps and Secrets, emphasizing the 12-Factor App methodology for clean, scalable, and environment-agnostic deployments.

Topic (Timeline)

1. Pod and Job Management with kubectl [00:00:00 - 00:04:45]

  • Demonstrated deletion of pods, jobs, and cron jobs using kubectl delete with -f flag and resource names (e.g., kubectl delete cronjob hello).
  • Clarified correct syntax: no stray space in the command and a valid resource kind (kubectl accepts singular, plural, and short forms, e.g., cronjob, cronjobs, or cj), with the object name matching exactly.
  • Emphasized use of kubectl get pods -A to verify deletion and resource state.
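The deletion patterns covered above can be sketched as follows; the manifest filename and object names are placeholders, assuming the objects exist in the cluster:

```shell
# Delete whatever objects a manifest created
kubectl delete -f job.yaml

# Delete a named cron job (resource kind may be singular, plural, or short form)
kubectl delete cronjob hello

# Verify the resources are gone across all namespaces
kubectl get pods -A
```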

2. Cluster Access and SSH Configuration [00:04:49 - 00:08:44]

  • Guided participants through SSH login to Kubernetes master node using correct IP (***.***.***.233).
  • Instructed use of sudo su - to elevate privileges on the master node.
  • Clarified that kubectl commands must be executed on the master node, not worker nodes.
  • Introduced aliasing k for kubectl to streamline command entry (alias k='kubectl').
  • Emphasized re-aliasing after SSH reconnection and verifying node access via k get nodes.
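The login flow above amounts to the following sketch; the user and IP are placeholders for the session's actual master node:

```shell
ssh user@<master-ip>   # connect to the Kubernetes master node
sudo su -              # elevate to root on the master
alias k='kubectl'      # shell alias; lost on each new SSH session, so re-run it
k get nodes            # confirm kubectl access and node readiness
```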

3. Kubernetes Services: ClusterIP, NodePort, and Load Balancing [00:08:47 - 00:33:02]

  • Introduced ClusterIP as a static internal service endpoint, decoupling client access from dynamic pod IPs.
  • Demonstrated mapping service to pod via label selector (role=backend).
  • Showed curl access to ClusterIP from any node, proving internal service reachability.
  • Explained NodePort: exposes service on a static port (e.g., 31009) on all cluster nodes, enabling external access via <public-IP>:<node-port>.
  • Clarified that NodePort traffic flows through the cluster’s internal service layer, not directly to a single node.
  • Discussed LoadBalancer as the next-tier option (cloud provider-dependent, e.g., AWS ELB), noting it requires a cloud-integrated cluster such as EKS or one provisioned with kops.
  • Emphasized that services are cluster-wide, not node-specific, enabling seamless traffic routing across nodes.
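A minimal sketch of the two service types discussed, assuming a Deployment whose pods carry the label role=backend; the service names and port numbers other than 31009 are illustrative assumptions:

```yaml
# Hypothetical ClusterIP service: stable internal endpoint for pods labeled role=backend
apiVersion: v1
kind: Service
metadata:
  name: backend-svc          # name is an assumption, not from the session
spec:
  type: ClusterIP            # the default; may be omitted
  selector:
    role: backend            # label selector from the session
  ports:
    - port: 80               # service port (assumed)
      targetPort: 8080       # container port (assumed)
---
# Hypothetical NodePort variant exposing the same pods on every node
apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport
spec:
  type: NodePort
  selector:
    role: backend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 31009        # static port from the session (valid range 30000-32767)
```

With the NodePort in place, curl <public-IP>:31009 against any node's address reaches the backend pods, regardless of which node actually runs them.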

4. Storage: Ephemeral (emptyDir) and HostPath [00:33:03 - 00:58:45]

  • Demonstrated data loss with emptyDir: created a file in a pod’s ephemeral volume, deleted the pod, and confirmed data vanished on the worker node.
  • Showed hostPath persistence: mounted a host directory (e.g., /var/local/data) into a pod; file created in container persisted after pod deletion.
  • Guided participants to identify the worker node hosting a pod using k get pods -o wide, then navigate to the corresponding host path to verify file persistence.
  • Explained hostPath types: DirectoryOrCreate (creates if missing) and File (mounts existing file).
  • Highlighted that hostPath is node-specific and not suitable for multi-node sharing.
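The ephemeral-vs-persistent contrast above can be captured in one pod spec; the pod name and image are assumptions, while /var/local/data and DirectoryOrCreate come from the session:

```yaml
# Hypothetical pod contrasting an ephemeral emptyDir with a hostPath mount
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo             # name is an assumption
spec:
  containers:
    - name: app
      image: busybox             # image is an assumption
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch    # contents vanish when the pod is deleted
        - name: host-data
          mountPath: /data       # contents survive on the hosting node
  volumes:
    - name: scratch
      emptyDir: {}
    - name: host-data
      hostPath:
        path: /var/local/data    # host directory from the session
        type: DirectoryOrCreate  # create the directory on the node if missing
```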

5. Storage: Persistent Volumes (PV), Persistent Volume Claims (PVC), and Storage Classes [00:58:48 - 01:27:41]

  • Introduced storage classes as abstractions for storage types (SSD, HDD, archive) and provisioning methods (static/dynamic).
  • Explained static provisioning: admin creates PVs manually; users claim via PVCs.
  • Demonstrated dynamic provisioning: a PVC references a StorageClass whose provisioner creates the backing PV automatically; noted that provisioner: kubernetes.io/no-provisioner disables dynamic provisioning (used for pre-created local volumes) and that volumeBindingMode: WaitForFirstConsumer defers binding until a pod using the PVC is scheduled.
  • Showed creation of a 10Gi PV using a hostPath backend, then a PVC requesting 2Gi; the claim bound to the entire 10Gi PV, demonstrating that a PV binds to a single PVC as a whole and unused capacity is not subdivided.
  • Mapped PVC to a pod via volumeMounts, confirmed binding with k describe pod, and verified data persistence on the host after pod deletion.
  • Clarified reclaim policies: Retain (PV persists after PVC deletion) vs. Delete (PV and underlying storage are purged).
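The static-provisioning flow above can be sketched as three manifests; object names are assumptions, while the 10Gi/2Gi sizes, hostPath backend, and Retain policy come from the session:

```yaml
# Hypothetical 10Gi PV backed by a host directory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain   # PV survives PVC deletion
  hostPath:
    path: /var/local/data
---
# PVC requesting 2Gi; it binds to the whole 10Gi PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
---
# Pod consuming the claim via a volumeMount
apiVersion: v1
kind: Pod
metadata:
  name: pv-demo
spec:
  containers:
    - name: app
      image: busybox             # image is an assumption
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```

After applying these, k describe pod pv-demo shows the bound claim, and files written under /data survive pod deletion on the hosting node.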

6. 12-Factor App Principles and Configuration Management [01:27:42 - 01:34:44]

  • Introduced the 12-Factor App methodology: declarative configs, clean contracts, environment parity, and separation of config from code.
  • Defined ConfigMaps: non-sensitive key-value pairs (e.g., database URLs, feature flags) stored as Kubernetes objects.
  • Demonstrated two injection methods:
    • As environment variables: envFrom or env from ConfigMap (static on pod creation).
    • As mounted volumes: ConfigMap contents appear as files in a specified path (e.g., /etc/config/), allowing runtime updates.
  • Introduced Secrets: same structure as ConfigMaps but for sensitive data (passwords, keys), stored as base64-encoded values (encoded, not encrypted).
  • Emphasized: never embed credentials in pod YAML; always use ConfigMaps or Secrets for configuration.
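A minimal sketch of both injection methods; all names and values below are illustrative assumptions, with /etc/config taken from the session:

```yaml
# Hypothetical ConfigMap with non-sensitive key-value pairs
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "postgres://db:5432/app"   # example value (assumed)
  FEATURE_FLAG: "true"
---
# Hypothetical Secret; stringData accepts plain text and is stored base64-encoded
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "changeme"   # placeholder only; never commit real credentials
---
# Pod consuming the config both ways
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox               # image is an assumption
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: app-config       # env vars: fixed at pod creation
        - secretRef:
            name: app-secret
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config   # keys appear as files; edits propagate
  volumes:
    - name: config-vol
      configMap:
        name: app-config
```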

Appendix

Key Principles

  • Service-Driven Access: Always use Services (ClusterIP, NodePort) to access pods, never direct pod IPs.
  • Label-Based Mapping: Services and Deployments use label selectors to dynamically bind to pods; scaling auto-includes new pods.
  • Storage Persistence: Use hostPath for single-node persistence; use PV/PVC for cluster-wide, managed storage.
  • Dynamic Provisioning: Prefer StorageClasses with an appropriate provisioner and volumeBindingMode: WaitForFirstConsumer to automate the PV/PVC lifecycle.
  • Configuration Separation: Use ConfigMaps for non-sensitive configs and Secrets for sensitive data; mount as volumes for runtime update support.
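A StorageClass for dynamic provisioning might look like the following sketch; the class name, provisioner, and parameters are environment-specific assumptions (this one assumes an AWS EBS CSI driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                           # name is an assumption
provisioner: ebs.csi.aws.com               # environment-specific (assumed)
parameters:
  type: gp3                                # EBS volume type (assumed)
volumeBindingMode: WaitForFirstConsumer    # defer PV creation until a pod schedules
reclaimPolicy: Delete                      # purge the volume when the PVC is deleted
```

A PVC that sets storageClassName: fast-ssd then gets its PV created automatically when a consuming pod is scheduled.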

Tools Used

  • kubectl get pods -A, get svc, get deploy, get pv, get pvc
  • kubectl describe pod <name>, describe svc
  • kubectl delete <resource> <name>, delete -f <file>
  • kubectl apply -f <file>
  • kubectl exec -it <pod> -- /bin/sh
  • curl <service-IP>:<port>, curl <public-IP>:<node-port>
  • alias k='kubectl'
  • vi/vim for YAML editing

Common Pitfalls

  • Using kubectl on worker nodes instead of master.
  • Mistyping resource kinds or object names in delete commands (kubectl accepts cronjob, cronjobs, or cj, but the object name must match exactly).
  • Forgetting to re-alias k after SSH reconnection.
  • Assuming emptyDir persists after pod deletion.
  • Using hostPath in production (not portable or multi-node).
  • Embedding secrets in pod YAML instead of using Secrets.
  • Expecting environment variables from ConfigMaps to auto-update on ConfigMap change (they don’t; use volume mounts).

Practice Suggestions

  • Rebuild the entire service-deployment-pod-pvc flow from scratch without guidance.
  • Create a multi-pod deployment with 3 replicas and verify load balancing via repeated curl to the ClusterIP.
  • Create a ConfigMap with 5 key-value pairs, mount as volume, and edit the ConfigMap to observe live updates in the pod.
  • Delete a PVC and observe the PV’s reclaim policy behavior (Retain vs. Delete).
  • Attempt to create a NodePort service and access it from a browser on a local machine using the worker node’s public IP.