Course recordings on DaDesktop for Training platform
Visit NobleProg websites for related courses
Visit outline: Docker and Kubernetes: Building and Scaling a Containerized Application (Course code: dockerkubernetesscale)
Categories: Docker · Kubernetes
Summary
Overview
This course session provides a comprehensive, hands-on deep dive into Kubernetes networking, storage, and configuration management. It begins with foundational commands for managing pods, jobs, and cron jobs, then progresses through service types (ClusterIP, NodePort, LoadBalancer), deployment scaling, and the role of labels in dynamic service mapping. The session transitions into persistent storage concepts, covering ephemeral emptyDir, hostPath, and persistent volumes (PVs) with persistent volume claims (PVCs), including static and dynamic provisioning. It concludes with configuration management using ConfigMaps and Secrets, emphasizing the 12-Factor App methodology for clean, scalable, and environment-agnostic deployments.
Topic (Timeline)
1. Pod and Job Management with kubectl [00:00:00 - 00:04:45]
- Demonstrated deletion of pods, jobs, and cron jobs using `kubectl delete` with the `-f` flag or resource names (e.g., `kubectl delete cronjob hello`).
- Clarified correct syntax: no stray space after `delete`, and correct resource naming (e.g., `cronjob`, not `cronjobs`).
- Emphasized use of `kubectl get pods -A` to verify deletion and resource state.
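The `hello` cron job deleted above can be sketched as a minimal manifest; the schedule, image, and command are assumptions (based on the common Kubernetes tutorial example), while the `hello` name comes from the session:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"          # assumed: run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.36   # assumed image
              command: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure
```

Applied with `kubectl apply -f hello-cronjob.yaml`, it can then be removed either by name (`kubectl delete cronjob hello`) or by file (`kubectl delete -f hello-cronjob.yaml`).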
2. Cluster Access and SSH Configuration [00:04:49 - 00:08:44]
- Guided participants through SSH login to the Kubernetes master node using the correct IP (`***.***.***.233`).
- Instructed use of `sudo su -` to elevate privileges on the master node.
- Clarified that `kubectl` commands must be executed on the master node, not worker nodes.
- Introduced aliasing `k` for `kubectl` to streamline command entry (`alias k='kubectl'`).
- Emphasized re-aliasing after SSH reconnection and verifying node access via `k get nodes`.
3. Kubernetes Services: ClusterIP, NodePort, and Load Balancing [00:08:47 - 00:33:02]
- Introduced ClusterIP as a static internal service endpoint, decoupling client access from dynamic pod IPs.
- Demonstrated mapping a service to pods via label selector (`role=backend`).
- Showed `curl` access to the ClusterIP from any node, proving internal service reachability.
- Explained NodePort: exposes the service on a static port (e.g., `31009`) on all cluster nodes, enabling external access via `<public-IP>:<node-port>`.
- Clarified that NodePort traffic flows through the cluster's internal service layer, not directly to a single node.
- Discussed LoadBalancer as the next-tier option (cloud provider-dependent, e.g., AWS ELB), noting it requires EKS or kops for full implementation.
- Emphasized that services are cluster-wide, not node-specific, enabling seamless traffic routing across nodes.
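The service behaviour described above can be sketched as a single manifest; the service name and the `port`/`targetPort` values are assumptions, while the `role=backend` selector and node port `31009` come from the session:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc          # hypothetical name
spec:
  type: NodePort             # omit (defaults to ClusterIP) for internal-only access
  selector:
    role: backend            # binds the service to every pod carrying this label
  ports:
    - port: 80               # assumed service (ClusterIP) port
      targetPort: 8080       # assumed container port on the selected pods
      nodePort: 31009        # static port opened on every cluster node
```

Because the selector is label-based, scaling the backing Deployment automatically adds the new pods to the service's endpoints.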
4. Storage: Ephemeral (emptyDir) and HostPath [00:33:03 - 00:58:45]
- Demonstrated data loss with `emptyDir`: created a file in a pod's ephemeral volume, deleted the pod, and confirmed the data vanished on the worker node.
- Showed `hostPath` persistence: mounted a host directory (e.g., `/var/local/data`) into a pod; a file created in the container persisted after pod deletion.
- Guided participants to identify the worker node hosting a pod using `k get pods -o wide`, then navigate to the corresponding host path to verify file persistence.
- Explained `hostPath` types: `DirectoryOrCreate` (creates the directory if missing) and `File` (mounts an existing file).
- Highlighted that `hostPath` is node-specific and not suitable for multi-node sharing.
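The `hostPath` demo above might look like the following sketch; the pod/volume names, image, and container mount path are assumptions, while the `/var/local/data` host path and `DirectoryOrCreate` type come from the session:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.36       # assumed image
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data      # assumed path inside the container
  volumes:
    - name: data
      hostPath:
        path: /var/local/data       # directory on the worker node
        type: DirectoryOrCreate     # create the directory if it is missing
```

Swapping the `hostPath` volume for `emptyDir: {}` reproduces the data-loss demonstration: the volume lives only as long as the pod does.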
5. Storage: Persistent Volumes (PV), Persistent Volume Claims (PVC), and Storage Classes [00:58:48 - 01:27:41]
- Introduced storage classes as abstractions for storage types (SSD, HDD, archive) and provisioning methods (static/dynamic).
- Explained static provisioning: admin creates PVs manually; users claim via PVCs.
- Demonstrated dynamic provisioning: the PVC references a StorageClass; with a dynamic `provisioner`, a matching PV is created automatically. A `kubernetes.io/no-provisioner` class with `volumeBindingMode: WaitForFirstConsumer` instead delays binding until a pod uses the claim and requires pre-created PVs.
- Showed creation of a 10Gi PV using a `hostPath` backend, then a PVC claiming 2Gi (the claim binds to the whole 10Gi PV, demonstrating the over-provisioning limitation).
- Mapped the PVC to a pod via `volumeMounts`, confirmed binding with `k describe pod`, and verified data persistence on the host after pod deletion.
- Clarified reclaim policies: `Retain` (PV persists after PVC deletion) vs. `Delete` (PV and underlying storage are purged).
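The static-provisioning demo above can be sketched as three objects; the names and `storageClassName` are assumptions, while the 10Gi/2Gi sizes, `hostPath` backend, `Retain` policy, and `WaitForFirstConsumer` mode come from the session:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                        # hypothetical class name
provisioner: kubernetes.io/no-provisioner    # static: PVs must be created by the admin
volumeBindingMode: WaitForFirstConsumer      # delay binding until a pod uses the claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv                             # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain      # PV survives PVC deletion
  storageClassName: local-storage
  hostPath:
    path: /var/local/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc                            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi       # binds to the whole 10Gi PV; the surplus is unusable
```

A pod then consumes the claim through a `persistentVolumeClaim` volume (`claimName: local-pvc`) plus a matching entry under `volumeMounts`.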
6. 12-Factor App Principles and Configuration Management [01:27:42 - 01:34:44]
- Introduced the 12-Factor App methodology: declarative configs, clean contracts, environment parity, and separation of config from code.
- Defined ConfigMaps: non-sensitive key-value pairs (e.g., database URLs, feature flags) stored as Kubernetes objects.
- Demonstrated two injection methods:
  - As environment variables: `envFrom` or `env` referencing the ConfigMap (values fixed at pod creation).
  - As mounted volumes: ConfigMap contents appear as files in a specified path (e.g., `/etc/config/`), allowing runtime updates.
- Introduced Secrets: same structure as ConfigMaps but for sensitive data (passwords, keys), stored as base64-encoded values.
- Emphasized: never embed credentials in pod YAML; always use ConfigMaps or Secrets for configuration.
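Both injection methods can be sketched in one manifest; all names, keys, and values here are hypothetical, and only the `/etc/config` mount path comes from the session:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
data:
  DATABASE_URL: "postgres://db:5432/app"   # hypothetical values
  FEATURE_FLAG: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox:1.36          # assumed image
      command: ["/bin/sh", "-c", "sleep 3600"]
      envFrom:                     # method 1: every key becomes an env var (fixed at pod creation)
        - configMapRef:
            name: app-config
      volumeMounts:                # method 2: keys appear as files, refreshed at runtime
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: app-config
```

A `Secret` is consumed the same way, via `secretRef` under `envFrom` or a `secret` volume, with values stored base64-encoded in its `data` field.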
Appendix
Key Principles
- Service-Driven Access: Always use Services (ClusterIP, NodePort) to access pods, never direct pod IPs.
- Label-Based Mapping: Services and Deployments use label selectors to dynamically bind to pods; scaling auto-includes new pods.
- Storage Persistence: Use `hostPath` for single-node persistence; use PV/PVC for cluster-wide, managed storage.
- Dynamic Provisioning: Prefer StorageClasses with a `provisioner` and `volumeBindingMode: WaitForFirstConsumer` to automate the PV/PVC lifecycle.
- Configuration Separation: Use ConfigMaps for non-sensitive configs and Secrets for sensitive data; mount as volumes for runtime update support.
Tools Used
- `kubectl get pods -A`, `get svc`, `get deploy`, `get pv`, `get pvc`
- `kubectl describe pod <name>`, `describe svc`
- `kubectl delete <resource> <name>`, `delete -f <file>`
- `kubectl apply -f <file>`
- `kubectl exec -it <pod> -- /bin/sh`
- `curl <service-IP>:<port>`, `curl <public-IP>:<node-port>`
- `alias k='kubectl'`
- `vi`/`vim` for YAML editing
Common Pitfalls
- Using `kubectl` on worker nodes instead of the master.
- Incorrect resource names (e.g., `cronjobs` instead of `cronjob`).
- Forgetting to re-alias `k` after SSH reconnection.
- Assuming `emptyDir` persists after pod deletion.
- Using `hostPath` in production (not portable across nodes).
- Embedding secrets in pod YAML instead of using Secrets.
- Expecting environment variables from ConfigMaps to auto-update on ConfigMap change (they don’t; use volume mounts).
Practice Suggestions
- Rebuild the entire service-deployment-pod-pvc flow from scratch without guidance.
- Create a multi-pod deployment with 3 replicas and verify load balancing via repeated `curl` requests to the ClusterIP.
- Create a ConfigMap with 5 key-value pairs, mount it as a volume, and edit the ConfigMap to observe live updates in the pod.
- Delete a PVC and observe the PV’s reclaim policy behavior (Retain vs. Delete).
- Attempt to create a NodePort service and access it from a browser on a local machine using the worker node’s public IP.