Course recording on the DaDesktop for Training platform
Related NobleProg course: Certified Kubernetes Administrator (CKA) - exam preparation (Course code: cka)
Categories: Docker · Kubernetes · Network Administration
Summary
Overview
This course module provides an in-depth, hands-on exploration of Kubernetes Pod configuration and management. It covers the transition from imperative (kubectl run) to declarative (YAML) Pod definitions, the implementation of health checks (liveness and readiness probes), resource management (requests and limits), volume mounting for data persistence, and multi-container patterns (init containers and sidecars). The session emphasizes production-grade best practices, including resource right-sizing, automated scaling (VPA), and secure configuration patterns, preparing learners for real-world Kubernetes deployment scenarios and certification exams (e.g., CKA).
Topic (Timeline)
1. Pod Creation: Imperative vs Declarative Approaches [00:00:00 - 00:08:15]
- Introduced the rationale for using a unified view of participant environments for troubleshooting.
- Demonstrated the use of `kubectl run` to create Pods imperatively, explaining that this internally generates a YAML specification.
- Explained that declarative YAML files offer full control over Pod configuration beyond CLI flags (e.g., multiple containers, advanced labels, annotations).
- Showed how to generate a base YAML template using `kubectl run --dry-run=client -o yaml` to avoid writing from scratch (see the sketch below).
- Detailed YAML structure: `apiVersion`, `kind: Pod`, `metadata` (name, labels), and `spec` (containers, image, ports).
- Emphasized that YAML files are the standard for production deployments due to reproducibility and version control.
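For reference, a trimmed sketch of the template such a dry-run produces; the Pod name, image, and port here are placeholders, and the real output also includes defaulted fields (e.g., `restartPolicy: Always`) and an empty `status`:

```yaml
# Generated with (name/image/port are placeholders):
#   kubectl run web --image=nginx --port=80 --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web          # kubectl run adds a run=<name> label
  name: web
spec:
  containers:
  - image: nginx
    name: web
    ports:
    - containerPort: 80
```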
2. Applying and Managing Pods via YAML [00:08:15 - 00:10:01]
- Demonstrated applying a Pod using `kubectl create -f <file.yaml>` for new resources and `kubectl apply -f <file.yaml>` for updates (see the sketch below).
- Clarified that `apply` can create the resource if it doesn't exist, but most Pod spec fields (e.g., `ports`) are immutable, so changing them requires deleting and recreating the Pod.
- Highlighted the importance of using `apply` for configuration management workflows.
- Noted that Pod names in YAML must be unique within a namespace.
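A minimal illustration of that workflow; the file name and resource names are placeholders:

```yaml
# pod.yaml -- submit with either:
#   kubectl create -f pod.yaml   # first creation; errors if the Pod exists
#   kubectl apply -f pod.yaml    # creates the Pod, or updates mutable fields
apiVersion: v1
kind: Pod
metadata:
  name: web           # must be unique within the namespace
spec:
  containers:
  - name: web
    image: nginx      # placeholder image
```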
3. Health Probes: Liveness and Readiness [00:12:06 - 00:31:07]
- Introduced the problem of container status being “Running” while the application is unresponsive (e.g., deadlock, slow response).
- Defined liveness probe: Detects application unresponsiveness and triggers Pod restart. Configured via HTTP, TCP, or exec probes.
- HTTP probe: Checks a health endpoint (e.g., `/health`) with configurable `periodSeconds`, `failureThreshold`, `timeoutSeconds`, and `initialDelaySeconds`.
- Exec probe: Runs a command (e.g., `cat /tmp/healthy`); exit code 0 means healthy.
- TCP probe: Checks whether a port is open.
- Defined readiness probe: Determines if a Pod is ready to receive traffic. Failure excludes Pod from Service load balancing; does not restart the Pod.
- Contrasted liveness (restart on failure) vs readiness (remove from service endpoints on failure).
- Demonstrated real-time probe behavior using `kubectl port-forward` and simulating 500 errors to trigger restarts.
- Showed probe status in `kubectl describe pod` output and events (a combined probe example is sketched below).
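A hypothetical Pod combining both probe types; the endpoint paths, port, and timing values are illustrative, not taken from the recording:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app            # hypothetical name
spec:
  containers:
  - name: app
    image: my-app:1.0         # placeholder image serving /health and /ready
    ports:
    - containerPort: 8080
    livenessProbe:            # failure -> kubelet restarts the container
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 10 # grace period before the first check
      periodSeconds: 5        # check every 5 seconds
      timeoutSeconds: 2       # each check must answer within 2 seconds
      failureThreshold: 3     # restart after 3 consecutive failures
    readinessProbe:           # failure -> removed from Service endpoints
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```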
4. Resource Management: Requests and Limits [00:32:08 - 00:39:12]
- Explained the need to prevent resource starvation: a misbehaving Pod consuming excessive CPU/memory.
- Defined resource requests: Minimum resources required for scheduling. Scheduler ensures the node has sufficient capacity.
- Defined resource limits: Maximum resources a container may consume. Exceeding the memory limit triggers termination (OOMKilled); exceeding the CPU limit causes throttling, not termination.
- Emphasized that developers must determine values via performance/load testing.
- Introduced Vertical Pod Autoscaler (VPA): Automatically recommends or applies optimal `requests` and `limits` based on historical usage, reducing manual sizing effort.
- Noted that clusters can enforce resource requirements via admission controllers to reject Pods without them (an example `resources` stanza is sketched below).
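An illustrative `resources` stanza; the numbers are placeholders that would normally come from load testing or VPA recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-app       # hypothetical name
spec:
  containers:
  - name: app
    image: my-app:1.0   # placeholder image
    resources:
      requests:         # scheduler guarantees this much on the chosen node
        cpu: 100m       # 0.1 CPU core
        memory: 128Mi
      limits:           # runtime ceiling
        cpu: 200m       # exceeding this -> throttling
        memory: 256Mi   # exceeding this -> OOMKilled
```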
5. Volumes: Data Persistence Beyond Container Lifecycle [00:45:18 - 00:54:04]
- Highlighted the ephemeral nature of container filesystems: deleting a Pod loses all data.
- Introduced volumes to persist data outside the container lifecycle (e.g., databases, logs).
- Explained the two-step process: (1) define a volume with a name and type (e.g., `nfs`, `hostPath`, `gcePersistentDisk`); (2) mount the volume to a container path via `volumeMounts`.
- Demonstrated mounting an NFS share to `/data` in a PostgreSQL container (sketched below).
- Showed `hostPath` as an alternative for local node storage.
- Noted that these are basic volume types; PersistentVolumes (PV) and PersistentVolumeClaims (PVC) will be covered later for dynamic provisioning.
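A sketch of the two-step pattern from the NFS demonstration; the server address, export path, and image tag are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:16          # placeholder tag
    volumeMounts:               # step 2: mount into the container
    - name: pgdata
      mountPath: /data
  volumes:                      # step 1: define the volume (name + type)
  - name: pgdata
    nfs:
      server: nfs.example.com   # placeholder NFS server
      path: /exports/pgdata     # placeholder export path
```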
6. Multi-Container Pod Patterns: Init Containers and Sidecars [00:55:10 - 01:04:28]
- Introduced init containers: Short-lived containers that run sequentially before the main container. Must complete successfully (exit 0) for the main container to start.
- Use case: Wait for dependencies (e.g., database, service) to be available before launching the app.
- Do not support liveness/readiness probes; lifecycle is one-time per Pod restart.
- Introduced sidecar containers: Long-running companion containers running alongside the main container in the same Pod.
- Adapter sidecar: Transforms data format (e.g., converts logs from ABC to XYZ format).
- Ambassador sidecar: Acts as a proxy for external services, handling auth, retries, and circuit breakers, allowing the main app to call it via `localhost`.
- Emphasized that all containers in a Pod share the same node, network namespace, and storage volumes, enabling IPC and file sharing (see the combined sketch below).
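A combined sketch: an init container that waits for a database, plus a log-forwarding sidecar sharing a volume with the main container. The image names, the service name `db-service`, and the log path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helpers   # hypothetical name
spec:
  initContainers:          # run to completion, in order, before the app starts
  - name: wait-for-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z db-service 5432; do sleep 2; done']
  containers:
  - name: app              # main container
    image: my-app:1.0      # placeholder; assumed to write logs to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-forwarder    # sidecar: runs for the Pod's whole lifetime
    image: busybox:1.36
    command: ['sh', '-c', 'tail -F /var/log/app/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}           # scratch space shared by both containers
```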
7. Summary and Next Steps [01:04:28 - 01:04:28]
- Concluded the Pod topic with a review of key configurations: YAML structure, health probes, resource limits/requests, volumes, and multi-container patterns.
- Advised learners to practice with a comprehensive YAML file integrating all concepts.
- Announced transition to the next topic: ReplicaSets and Deployments.
Appendix
Key Principles
- Declarative over Imperative: Always use YAML for production deployments.
- Health Probes are Critical: Liveness ensures recovery from crashes; readiness ensures traffic routing only to healthy Pods.
- Resource Sizing: Define `requests` for scheduling and `limits` for isolation. Use VPA to automate sizing.
- Data Persistence: Use volumes to retain data across Pod restarts or deletions.
- Multi-Container Patterns: Use init containers for setup; sidecars for auxiliary tasks (logging, proxying) without polluting the main app.
Tools and Commands
- `kubectl run --dry-run=client -o yaml` → Generate YAML template
- `kubectl apply -f <file.yaml>` → Apply configuration
- `kubectl get pods` → List Pods
- `kubectl describe pod <name>` → View events, probes, resource usage
- `kubectl port-forward <pod> <local>:<pod-port>` → Access Pod service locally
Common Pitfalls
- Modifying files inside a running container (not persistent; Kubernetes may replace the Pod).
- Omitting resource requests/limits → risk of node resource exhaustion.
- Using `hostPath` in production without considering node affinity and data portability.
- Confusing liveness and readiness probes: liveness restarts the container; readiness excludes the Pod from load balancing.
Practice Suggestions
- Create a Pod YAML (one possible solution is sketched below) with:
  - Two containers: main (nginx) and sidecar (log forwarder)
  - Liveness probe on `/health`
  - Readiness probe on `/ready`
  - Resource requests: 100m CPU, 128Mi memory; limits: 200m CPU, 256Mi memory
  - Volume mount from `hostPath` to persist logs
  - One init container that waits 10 seconds before exiting
- Use `kubectl apply` and verify with `describe` and `get events`.
- Simulate probe failures using `curl` to trigger restarts or unready states.
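One possible solution sketch for the practice Pod. Note that stock nginx does not serve `/health` or `/ready`, so those endpoints (and the forwarder image and log paths) are placeholders you would adapt before the probes can pass:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: practice-pod
spec:
  initContainers:
  - name: wait
    image: busybox:1.36
    command: ['sh', '-c', 'sleep 10']   # waits 10 seconds, then exits 0
  containers:
  - name: main
    image: nginx:1.27                   # placeholder tag
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /health                   # placeholder endpoint
        port: 80
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /ready                    # placeholder endpoint
        port: 80
      periodSeconds: 5
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-forwarder                 # sidecar
    image: busybox:1.36
    command: ['sh', '-c', 'tail -F /var/log/nginx/access.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    hostPath:
      path: /var/log/practice-pod      # placeholder node path
      type: DirectoryOrCreate
```

Apply it with `kubectl apply -f` and confirm the init container, probes, and mounts in `kubectl describe pod practice-pod`.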