4 videos 📅 2022-11-21 09:00:00 Africa/Blantyre

  • 1:04:49 (recorded 2022-11-21 11:28:21)
  • 44:16 (recorded 2022-11-21 13:36:53)
  • 44:08 (recorded 2022-11-21 14:41:53)
  • 10:27 (recorded 2022-11-21 15:56:25)

Course recordings on the DaDesktop training platform

Visit the NobleProg website for related courses

Summary

Overview

This session provides an in-depth, practical exploration of the Kubernetes resource hierarchy, focusing on Pods, ReplicaSets, and Deployments. It explains how these resources are linked through labels and label selectors to maintain desired state, enable scaling, and support zero-downtime deployments. The trainer demonstrates imperative and declarative resource management, deletion with and without cascading, and the central role of labels in resource orchestration. The session concludes with guidance on applying these concepts to real-world microservice deployments and best practices for label naming conventions.

Topic (Timeline)

1. Troubleshooting Volume Mounts and Minikube Environment [00:00:16.680 - 00:02:39.360]

  • Identifies an invalid NFS volume mount error due to non-existent NFS server; recommends switching to hostPath volume for local development.
  • Clarifies that the “host” in Minikube context refers to the Minikube VM, not the local Ubuntu machine; instructs user to use minikube ssh to access the correct host.
  • Confirms successful SSH access to Minikube and provides logout command to exit.
  • Notifies learners that 4 minutes remain to complete the current activity.
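The hostPath fix recommended above can be sketched as a minimal Pod manifest (the Pod name, mount path, and host directory below are illustrative, not from the session):

```yaml
# Minimal Pod using a hostPath volume instead of NFS.
# In a Minikube context the "host" is the Minikube VM, so the
# directory lives inside the VM (inspect it via `minikube ssh`).
apiVersion: v1
kind: Pod
metadata:
  name: web                   # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      hostPath:
        path: /data/nginx     # path on the Minikube VM, not the laptop
        type: DirectoryOrCreate
```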

2. Port Forwarding for Custom Applications [00:02:54.360 - 00:04:07.400]

  • Explains that custom applications (e.g., a card app) expose services on port 8080, unlike Nginx which uses port 80.
  • Emphasizes that in portForward configuration, the left side is the host port (any available port, e.g., 8080, 8081), and the right side is the container port where the app listens (e.g., 8080:8080).
  • Reinforces that container port must match the application’s exposed port for connectivity.
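The mapping described above can be sketched as follows, assuming a Pod named card whose application listens on 8080 (names and image are illustrative):

```yaml
# Pod for a custom app listening on 8080 (unlike nginx, which uses 80).
apiVersion: v1
kind: Pod
metadata:
  name: card
spec:
  containers:
    - name: card
      image: card-app:latest    # illustrative image name
      ports:
        - containerPort: 8080   # must match the port the app listens on
# Forwarding: the host port (left) can be any free port; the container
# port (right) must match what the app exposes:
#   kubectl port-forward pod/card 8081:8080
```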

3. Deleting Pods and Transitioning to Deployments [00:04:10.920 - 00:06:29.480]

  • Demonstrates kubectl get pods to verify deletion of previously created standalone Pods (e.g., card, nginx).
  • Introduces Deployment as a top-level Kubernetes resource that manages Pods through ReplicaSets.
  • Shows creation of a Deployment using kubectl create deployment my-deploy --image=nginx --replicas=5 and generating a YAML manifest via --dry-run=client -o yaml.
  • Highlights the structure: Deployment spec contains a template spec, which defines the Pod specification (containers, probes, volumes, etc.).
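The structure highlighted above looks roughly like the manifest below, a trimmed sketch of what the session's --dry-run=client -o yaml command generates (empty defaults and status fields omitted):

```yaml
# kubectl create deployment my-deploy --image=nginx --replicas=5 \
#   --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-deploy
  name: my-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-deploy       # must match the template's labels
  template:                # Pod template: everything below is a Pod spec
    metadata:
      labels:
        app: my-deploy
    spec:
      containers:
        - image: nginx
          name: nginx      # probes, volumes, etc. would go here
```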

4. Deployments, ReplicaSets, and Desired State Management [00:06:31.960 - 00:12:24.420]

  • Applies the Deployment YAML and observes creation of 5 Pods with names prefixed by the Deployment name (my-deploy-...).
  • Deletes one Pod manually and observes immediate replacement by the ReplicaSet, demonstrating the “desired state” reconciliation loop.
  • Explains the hierarchy: Deployment → ReplicaSet → Pods.
  • Clarifies that the ReplicaSet’s sole purpose is to keep the number of running Pods equal to the desired count (current state = desired state, abbreviated CS = DS).
  • Emphasizes that deleting a Pod triggers automatic replacement; standalone Pods (created via kubectl run) do not self-recover.

5. Scaling and Hierarchy Control [00:12:25.280 - 00:17:03.160]

  • Demonstrates scaling a Deployment imperatively using kubectl scale deployment my-deploy --replicas=3 and --replicas=5.
  • Shows that scaling a ReplicaSet directly (e.g., kubectl scale rs ...) is accepted but immediately overridden by the parent Deployment’s desired state.
  • Confirms that only the top-level Deployment controls the replica count; directly manipulating the ReplicaSet is ineffective and discouraged.
  • Explains that the Deployment enforces its desired state, making it the authoritative source for scaling.

6. Deletion with and without Cascade [00:17:05.460 - 00:20:23.940]

  • Deletes a Deployment using kubectl delete deployment my-deploy, which cascades deletion to its ReplicaSet and all associated Pods.
  • Demonstrates --cascade=false (--cascade=orphan in newer kubectl versions) to delete only the Deployment, leaving the ReplicaSet and Pods running as orphans.
  • Shows that orphaned ReplicaSet can then be scaled independently since no parent controls it.
  • Deletes the orphaned ReplicaSet with --cascade=false, leaving Pods as standalone, unmanaged resources.
  • Explains that orphaned Pods can be deleted without automatic replacement.

7. Kubernetes Resource Hierarchy and Use Cases [00:20:19.180 - 00:25:23.120]

  • Compares deployment strategies:
    • Standalone Pod: Manual management; suitable for one-off or test workloads.
    • Standalone ReplicaSet: Ensures a fixed number of Pods; no rollout features.
    • Deployment: Enables zero-downtime rollouts, rollbacks, and automated scaling; recommended for production microservices.
  • Advises that for microservices (M1, M2, M3), each should be deployed via its own Deployment with appropriate replica counts.
  • Notes that liveness/readiness probes, resource limits, and volumes are always defined in the Pod spec; in a Deployment or ReplicaSet they sit inside the Pod template rather than on the higher-level resource itself.
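Following the advice above, each microservice gets its own Deployment, with probes and limits nested under spec.template.spec. A sketch for a hypothetical service m1 (image tag, endpoint, and limits are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: m1                    # one Deployment per microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: m1
  template:
    metadata:
      labels:
        app: m1
    spec:
      containers:
        - name: m1
          image: m1:1.0       # illustrative image tag
          livenessProbe:
            httpGet:
              path: /healthz  # illustrative endpoint
              port: 8080
          resources:
            limits:
              cpu: "500m"
              memory: 256Mi
```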

8. Labels and Label Selectors: Core to Kubernetes Orchestration [00:25:23.740 - 00:39:41.250]

  • Explains that Kubernetes resources are linked via labels (key-value pairs), not names.
  • Demonstrates default labels: app=my-deploy (from Deployment) and pod-template-hash=... (from ReplicaSet), both applied to Pods.
  • Shows how to add/remove labels using kubectl label pod <name> env=prod and kubectl label pod <name> env-.
  • Uses kubectl get pods --show-labels to view labels.
  • Demonstrates label selector queries: kubectl get pods -l env=prod, kubectl get pods -l 'app in (my-deploy)'.
  • Explains that ReplicaSet uses label selectors internally to find and manage Pods: get pods -l app=my-deploy,pod-template-hash=....
  • Demonstrates “adopting” and “quarantining” Pods:
    • Adopting: A new ReplicaSet with matching labels adopts existing Pods.
    • Quarantining: Removing a label from a Pod causes the ReplicaSet to ignore it; a new Pod is created to meet the desired count.
    • Reapplying the label causes the orphaned Pod to be re-adopted, and one Pod is deleted to maintain the desired count.
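The adoption and quarantine behavior above follows directly from the ReplicaSet's label selector; a minimal sketch (the ReplicaSet name is illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deploy    # any Pod carrying this label is managed by this RS
  template:
    metadata:
      labels:
        app: my-deploy
    spec:
      containers:
        - name: nginx
          image: nginx
# Removing the label quarantines a Pod (the RS spawns a replacement):
#   kubectl label pod <name> app-
# Reapplying it re-adopts the Pod, and the RS deletes one to stay at 3:
#   kubectl label pod <name> app=my-deploy
```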

9. Controllers, API Interaction, and Best Practices [00:39:41.250 - 00:42:36.650]

  • Introduces kube-controller-manager as the binary housing all controllers (ReplicaSet, Deployment, etc.) that reconcile state.
  • Clarifies that the kubelet executes actions on nodes, while the scheduler assigns Pods to nodes.
  • Warns that shared namespaces with overlapping labels can cause unintended adoption; recommends using organizational labels (e.g., team=finance, app=payment-service).
  • Notes that in production, RBAC can restrict label modification to prevent accidental disruption.
  • Encourages learners to practice: creating Deployments, scaling, deleting with/without cascade, labeling, and using selectors.
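One way to follow the labeling advice above is to combine organizational keys like team with Kubernetes' recommended app.kubernetes.io/* label set, which avoids selector collisions between generic app values (the values below are illustrative):

```yaml
metadata:
  labels:
    team: finance                            # organizational label from the session
    app.kubernetes.io/name: payment-service
    app.kubernetes.io/version: "1.4.2"       # illustrative version
    app.kubernetes.io/part-of: billing       # illustrative grouping
```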

Appendix

Key Principles

  • Desired State Management: Kubernetes resources (Deployments, ReplicaSets) continuously reconcile actual state with desired state.
  • Label-Based Orchestration: Resources are linked via labels, not names. Label selectors are the foundation of resource discovery and management.
  • Hierarchy Enforcement: Child resources (ReplicaSet, Pods) are controlled only by their parent. Direct manipulation of children is overridden.
  • Cascading Deletion: By default, deleting a parent deletes all its children; use --cascade=false (--cascade=orphan in newer kubectl versions) to retain children as orphans.

Tools and Commands

  • kubectl create deployment <name> --image=<image> --replicas=N
  • kubectl scale deployment <name> --replicas=N
  • kubectl delete deployment <name> --cascade=false (newer kubectl: --cascade=orphan)
  • kubectl label pod <name> <key>=<value>
  • kubectl label pod <name> <key>-
  • kubectl get pods --show-labels
  • kubectl get pods -l <key>=<value>
  • kubectl get rs,deploy,pods
  • minikube ssh (to access the Minikube VM)

Common Pitfalls

  • Assuming that “host” in a Minikube context means the local machine rather than the Minikube VM.
  • Assuming Pod names determine ownership; actual ownership is determined by labels.
  • Attempting to scale a ReplicaSet directly — changes are overridden by the Deployment.
  • Using generic labels (e.g., app=nginx) across multiple teams, leading to unintended Pod adoption.

Practice Suggestions

  • Create a Deployment with 3 replicas, then delete one Pod — observe automatic replacement.
  • Use --dry-run=client -o yaml to generate manifests before applying.
  • Label a Pod manually, then create a ReplicaSet with a matching selector — observe adoption.
  • Quarantine a Pod by removing its app label, then reapply it — observe deletion of excess Pod.
  • Try deleting a Deployment with and without --cascade=false and inspect remaining resources.