Summary
Overview
This course module covers advanced Kubernetes resource isolation and workload management techniques, focusing on namespaces, taints/tolerations, deployments, stateful sets, and daemon sets. The session provides hands-on demonstrations of creating, labeling, and deleting namespaces; applying taints to nodes and tolerations to pods for scheduling control; and deploying and managing stateless (deployments), stateful (stateful sets), and node-level (daemon sets) workloads. Emphasis is placed on practical use cases, common pitfalls, and production-grade best practices for resource separation and cluster stability.
Topic (Timeline)
1. Namespace Fundamentals and Practical Usage [00:00:00 - 00:06:14]
- Introduced namespaces as a mechanism to isolate resources within a single cluster, ensuring resource names are unique per namespace.
- Emphasized best practices: avoid using the `default` or `kube-system` namespaces for application workloads; create dedicated namespaces for each application component (e.g., separate namespaces for the Minio operator and its two tenants).
- Explained namespace constraints: the ability to define default and maximum CPU/memory limits to enforce resource governance, especially useful for Helm charts lacking resource specifications.
- Demonstrated labeling namespaces for integration with features like the Gateway API to enable cross-namespace service access.
- Showed practical commands: `kubectl get namespaces`, `kubectl describe namespace`, `kubectl create namespace`, `kubectl label namespace`, and `kubectl delete namespace`.
- Highlighted that `kubectl delete namespace` is irreversible and removes all associated resources (pods, services, configs), making it ideal for development/testing to quickly reset faulty deployments.
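The per-namespace default and maximum resource limits mentioned above are typically enforced with a `LimitRange` object. A minimal sketch, assuming an illustrative namespace name `my-app` (not from the transcript):

```yaml
# Hypothetical LimitRange: enforces default and maximum
# CPU/memory for every container in the "my-app" namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
  namespace: my-app
spec:
  limits:
    - type: Container
      default:           # applied when a container declares no limits
        cpu: 500m
        memory: 256Mi
      defaultRequest:    # applied when a container declares no requests
        cpu: 250m
        memory: 128Mi
      max:               # hard per-container ceiling
        cpu: "1"
        memory: 512Mi
```

This is what makes namespaces useful as a guardrail for Helm charts that ship without resource specifications: the defaults fill the gap, and the maximums cap runaway requests.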
2. Taints and Tolerations for Node Scheduling Control [00:06:16 - 00:14:34]
- Defined taints as node-level restrictions that repel pods, and tolerations as pod-level allowances to schedule on tainted nodes.
- Demonstrated tainting the control plane node with `pod=true:NoSchedule` to prevent general workloads from running there.
- Showed that a pod with a node selector targeting the control plane still fails to schedule without a matching toleration.
- Corrected a YAML formatting error: tolerations require proper indentation with a hyphen (`-`) under `tolerations`, followed by the `key`, `value`, `effect`, and `operator` fields.
- Verified successful scheduling by matching the pod's toleration (`pod=true:NoSchedule`) to the node's taint.
- Discussed production use cases: pinning stateless workloads to worker nodes, preventing resource contention on control plane nodes, and isolating storage or GPU nodes via taints.
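The corrected toleration structure described above might look like the following sketch; the node hostname and pod name are illustrative assumptions, not values from the transcript:

```yaml
# Assumes the taint was applied earlier with:
#   kubectl taint nodes <control-plane-node> pod=true:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: control-plane-pod          # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: minikube   # assumed control plane hostname
  tolerations:
    - key: "pod"        # note the hyphen starting the list item
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: nginx
      image: nginx
```

Without the `tolerations` block, the node selector alone is not enough: the scheduler still refuses to place the pod on the tainted node.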
3. Deployments, Replica Sets, and Pod Management [00:14:34 - 00:22:38]
- Introduced deployments as the recommended way to manage stateless applications via replica sets.
- Demonstrated creating an Nginx deployment with a replica set managing three pods.
- Explained that deployments append unique hash suffixes to pod names (e.g., `nginx-deployment-6cf7b8d9c9`) to track individual pods under the replica set.
- Showed scaling operations: `kubectl scale deployment` from 3 to 4 pods (added one new pod with a new hash), then back to 3 (deleted the most recently created pod).
- Highlighted that deleting a deployment removes the replica set and all associated pods.
- Noted that errors such as “connection refused” usually mean the API server is unreachable rather than a YAML syntax problem, and emphasized learning from such errors.
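The Nginx deployment from the demo can be sketched roughly as follows; the image tag and label values are assumptions for illustration:

```yaml
# Minimal Deployment sketch: one replica set managing three pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx    # unpinned tag, as is common in demos
          ports:
            - containerPort: 80
```

Scaling as shown in the session is then `kubectl scale deployment nginx-deployment --replicas=4`, and scaling back down removes the most recently created pod first.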
4. Stateful Sets and Persistent Identity [00:22:40 - 00:27:56]
- Contrasted stateful sets with deployments: stateful sets assign stable, sequential names (e.g., `statefulset-0`, `statefulset-1`) that persist across restarts.
- Explained use cases: databases requiring consistent network identity and stable storage binding.
- Demonstrated creating a stateful set with two pods, then scaling down to one; the higher-numbered pod (`statefulset-1`) was deleted, preserving `statefulset-0`.
- Noted that deleting a stateful set without first scaling to zero may leave orphaned pods; best practice is to scale to zero before deletion.
- Mentioned that stateful sets often use headless services for DNS-based stable endpoints, but noted exceptions where standard services are required due to upstream container issues.
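A minimal sketch of the stateful set plus headless service pattern described above; the service name, labels, and image are illustrative, not taken from the transcript:

```yaml
# Headless service: clusterIP None gives each pod a stable DNS name
# of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx-sts
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset          # pods become statefulset-0, statefulset-1, ...
spec:
  serviceName: nginx-headless
  replicas: 2
  selector:
    matchLabels:
      app: nginx-sts
  template:
    metadata:
      labels:
        app: nginx-sts
    spec:
      containers:
        - name: nginx
          image: nginx
```

Scaling down with `kubectl scale statefulset statefulset --replicas=1` removes the highest-numbered pod first, which is the ordered-deletion behavior demonstrated in the session.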
5. Daemon Sets for Node-Level Workloads [00:27:58 - 00:35:10]
- Introduced daemon sets to ensure one pod runs on every node (or selected nodes), used for cluster-wide services like logging, monitoring, or CNI agents (e.g., Cilium).
- Demonstrated creating a multi-node Minikube cluster to show daemon set behavior across nodes.
- Deployed Fluentd as a daemon set, but encountered an error due to a missing “d” in the image name (`fluent` → `fluentd`).
- Corrected the YAML, reapplied, and verified three pods were created (one per node), each with a unique hash suffix.
- Explained that daemon sets automatically provision pods on newly added nodes.
- Clarified that FluentD is not Kubernetes-native; modern alternatives include Loki and Promtail.
- Concluded by noting image pull delays on multi-node clusters due to per-node container downloads.
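The Fluentd daemon set from the demo can be sketched as follows; the namespace and labels are illustrative assumptions:

```yaml
# DaemonSet sketch: the controller schedules exactly one pod per node,
# and automatically adds a pod when a new node joins the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging        # assumed dedicated namespace
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluentd    # the demo's typo was "fluent" (missing "d")
```

On a multi-node Minikube cluster, each node pulls the image independently, which explains the startup delays noted above.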
6. Lesson Summary and Transition to Next Topic [00:35:10 - 00:35:10]
- Recap: namespaces for isolation, taints/tolerations for scheduling control, deployments for stateless apps, stateful sets for persistent identity, daemon sets for node-level services.
- Acknowledged practical challenges: YAML formatting errors, API server connectivity issues, and multi-node image pull delays.
- Transitioned to next lesson (not covered in transcript).
Appendix
Key Principles
- Namespaces: Use dedicated namespaces for applications; never rely on `default` or `kube-system`.
- Taints/Tolerations: Use taints to protect critical nodes (e.g., control plane); use tolerations selectively to allow only approved workloads.
- Deployments: Always use deployments for stateless apps; they manage replica sets and enable rolling updates.
- Stateful Sets: Use for stateful applications requiring stable network identity and ordered scaling; always scale to zero before deletion.
- Daemon Sets: Use for node-level services (logging, monitoring, CNI); they automatically add a pod to each new node as the cluster grows.
Tools Used
- `kubectl get namespaces`, `kubectl describe namespace`, `kubectl create namespace`, `kubectl label namespace`, `kubectl delete namespace`
- `kubectl taint nodes`, `kubectl get pods -o wide`
- `kubectl scale deployment`, `kubectl scale statefulset`
- `kubectl create -f`, `kubectl delete -f`
- Minikube (multi-node cluster for daemon set demo)
Common Pitfalls
- Incorrect YAML indentation in tolerations (missing hyphen under `tolerations`).
- Forgetting to create a namespace before applying a pod/deployment.
- Using the `default` namespace for production workloads.
- Deleting a stateful set without scaling to zero first, leaving orphaned pods.
- Typos in container image names (e.g., `fluent` vs. `fluentd`).
- Misinterpreting “connection refused” as a YAML error; it often means the API server is unreachable.
Practice Suggestions
- Recreate all demos in a clean Minikube cluster.
- Practice tainting control plane and deploying pods with/without tolerations.
- Scale deployments and stateful sets up/down; observe pod naming and deletion order.
- Create a daemon set on a multi-node cluster and verify one pod per node.
- Intentionally break YAML files to learn how to read `kubectl describe` output for debugging.