10 videos 📅 2026-01-05 11:00:00 America/New_York
(recording start · duration)
  • 2026-01-05 11:02:48 · 1:27:43
  • 2026-01-05 13:23:18 · 36:42
  • 2026-01-05 14:51:28 · 24:18
  • 2026-01-06 11:00:44 · 23:08
  • 2026-01-06 12:03:13 · 1:25:19
  • 2026-01-07 10:43:36 · 37:20
  • 2026-01-07 13:35:15 · 59:05
  • 2026-01-08 10:59:14 · 34:36
  • 2026-01-08 12:32:09 · 48:32
  • 2026-01-08 15:06:46 · 17:47

Course recordings hosted on the DaDesktop training platform

Visit the NobleProg website for the related course

Course outline: Kubernetes from Basic to Advanced (Course code: kubernetes)

Categories: Docker · Kubernetes

Summary

Overview

This course provides a comprehensive, hands-on introduction to Kubernetes, starting with container architecture and progressing through cluster setup, core components, distributions, scaling, and operational best practices. The instructor, a Kubernetes expert since 2016, contextualizes Kubernetes’ evolution from its government-funded origins to modern cloud-native and Kubernetes-native paradigms. The session emphasizes practical, production-grade workflows using Minikube for local training, infrastructure as code (IaC), and declarative GitOps to avoid risky in-place upgrades. The course is structured around nine lessons, with 70% dedicated to labs, and prioritizes resource efficiency, security, and DevOps automation over bloated cloud-native tooling.

Topics (Timeline)

1. Course Introduction & Minikube Environment Setup [00:00:01 - 00:04:34]

The session begins with an overview of the course structure: nine lessons over four days, 70% hands-on labs, daily 4-hour sessions (11 a.m.–3:30 p.m.) with a 30-minute break. The instructor introduces Minikube as the primary training environment, explaining its ability to simulate multi-node, HA Kubernetes clusters on a single machine. Setup instructions are provided for accessing the terminal (MATE Terminal), verifying tool versions (minikube, kubectl, Helm 3.19.3, Docker 29, Cilium 0.18.9), and ensuring compatibility. The instructor highlights Linux as the optimal platform for Kubernetes tooling, noting Windows limitations and macOS as acceptable but suboptimal. The session ends with a clear directive to use Minikube for all training due to its production-like feature support without multi-VM overhead.
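
For reference, the environment check amounts to printing each tool's version; a minimal sketch (the command names are the standard CLIs, and the versions quoted above are what the instructor showed, so yours may differ):

    minikube version
    kubectl version --client
    helm version --short    # session quoted Helm 3.19.3
    docker --version        # session quoted Docker 29
    cilium version          # session quoted 0.18.9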

2. Kubernetes Architecture & Container Runtime Deep Dive [00:04:37 - 00:34:17]

The instructor demonstrates how to launch and verify a Minikube cluster using minikube start --driver=docker --container-runtime=containerd, then inspects the cluster with kubectl get nodes and kubectl get pods -A. The session dissects the Kubernetes node architecture, showing how kubelet interacts with the Container Runtime Interface (CRI) via gRPC to communicate with containerd through its socket (containerd.sock). The instructor contrasts Docker and containerd, explaining Kubernetes’ shift from the Docker runtime to containerd for better standardization and performance. Using crictl commands (images, ps, pods), the session reveals how Kubernetes internals mirror Docker commands, demystifying the “magic” of container orchestration. The instructor emphasizes that containers are OS-level abstractions, contrasting them with VMs, and introduces OCI (Open Container Initiative) as the standard for images and runtimes, with CRI serving as the interface Kubernetes uses to drive those runtimes.
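
The commands referenced above, collected as a runnable sketch (the crictl calls assume you are inside the Minikube node, where sudo is typically required, and the socket path is the usual containerd default):

    # Start a single-node cluster with containerd as the runtime
    minikube start --driver=docker --container-runtime=containerd

    # Inspect the cluster from the outside
    kubectl get nodes -o wide
    kubectl get pods -A

    # Inspect the runtime from inside the node; crictl speaks CRI over
    # containerd's socket (usually /run/containerd/containerd.sock)
    minikube ssh
    sudo crictl images    # analogous to 'docker images'
    sudo crictl ps        # running containers
    sudo crictl pods      # pod sandboxes, which have no Docker equivalent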

3. Kubernetes Fundamentals: Clusters, Components, and Distributions [00:34:17 - 01:00:29]

The session transitions to core Kubernetes concepts: defining Kubernetes as an automated container orchestration system, explaining the origin of the name “K8s,” and contrasting cloud-native vs. Kubernetes-native approaches. The instructor critiques the bloat introduced by cloud tooling (2018–2021) and advocates for streamlined, resource-efficient “Kubernetes-native” clusters. The core components are detailed: the API server (HTTP verbs, etcd interaction), kube-scheduler (pod-to-node assignment), kube-controller-manager (control loops for replication, namespaces), and the node-level kube-proxy (being replaced by CNIs such as Cilium). The kubelet’s role as a node-level daemon managing the pod lifecycle is reinforced. Kubernetes distributions are analyzed: enterprise (RKE2, OpenShift, Talos), development (Minikube, MicroK8s, K3s), and tools the instructor treats as deprecated (Rancher Desktop, Docker Desktop). The instructor strongly recommends RKE2 for production due to its IaC compatibility and lack of enterprise fees, contrasting it with Red Hat’s OpenShift, which is often mandated for DoD contracts.
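
One way to see these components in the training cluster is to list the kube-system pods; the label selector below assumes a kubeadm-style setup (which Minikube uses), so treat it as a sketch rather than the course's exact commands:

    # Control plane components run as static pods in kube-system
    kubectl get pods -n kube-system -l tier=control-plane

    # kube-proxy and the CNI run as DaemonSets on every node
    kubectl get daemonsets -n kube-system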

4. Cluster Design, High Availability, and Self-Healing [01:00:29 - 01:16:48]

The session covers cluster design principles: etcd as a high-availability key-value store that requires an odd number of nodes (1, 3, 5), with performance advice favoring SSD/NVMe storage and RAID 10. The instructor explains the evolution of node naming (master/slave → control plane/worker → server/agent) and the distinction between worker nodes and storage nodes (agent plus storage role). Cluster scale limits (5,000 nodes max, 110 pods per node due to per-node IP subnet constraints) are contextualized with modern IaC practices favoring ~100-node clusters segmented by workload. High availability is addressed via virtual IPs (VIPs) for control plane failover and pod anti-affinity to prevent resource concentration. Self-healing features are detailed: automatic pod rescheduling on node failure, container restarts via probes, service endpoint removal, ReplicaSet recovery, and persistent volume resilience. The instructor emphasizes that these features enable DevOps teams to maintain availability without deep Kubernetes expertise.
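
A quick way to watch the self-healing behavior described above, using a throwaway deployment (the name and image here are illustrative, not from the course):

    # Create three replicas, then kill one and watch the ReplicaSet replace it
    kubectl create deployment web --image=nginx --replicas=3
    kubectl get pods -l app=web

    kubectl delete $(kubectl get pods -l app=web -o name | head -n 1)
    kubectl get pods -l app=web    # a new pod appears almost immediately

    kubectl delete deployment web  # clean up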

5. Operational Best Practices: Causality, Scaling, and IaC-Driven Upgrades [01:16:48 - 01:26:37]

The session tackles the “causality dilemma,” where one resource depends on another existing first (for example, CRDs must be present before a Helm chart that uses them can be deployed), and resolves it by sequencing Helm chart components. Node scaling is explained: worker nodes can be scaled up or down freely, while control plane nodes must be kept at an odd count. The instructor contrasts in-place upgrades (roughly a 90% success rate, with a 10% risk of cluster failure) with IaC-driven cluster replacement: spinning up a new cluster on the latest tested version, mirroring state via GitOps, and switching DNS. This “blue-green” approach eliminates upgrade risk and removes the need for full-time Kubernetes engineers. The session concludes by previewing upcoming labs: adding and removing control plane and worker nodes, setting up an HA cluster with a CNI, and inspecting etcd data directly, all to reinforce IaC and GitOps as the standard for production Kubernetes operations.
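
A hedged sketch of what the CRD-before-chart sequencing might look like; the chart name and paths are illustrative, while --skip-crds and the kubectl wait condition are standard Helm 3 and kubectl features:

    # 1. Install the CRDs on their own and wait until the API server accepts them
    kubectl apply -f ./my-chart/crds/
    kubectl wait --for condition=established crd --all --timeout=60s

    # 2. Install the chart itself, skipping its bundled CRDs
    helm install my-release ./my-chart --skip-crds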

6. Course Wrap-up and Lab Preview [01:26:37 - 01:27:01]

The session ends with a transition to the afternoon lab session, previewing practical exercises: configuring HA clusters, inspecting etcd key-value data, and managing the node lifecycle. The instructor reiterates that all code, YAML files, and slides will be made available via GitLab after the session, and encourages learners to use the provided Minikube environment for self-paced review. The recording concludes with the scheduled break and the transition to the labs.

Appendix

Key Principles

  • Kubernetes-Native over Cloud-Native: Prioritize minimal, efficient clusters over bloated cloud tooling.
  • Infrastructure as Code (IaC): Use declarative IaC (Terraform, Ansible) to provision clusters, not manual in-place upgrades.
  • Declarative GitOps: Mirror application state to new clusters via Git, not live configuration changes.
  • DevOps Empowerment: Enable non-specialists to manage clusters safely using IaC and GitOps, reducing reliance on Kubernetes engineers.

Tools Used

  • Minikube: Primary training environment for single-node and multi-node cluster simulation.
  • containerd: Default container runtime for Kubernetes (replaces Docker runtime).
  • crictl: CLI for debugging container runtime state (e.g., crictl ps, crictl images).
  • kubectl: Primary CLI for cluster interaction (e.g., get nodes, get pods -A).
  • Helm 3.19.3: Recommended version for chart management; avoid Helm 4 due to instability.
  • Cilium: CNI used in the provided RKE2 cluster, replacing kube-proxy for improved performance.
  • RKE2: Recommended production distribution; IaC-compatible and free of enterprise fees.

Common Pitfalls

  • Windows Limitations: Many Kubernetes tools (e.g., CNI, kubectl plugins) are incompatible or unstable on Windows.
  • Docker Compose in Production: Can bypass host firewall rules and overload network overlays; if deployed at all, use it only behind a WAF (e.g., NGINX with OWASP rules).
  • In-Place Upgrades: 10% failure rate on long-running clusters; always test upgrades in IaC-based staging environments.
  • etcd on Spinning Drives: Poor performance; use SSDs or NVMe with RAID 10 for production.
  • Helm 4: Avoid in production; not yet stable with upstream charts.

Practice Suggestions

  • Rebuild Minikube environment daily using minikube stop && minikube delete --all && minikube start --container-runtime=containerd.
  • Practice crictl commands to inspect containers and pods directly on the node via minikube ssh.
  • Use kubectl get nodes -o wide to verify kernel and container runtime versions.
  • Simulate HA by creating multiple Minikube profiles (e.g., minikube start -p ha-cluster) and deleting them after use, as sketched after this list.
  • Write a simple Helm chart with CRDs and test the causality dilemma by installing CRDs separately before the chart.
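
A possible shape for the HA simulation suggested above; the profile name and node count are illustrative:

    # Spin up a throwaway multi-node profile
    minikube start -p ha-cluster --nodes 3 --container-runtime=containerd
    kubectl get nodes -o wide          # one control plane node plus two workers

    # Rehearse node lifecycle operations, then tear the profile down
    minikube node list -p ha-cluster
    minikube node add -p ha-cluster
    minikube delete -p ha-cluster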