10 videos 📅 2025-06-26 09:00:00 America/New_York

  • 2:14:39 (2025-06-26 09:07:32)
  • 1:12:32 (2025-06-26 09:11:34)
  • 6:42 (2025-06-26 11:08:41)
  • 35:51 (2025-06-26 11:24:37)
  • 38:41 (2025-06-26 13:21:35)
  • 20:37 (2025-06-26 15:06:35)
  • 51:46 (2025-06-27 09:06:19)
  • 58:45 (2025-06-27 09:06:25)
  • 36:01 (2025-06-27 11:26:09)
  • 1:12:38 (2025-06-27 13:45:09)

Course recordings are hosted on the DaDesktop training platform.

Visit the NobleProg websites for the related course.

Course outline: Kubernetes Comprehensive (course code: kubernetescompr)

Categories: Kubernetes

Summary

Overview

This course module covers Kubernetes Job and CronJob automation for scheduled and one-off tasks, followed by an in-depth exploration of Kubernetes Services and DNS. It demonstrates how to create, manage, and auto-delete Jobs for single-run tasks and CronJobs for recurring operations like backups or SSD garbage collection. The second half focuses on exposing applications via Services (ClusterIP, NodePort, and LoadBalancer) with hands-on configuration, troubleshooting, and deployment using Minikube and MetalLB. The session concludes with DNS verification from a dnsutils pod, covering internal and external resolution and the critical importance of robust DNS infrastructure in Kubernetes environments.

Topic (Timeline)

1. Jobs and CronJobs: Concepts and Implementation [00:00:01 - 00:08:25]

  • Jobs are used for one-off tasks; completed pods remain visible until manually deleted, with event logs typically purged after one hour.
  • Best practice: Auto-delete completed jobs to prevent pod clutter.
  • Example use case: Enabling metrics collection in databases that lack native support, using a job to ship metrics via Prometheus/Grafana, then auto-deleting after 30 minutes.
  • CronJobs handle recurring tasks (e.g., database backups, log shipping, SSD trim/garbage collection) using standard cron syntax (e.g., */1 * * * * for every minute).
  • CronJobs create a new pod for each scheduled run and automatically retain a configurable number of completed Jobs in their history (defaults: 3 successful, 1 failed).
  • Demonstration: Created a Job with manual cleanup, then a Job with activeDeadlineSeconds and ttlSecondsAfterFinished for auto-deletion (a minimal manifest sketch follows this list).
  • Demonstrated CronJob behavior: Observed a new pod created for each run over time, with automatic garbage collection keeping only the three most recent completed Jobs per the history limit.
  • Real-world application: CronJobs used to automate encrypted SSD trim and snapshot cleanup via CSI drivers to prevent storage exhaustion.
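
A minimal sketch of the two manifest shapes discussed above, using an illustrative busybox container (the names, image, timings, and commands are assumptions, not taken from the recording):

    # One-off Job that cleans itself up one minute after finishing
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: one-off-task
    spec:
      activeDeadlineSeconds: 120        # kill the Job if it runs longer than 2 minutes
      ttlSecondsAfterFinished: 60       # auto-delete the Job (and its pod) 60 s after completion
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: busybox:1.36
              command: ["sh", "-c", "echo running one-off task; sleep 10"]
    ---
    # Recurring CronJob that runs every minute and keeps a limited history
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: minute-task
    spec:
      schedule: "*/1 * * * *"           # standard cron syntax: every minute
      successfulJobsHistoryLimit: 3     # default is 3
      failedJobsHistoryLimit: 1         # default is 1
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: task
                  image: busybox:1.36
                  command: ["sh", "-c", "date; echo scheduled run"]

With ttlSecondsAfterFinished set, finished Jobs and their pods disappear on their own, which is the auto-cleanup behavior recommended above.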

2. Services: Types, Configuration, and Troubleshooting [00:09:24 - 00:29:38]

  • Services provide stable network endpoints to access dynamic pods, abstracting changing pod IPs via label selectors.
  • Service spec includes: selector (to match pod labels), ports (name, protocol, port, targetPort), and type.
  • Three service types:
    • ClusterIP: Default; internal-only IP. Use for pod-to-pod communication. Set clusterIP: None for headless service (DNS-based direct pod access).
    • NodePort: Exposes service on a static port (default range: 30000–32767) on each node’s IP. Enables external access without cloud load balancer.
    • LoadBalancer: Provisions external IP via cloud provider or bare-metal solution (e.g., MetalLB, KubeVip). Requires external LB setup on bare metal.
  • Hands-on demo: Deployed Nginx, created a ClusterIP service (not reachable externally), then a NodePort service (reachable via node IP and port); example manifests appear after this list.
  • Troubleshooting: Identified misconfigured pod labels (an app.kubernetes.io/name mismatch) that left the service's endpoints empty; fixed by correcting the deployment's pod template label to match the service selector.
  • Demonstrated LoadBalancer on Minikube using minikube tunnel to simulate external IP assignment via Docker network tunneling.
  • Verified connectivity: Accessed Nginx via tunnel-assigned IP in browser.
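
A minimal sketch of the kind of manifests used in the demo, assuming an nginx Deployment named web (the names, image tag, and nodePort value are illustrative). The essential point is that each Service's selector must match the pod template labels exactly:

    # Deployment whose pod template carries the label the Services select on
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app.kubernetes.io/name: web
      template:
        metadata:
          labels:
            app.kubernetes.io/name: web   # must match the Service selectors below
        spec:
          containers:
            - name: nginx
              image: nginx:1.25
              ports:
                - containerPort: 80
    ---
    # ClusterIP Service: internal-only access to the pods
    apiVersion: v1
    kind: Service
    metadata:
      name: web-clusterip
    spec:
      type: ClusterIP            # default type; omitting it has the same effect
      selector:
        app.kubernetes.io/name: web
      ports:
        - name: http
          protocol: TCP
          port: 80               # Service port
          targetPort: 80         # container port
    ---
    # NodePort Service: reachable externally on every node's IP at the nodePort
    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport
    spec:
      type: NodePort
      selector:
        app.kubernetes.io/name: web
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30080        # optional; must fall in the 30000-32767 range

After applying these, kubectl get endpoints web-clusterip should list the pod IPs; an empty endpoints list is the label-mismatch symptom described above.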

3. LoadBalancer with MetalLB and DNS Fundamentals [00:29:40 - 00:37:50]

  • Replaced minikube tunnel with MetalLB for realistic bare-metal LoadBalancer behavior.
  • Configured MetalLB via a ConfigMap with the IP range 192.168.49.50-192.168.49.60 (same subnet as the Minikube node); a sketch of such a ConfigMap follows this list.
  • Verified LoadBalancer service obtained external IP from MetalLB pool without tunneling.
  • Confirmed service connectivity via browser using MetalLB-assigned IP and service port.
  • Used kubectl get pods -o wide and kubectl get deployment to validate pod labeling and service-to-pod binding.
  • Introduced headless services: Used for stateful applications (e.g., databases) where direct pod DNS resolution is needed (e.g., pod-name.service-name.namespace.svc.cluster.local).
  • Deployed dnsutils pod to test internal DNS resolution:
    • nslookup kubernetes.default.svc.cluster.local returned ClusterIP of Kubernetes service (10.96.0.1).
    • Confirmed internal DNS functionality.
  • Tested external DNS resolution (e.g., nslookup google.com) to validate external connectivity.
  • Critical insight: Kubernetes is highly DNS-sensitive; weak or slow DNS resolvers in the underlying network will cause widespread failures, unlike regular browsing which masks delays via retries.
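
A sketch of the layer-2 address-pool ConfigMap used by the legacy MetalLB configuration style (the style the Minikube addon works with), using the address range from the session; the Service below is illustrative and reuses the web selector from the earlier sketch. Note that newer standalone MetalLB releases configure pools via IPAddressPool and L2Advertisement custom resources instead:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config
      namespace: metallb-system
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.49.50-192.168.49.60
    ---
    # LoadBalancer Service that should receive an external IP from the pool above
    apiVersion: v1
    kind: Service
    metadata:
      name: web-loadbalancer
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: web
      ports:
        - name: http
          port: 80
          targetPort: 80

kubectl get service web-loadbalancer should then show an EXTERNAL-IP from the 192.168.49.50-60 pool without running minikube tunnel.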

Appendix

Key Principles

  • Jobs are for one-time tasks; CronJobs for recurring ones. Always configure ttlSecondsAfterFinished to auto-clean.
  • Service selectors must exactly match pod labels. Mismatches cause “no endpoints” and are a top cause of service failures.
  • ClusterIP = internal, NodePort = node-level external, LoadBalancer = external IP (cloud or MetalLB).
  • Headless services (clusterIP: None) bypass the ClusterIP and use DNS to resolve individual pod IPs, which makes them ideal for stateful apps (see the sketch after this list).
  • MetalLB is essential for realistic LoadBalancer behavior on bare metal; minikube tunnel is a development-only simulation.
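
A minimal headless-Service sketch for a hypothetical stateful app such as Redis (all names are illustrative). Paired with a StatefulSet whose serviceName points at this Service, each pod becomes resolvable as pod-name.service-name.namespace.svc.cluster.local:

    apiVersion: v1
    kind: Service
    metadata:
      name: redis-headless
    spec:
      clusterIP: None            # headless: no virtual IP; DNS returns the pod IPs directly
      selector:
        app.kubernetes.io/name: redis
      ports:
        - name: redis
          port: 6379
          targetPort: 6379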

Tools Used

  • kubectl get jobs, get cronjobs, get pods, get services
  • kubectl describe for status and events
  • kubectl logs for job output
  • minikube start, minikube tunnel, minikube addons enable metallb
  • nslookup from a dnsutils pod for DNS validation (a pod manifest sketch follows this list)
  • YAML editing for Job, CronJob, Service, and ConfigMap definitions
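
A sketch of a disposable dnsutils pod; the image below is the one used in the upstream Kubernetes DNS-debugging documentation and is an assumption here, since the recording may have used a different image. The nslookup invocations are shown as comments:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dnsutils
    spec:
      restartPolicy: Always
      containers:
        - name: dnsutils
          image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
          command: ["sleep", "infinity"]
    # Once the pod is Running:
    #   kubectl exec -it dnsutils -- nslookup kubernetes.default.svc.cluster.local   # internal resolution
    #   kubectl exec -it dnsutils -- nslookup google.com                             # external resolution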

Common Pitfalls

  • Misconfigured pod labels causing service endpoints to be empty (see the sketch after this list).
  • Forgetting to set ttlSecondsAfterFinished on Jobs, leading to pod bloat.
  • Assuming minikube tunnel behaves like a real cloud LoadBalancer—this is not production-representative.
  • Weak internal DNS infrastructure causing Kubernetes instability (e.g., pod DNS resolution failures, controller timeouts).
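
A compact illustration of the first pitfall, with hypothetical names: the Service selector and the pod template label differ, so both objects apply cleanly but the Service's Endpoints object stays empty:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: demo-app
      template:
        metadata:
          labels:
            app.kubernetes.io/name: demo-app   # label the pods actually carry
        spec:
          containers:
            - name: nginx
              image: nginx:1.25
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: demo
    spec:
      selector:
        app.kubernetes.io/name: demo           # does not match demo-app, so no endpoints
      ports:
        - port: 80
          targetPort: 80
    # kubectl get endpoints demo   -> shows <none>
    # Fix: kubectl edit service demo (or the deployment) so the selector and label agree.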

Practice Suggestions

  • Create a Job that runs a script to write a timestamp to a file, then auto-deletes after 1 minute.
  • Configure a CronJob to run every 5 minutes that logs a message to stdout (a starting-point sketch follows this list).
  • Deploy a headless service for a hypothetical stateful app (e.g., Redis) and use nslookup to resolve individual pod IPs.
  • Set up MetalLB on Minikube and expose a service as LoadBalancer without tunneling.
  • Intentionally break a service by changing a pod label, then fix it using kubectl edit.
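
A possible starting point for the 5-minute CronJob exercise, assuming a busybox image (everything here is illustrative):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: hello-every-5m
    spec:
      schedule: "*/5 * * * *"            # every 5 minutes
      jobTemplate:
        spec:
          ttlSecondsAfterFinished: 120   # clean up each finished Job after 2 minutes
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: hello
                  image: busybox:1.36
                  command: ["sh", "-c", "date; echo hello from the CronJob"]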