Course recording on the DaDesktop training platform
Visit the NobleProg websites for the related course
Course outline: Kubernetes from Basic to Advanced (course code: kubernetes)
Categories: Docker · Kubernetes
Summary
Overview
This course provides a comprehensive, hands-on guide to Kubernetes logging, monitoring, and observability. It covers core concepts including metrics collection (CPU, memory, network), log persistence challenges, Kubernetes events, and the deployment of a full-stack observability toolset: Prometheus, Grafana, Alertmanager, Loki, and Promtail. The session walks through installing and configuring the Kubernetes Metrics Server, MongoDB with an admin UI, and the Kube Prometheus Stack via Helm, emphasizing practical setup, troubleshooting of common failures (e.g., crash loops, misconfigured values), and securing credentials. The goal is to equip learners with the skills to deploy, monitor, and troubleshoot applications and cluster components in Kubernetes using industry-standard observability tooling.
Topics (Timeline)
1. Kubernetes Logging and Monitoring Fundamentals [00:00:04 - 00:03:05]
- Kubernetes monitoring collects core metrics: CPU, RAM, network I/O, storage, pod/node status, and application-specific metrics (e.g., DB connections, latency).
- Metrics endpoints can be added via sidecar containers when an application does not expose them natively or only exposes them in a paid tier.
- Logs are non-persistent and are deleted when a pod is deleted; the previous container's logs can be retrieved with `kubectl logs -p` only if the pod wasn't recreated.
- A crash loop back-off results in permanent log loss unless logs are externalized.
- Kubernetes events persist for 1 hour and are accessible via `kubectl describe pod`.
- The Kubernetes Metrics Server enables real-time resource usage via `kubectl top pods/nodes` and is required for HPA.
- Prometheus extends beyond the Metrics Server with advanced querying, alerting, and long-term storage.
- Core cluster components to monitor: kube-apiserver, CoreDNS, etcd, kubelet, kube-proxy.
- Binding components to `0.0.0.0` enables monitoring but is insecure; use only in development.
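A minimal sketch of the commands behind these points (pod and namespace names are placeholders):

```bash
# Logs of the previous container instance; works only while the pod object still exists
kubectl logs -p <pod-name> -n <namespace>

# Events recorded for a pod (retained for roughly one hour)
kubectl describe pod <pod-name> -n <namespace>

# Live CPU/memory usage; fails unless the Metrics Server is installed
kubectl top pods -n <namespace>
kubectl top nodes
```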
2. GELK Stack and Kube Prometheus Stack Overview [00:03:06 - 00:04:23]
- GELK (Grafana, Elasticsearch, Logstash, Kibana) is referenced as a full-stack solution, but the course focuses on Loki + Promtail + Grafana for logging and Prometheus for metrics.
- Kube Prometheus Stack (via Helm) provides an integrated, production-ready stack for metrics, alerting, and dashboards without managing multiple charts.
- Grafana repo Helm charts for Loki and Promtail enable logging/event collection, often paired with MinIO for storage.
- Grafana dashboards can be imported via ID/revision or via a ConfigMap with JSON; UI-imported dashboards are non-persistent (see the ConfigMap sketch after this list).
- Prometheus is the default metrics data source; Loki is the default logging data source.
- Available dashboards: Node Exporter, CoreDNS, Full Kubernetes, MongoDB, Redis, Longhorn, and Events.
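Since UI-imported dashboards are lost on pod restart, the ConfigMap route is the persistent one. A minimal sketch, assuming the Grafana chart's default sidecar label `grafana_dashboard` and a placeholder dashboard JSON:

```yaml
# ConfigMap picked up by the Grafana dashboard sidecar in the monitoring namespace
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"   # the sidecar watches for this label (chart default)
data:
  sample-dashboard.json: |
    {
      "title": "Sample Dashboard",
      "panels": []
    }
```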
3. Database Admin UIs and Helm Deployment [00:04:23 - 00:06:28]
- Database admin UIs (MongoDB, PGAdmin, MinIO Tenant, Longhorn Admin) are deployable via Helm for development environments.
- These require additional security hardening for production.
- Lab objectives: Install the Metrics Server, MongoDB with admin UI, Grafana, Prometheus, Alertmanager, sidecar, ServiceMonitor, Loki, Promtail, and MinIO.
4. Lab: Setting Up Minikube and Metrics Server [00:06:28 - 00:08:25]
- Start a fresh Minikube cluster; it sometimes requires two restart cycles due to startup issues.
- Use `kubectl top pods/nodes` to verify the Metrics Server is working (a failure indicates it is not installed).
- Create a 3-node Minikube cluster with 2 CPUs per node for scalability testing (see the sketch after this list).
- Confirm the Metrics Server is functional via `kubectl top nodes` (no error = success).
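A minimal sketch of the cluster setup, assuming the `metrics-server` Minikube addon:

```bash
# Fresh 3-node cluster with 2 CPUs per node
minikube start --nodes 3 --cpus 2

# Enable the Metrics Server addon
minikube addons enable metrics-server

# Verify: these should print usage figures rather than an error
kubectl top nodes
kubectl top pods -A
```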
5. Lab: Storage Class and MongoDB Installation [00:08:25 - 00:11:58]
- Use the Local Path Provisioner (Rancher) for persistent volumes in Minikube, despite the warning about an unapproved add-on.
- Create the namespace `mongodb` and use `values.yaml` to configure the Helm chart.
- Pin the MongoDB operator to version `1.2.1.2` to avoid version mismatch issues (v8 vs v7).
- Install the MongoDB operator and PSMDB database with Helm (see the sketch after this list), ensuring `deletePVC` and `backup` are disabled.
- Verify pods, PVCs, and PVs are created and running.
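A minimal sketch of the install flow, assuming the Percona chart names `psmdb-operator` and `psmdb-db`; pin the operator version and check keys such as `deletePVC` against `helm show values` before installing:

```bash
# Percona Helm repository hosts both the operator and the database charts
helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update

kubectl create namespace mongodb

# Operator first (pinned), then the database with local overrides
helm install mongodb-operator percona/psmdb-operator -n mongodb --version <pinned-version>
helm install mongodb percona/psmdb-db -n mongodb -f values.yaml
```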
6. Lab: Accessing MongoDB via CLI and Log Behavior [00:11:58 - 00:14:25]
- Use `kubectl run` with the `percona-server-mongodb:8` image to open an interactive shell.
- Command: `kubectl run -it --rm --tty mongodb-client --image=percona-server-mongodb:8 --restart=Never -- bash -c "mongo --host mongodb-0.mongodb-headless.mongodb.svc.cluster.local -u admin -p $(echo *** | base64 -d)"`
- Execute `show dbs` to list the databases (config, local, admin).
- Note: MongoDB 8 reduces verbose logging (e.g., WiredTiger logs) to prevent log flooding.
7. Lab: Deploying MongoDB Express UI and Troubleshooting [00:14:25 - 00:21:58]
- Install MongoDB Express via Cowboy Sysop Helm repo (community-maintained chart).
- Add the repo: `helm repo add cowboy-sysop https://cowboy-sysop.github.io/helm-charts/`
- Generate `values.yaml` with `helm show values cowboy-sysop/mongo-express`.
- Modify `service.type` from `ClusterIP` to `NodePort` (see the override sketch after this list).
- Set `extraEnvVars` for the MongoDB connection URL and credentials.
- Install with Helm; the pod fails with `0/1` containers running due to a missing database user or connection URL.
- Root cause: no user was created in MongoDB for MongoDB Express to authenticate with.
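A minimal sketch of the service override, assuming the chart's standard `service.type` key:

```yaml
# values.yaml override for the cowboy-sysop/mongo-express chart
service:
  type: NodePort   # default is ClusterIP; NodePort exposes the UI outside the cluster
```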
8. Lab: Automating MongoDB User Creation with Kubernetes Job [00:21:58 - 00:27:08]
- Create a secret `mdb-exp-user-secret.yaml` with a plain-text username/password (mdbexpressadmin/mdbexpressadmin123456) and roles (dbAdminAnyDatabase, userAdmin, clusterMonitor, readWrite).
- Apply the secret: `kubectl apply -f mdb-exp-user-secret.yaml` (note: plain text is for demo only; in production, use `data` + base64 encoding).
- Create a job `mdbexpuserjob.yaml` to run the MongoDB user-creation script and auto-delete after completion (sketched after this list).
- Verify job status: `kubectl get jobs -n mongodb` → "Completed".
- Update the mongo-express `values.yaml` with `extraEnvVars`:

      extraEnvVars:
        - name: ME_CONFIG_MONGODB_SERVER
          value: mongodb-0.mongodb-headless.mongodb.svc.cluster.local
        - name: ME_CONFIG_MONGODB_PORT
          value: "27017"
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          value: mdbexpressadmin
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          value: mdbexpressadmin123456
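A minimal sketch of what such a Job could look like; the image tag, admin secret name and key, and the use of `mongosh` are assumptions, and `ttlSecondsAfterFinished` handles the auto-deletion mentioned above:

```yaml
# mdbexpuserjob.yaml - illustrative only; adjust image, host, and secret references
apiVersion: batch/v1
kind: Job
metadata:
  name: mdb-exp-user-job
  namespace: mongodb
spec:
  ttlSecondsAfterFinished: 60        # delete the Job object shortly after completion
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-user
          image: percona/percona-server-mongodb:8.0   # assumed tag; ships a Mongo shell
          command: ["bash", "-c"]
          args:
            - >
              mongosh "mongodb://admin:${ADMIN_PASSWORD}@mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017/admin"
              --eval 'db.createUser({user: "mdbexpressadmin", pwd: "mdbexpressadmin123456",
                roles: ["dbAdminAnyDatabase", "userAdmin", "clusterMonitor", "readWrite"]})'
          env:
            - name: ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secrets                  # assumed operator-managed secret
                  key: MONGODB_USER_ADMIN_PASSWORD       # assumed key name
```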
9. Lab: Accessing MongoDB Express UI [00:27:08 - 00:32:32]
- Reinstall MongoDB Express Helm chart with updated values.
- Verify pod status: `kubectl get pods -n mongodb`.
- Get the NodePort: `kubectl get svc -n mongodb mongodb-express`.
- Access the UI via `http://<minikube-ip>:<nodeport>/mongodb-admin`.
- Login with username `server`, password `password` (hardcoded for demo).
- Note: exposed credentials are for learning; production requires secrets and ingress TLS.
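Minikube can also resolve the URL in one step; a small sketch (service name assumed to match the release):

```bash
# Prints a ready-to-open http://<minikube-ip>:<nodeport> URL for the service
minikube service mongodb-express -n mongodb --url
```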
10. Lab: Deploying Kube Prometheus Stack [00:32:33 - 00:43:53]
- Add the Helm repo: `helm repo add prometheus-community https://prometheus-community.github.io/helm-charts`
- Create the namespace: `kubectl create namespace monitoring`
- Generate the full `values.yaml` with `helm show values prometheus-community/kube-prometheus-stack`
- Install with `helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring -f values.yaml`
- Common errors are YAML syntax mistakes (e.g., `>` instead of `-`, extra spaces, a missing `e` in `alertmanagerEnabled: true`)
- After install, extract a simplified `values.yaml` using `helm get values monitoring -n monitoring > kube-prometheus-stack-2-values.yaml`
- Modify `kube-prometheus-stack-2-values.yaml` to enable `NodePort` for the Grafana, Alertmanager, and Prometheus services (see the override sketch after this list).
- Uninstall first, then reinstall with: `helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring -f kube-prometheus-stack-2-values.yaml`
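A minimal sketch of the NodePort overrides; these keys match recent chart versions, but verify them against the generated values file:

```yaml
# kube-prometheus-stack-2-values.yaml - expose the three UIs via NodePort
grafana:
  service:
    type: NodePort
prometheus:
  service:
    type: NodePort
alertmanager:
  service:
    type: NodePort
```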
11. Lab: Accessing Grafana and Monitoring Stack [00:43:53 - 00:47:36]
- Get the NodePorts: `kubectl get svc -n monitoring`
- Identify the Grafana NodePort (e.g., 30000+).
- Access the Grafana UI: `http://<minikube-ip>:<grafana-nodeport>`
- Default login: username `admin`, password retrieved from the secret: `kubectl get secret -n monitoring monitoring-kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode`
- Verify all components (Prometheus, Alertmanager, Grafana) are running and accessible.
- Note: all services should show the `NodePort` type, not `ClusterIP`, for external access.
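A consolidated verification sketch (the Grafana secret name follows the Helm release name and may differ in your cluster):

```bash
# Service types and ports; Grafana, Prometheus, and Alertmanager should show NodePort
kubectl get svc -n monitoring

# Decode the Grafana admin password
kubectl get secret -n monitoring monitoring-kube-prometheus-stack-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo
```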
Appendix
Key Principles
- Log Persistence: Kubernetes logs are ephemeral; externalize with Loki/Promtail or similar.
- Metrics Collection: Use the Metrics Server for basic `top` commands; Prometheus for advanced monitoring and alerting.
- Sidecars: Required to export metrics from containers that don't natively expose endpoints.
- Helm Best Practices: Pin versions, use `values.yaml` for overrides, avoid monolithic charts in GitOps.
- Security: Never use plain-text secrets in production; use `data` + base64 or external secret managers.
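As an illustration of the base64-encoded form, a minimal sketch mirroring the demo credentials above:

```yaml
# Secret using base64-encoded `data` instead of plain-text values
apiVersion: v1
kind: Secret
metadata:
  name: mdb-exp-user-secret
  namespace: mongodb
type: Opaque
data:
  username: bWRiZXhwcmVzc2FkbWlu              # echo -n mdbexpressadmin | base64
  password: bWRiZXhwcmVzc2FkbWluMTIzNDU2      # echo -n mdbexpressadmin123456 | base64
```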
Tools Used
- Metrics: Kubernetes Metrics Server, Prometheus
- Logging: Loki, Promtail
- Visualization: Grafana
- Alerting: Alertmanager
- Storage: MinIO, Local Path Provisioner
- Databases: MongoDB (with operator), Percona client
- UIs: MongoDB Express, Grafana, MinIO Tenant
- Cluster: Minikube (single and multi-node)
Common Pitfalls
- Metrics Server fails to start on first Minikube init → restart twice or recreate the profile.
- MongoDB Express fails to connect → missing user or incorrect `ME_CONFIG_MONGODB_SERVER` URL.
- Helm install fails due to YAML syntax → extra spaces, `>` instead of `-`, missing `e` in `alertmanagerEnabled`.
- Grafana NodePort missing → `service.type` not set to `NodePort` in `values.yaml`.
- Secret passwords visible in plain text → use `data` + base64 encoding in production.
Practice Suggestions
- Recreate the entire stack on a fresh Minikube cluster without reference.
- Replace MongoDB with PostgreSQL and deploy PGAdmin using similar patterns.
- Create a custom Grafana dashboard for a sample app’s custom metrics.
- Use `kubectl get events -A` to monitor cluster-wide events during deployments.
- Automate the MongoDB user creation job with a ConfigMap and init container.