9 videos 📅 2024-12-16 09:00:00 Asia/Brunei

  • 2:25:36 — 2024-12-16 09:21:57
  • 2:03:59 — 2024-12-16 11:06:33
  • 1:46:52 — 2024-12-16 13:50:28
  • 1:56:02 — 2024-12-17 09:35:43
  • 1:52:29 — 2024-12-17 09:39:32
  • 1:51:17 — 2024-12-17 13:36:38
  • 1:35:13 — 2024-12-18 08:59:20
  • 23:23 — 2024-12-18 11:50:59
  • 1:47:49 — 2024-12-18 13:41:04

Course recordings on the DaDesktop training platform

Visit the NobleProg websites for related courses

Summary

Overview

This course session provides a comprehensive hands-on introduction to Docker and Kubernetes, covering container creation, image management, Docker Compose for multi-container applications, networking (bridge, user-defined, host, and none), and the fundamentals of container orchestration with Kubernetes. The trainer guides learners through practical steps for building, tagging, and pushing Docker images to Docker Hub, configuring multi-service applications with Docker Compose, managing persistent storage via volumes, and understanding network isolation and communication. The session concludes with an in-depth overview of Kubernetes architecture, including control plane components (API server, scheduler, ETCD), worker node agents (kubelet, kube-proxy), and core concepts like pods, self-healing, service discovery, and auto-scaling. The goal is to equip learners with foundational skills to deploy, manage, and scale containerized applications in enterprise environments.

Topic (Timeline)

1. Docker Image Tagging and Pushing to Docker Hub [00:00:05.920 - 00:02:36.460]

  • Demonstrated the process of tagging a local Docker image using docker tag <image> <dockerhub-username>/<repo-name>:<tag> (e.g., latest).
  • Showed how to push the tagged image to Docker Hub using docker push <dockerhub-username>/<repo-name>:<tag>, noting that only the target repository reference (not the original local image name) is passed to docker push.
  • Confirmed successful push by refreshing the Docker Hub repository to verify the new image appears.
  • Highlighted that once pushed, the image becomes accessible to others in the organization for reuse, enabling standardized deployment workflows.
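The tag-and-push workflow above can be sketched as follows; `alice` and `myapp` are hypothetical placeholders for your Docker Hub username and repository name:

```shell
# Tag the local image "myapp:latest" for a Docker Hub repository.
# "alice" and "myapp" are hypothetical placeholder names.
docker tag myapp:latest alice/myapp:latest

# Log in once, then push the tagged image to Docker Hub.
docker login
docker push alice/myapp:latest

# Colleagues can now pull the same image for standardized deployments:
docker pull alice/myapp:latest
```

Note that docker push takes the full repository reference (alice/myapp:latest), which is how Docker knows which registry account to push to.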

2. Docker Compose: Multi-Container Application Setup [00:02:39.080 - 00:14:07.870]

  • Introduced Docker Compose as a tool to define and run multi-container applications using a single YAML file.
  • Used WordPress + MySQL as a practical example, where both services must run together with persistent data.
  • Walked through installing Docker Compose via direct download, setting execute permissions, and verifying the version with docker compose version.
  • Created a wordpress directory and a docker-compose.yml file defining:
    • services: db (MySQL) and wordpress
    • volumes: mapping host directory (e.g., ./db_data) to container path for persistent data
    • environment: database credentials (e.g., MYSQL_ROOT_PASSWORD, WORDPRESS_DB_PASSWORD)
    • depends_on: ensuring MySQL starts before WordPress
    • ports: mapping host port 8000 to container port 80
  • Ran the stack with docker compose up -d, which pulled images, created containers, and started services.
  • Verified deployment using docker images and docker ps, confirming containers are running.
  • Accessed the WordPress frontend via http://<public-ip>:8000 or http://localhost:8000.
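A minimal docker-compose.yml matching the stack described above might look like this; all credentials are placeholder values for illustration:

```shell
# Write a minimal WordPress + MySQL compose file.
# All passwords below are placeholder values, not real credentials.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mysql:8.0
    volumes:
      - ./db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: examplepass
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: wppass
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: wppass
      WORDPRESS_DB_NAME: wordpress
EOF

# Start the stack in the background (pulls images on first run):
# docker compose up -d
```

The wordpress service reaches MySQL by the service name db, because Compose provides DNS resolution between services on the same project network.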

3. Docker Compose Management and Volumes [00:14:07.870 - 00:20:09.740]

  • Demonstrated Docker Compose commands: docker compose images, docker compose ps, and docker compose logs to inspect running services.
  • Used docker compose down to stop and remove containers without deleting volumes.
  • Explored volume persistence: data stored in ./db_data on the host remained intact after down.
  • Showed how to remove containers and named volumes together using docker compose down --volumes, which deletes volume data stored under /var/lib/docker/volumes/ (bind-mounted host directories such as ./db_data are not removed).
  • Clarified that volumes are managed by Docker and are platform-agnostic (Linux/Windows), unlike bind mounts.
  • Emphasized that volumes are the preferred method for persistent data in containerized applications.
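The inspection and teardown commands above, run from the directory containing docker-compose.yml, follow this pattern (requires a running Docker daemon):

```shell
# Inspect the running stack:
docker compose ps        # container status per service
docker compose images    # images used by each service
docker compose logs db   # logs for a single service (here, the db service)

# Stop and remove containers and networks; named volumes survive:
docker compose down

# Stop and remove everything, including named volumes:
docker compose down --volumes
```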

4. Docker Networking: Bridge, User-Defined, Host, and None [00:20:13.440 - 00:53:45.110]

  • Reviewed default Docker networks: bridge, host, and none using docker network ls and docker network inspect bridge.
  • Explained that containers on the default bridge network receive IPs from 172.17.0.0/16 and can reach each other by IP, with outbound traffic leaving through the bridge gateway.
  • Demonstrated inter-container communication using docker exec to enter a container and ping to test connectivity.
  • Created a user-defined bridge network with custom subnet and gateway: docker network create --subnet=192.168.0.0/16 --gateway=192.168.0.1 network-one.
  • Launched containers on the custom network and confirmed IP assignment via docker inspect.
  • Explained docker network connect to allow a container to join multiple networks (e.g., connecting a container from network-one to network-two).
  • Described host network mode: containers share the host’s network stack, bypassing Docker’s network isolation.
  • Introduced none network: creates a completely isolated container with no network interface, used for security-sensitive workloads.
  • Confirmed isolation by running a container with --network none and verifying no IP or gateway in docker inspect.
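The networking steps above can be sketched as the following command sequence; container names like web-one are hypothetical examples (requires a running Docker daemon):

```shell
# Create a user-defined bridge network with an explicit subnet and gateway:
docker network create --subnet=192.168.0.0/16 --gateway=192.168.0.1 network-one

# Run a container attached to it and print its assigned IP:
docker run -d --name web-one --network network-one nginx
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web-one

# Attach the same container to a second network as well:
docker network create network-two
docker network connect network-two web-one

# Fully isolated container: no IP, no gateway, no network interface:
docker run -d --name isolated --network none nginx
docker inspect -f '{{.NetworkSettings.Networks}}' isolated
```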

5. Container Orchestration: Introduction to Docker Swarm and Kubernetes [00:53:49.070 - 01:30:37.950]

  • Defined container orchestration as managing multiple Docker hosts (nodes) via a central manager to ensure high availability and scalability.
  • Contrasted single-host Docker with multi-node orchestration: master/manager node controls worker nodes running application containers.
  • Introduced Docker Swarm as Docker’s native orchestration tool, activated via docker swarm init, which generates a join token for worker nodes.
  • Noted Docker Swarm’s limited features and recommended Kubernetes for production use.
  • Explained Kubernetes architecture components:
    • Control Plane: API server, scheduler, controller manager, ETCD (key-value store for cluster state).
    • Worker Nodes: kubelet (agent that creates pods), kube-proxy (network rules), container runtime (e.g., Docker, containerd).
  • Discussed the need for Layer 3 overlay networks (e.g., Flannel, Calico) to enable cross-node pod communication, as default bridge networks are local-only.
  • Introduced the concept of static service IPs via Kubernetes Services, enabling stable internal/external access despite dynamic pod IPs.
  • Explained labels and node affinity to control where workloads are scheduled (e.g., assigning DB pods to SSD nodes).
  • Described self-healing: if a pod fails, the controller manager automatically recreates it.

6. Kubernetes Deep Dive: Architecture, Components, and Workflow [01:30:37.950 - 01:42:07.810]

  • Defined a Pod as the smallest deployable unit in Kubernetes, which can contain one or more tightly coupled containers sharing network and storage.
  • Emphasized that Kubernetes does not manage containers directly—it manages Pods.
  • Explained key Kubernetes features:
    • Rollouts & Rollbacks: Gradual deployment of new versions with automatic fallback on failure.
    • Auto-scaling: Horizontal scaling of Pods based on CPU/memory usage or custom metrics.
    • Service Discovery & Load Balancing: Services provide stable IPs and distribute traffic across Pods.
    • Self-Healing: Automatic restart of failed containers or Pods.
  • Described the Kubernetes control flow:
    1. User runs kubectl run → request sent to API server.
    2. API server updates ETCD and notifies scheduler.
    3. Scheduler selects a worker node based on resource/label constraints.
    4. API server instructs kubelet on the node to create the Pod.
    5. Kubelet pulls image and starts containers.
  • Highlighted that kube-proxy manages network rules (iptables/IPVS) and that CNI plugins (e.g., Flannel) provide overlay networking.
  • Noted that ETCD can be part of the control plane or external for high availability.
  • Stressed that in production, multiple master nodes (3+) are used for redundancy, while worker nodes run application workloads.
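The five-step control flow above can be observed end to end with kubectl; this sketch assumes a running cluster and uses a hypothetical pod name:

```shell
# One imperative command triggers the whole flow (API server -> ETCD ->
# scheduler -> kubelet -> container runtime):
kubectl run nginx-pod --image=nginx

# Watch the scheduler assign the pod to a worker node:
kubectl get pods -o wide

# Inspect the ordered events: scheduling, image pull, container start:
kubectl describe pod nginx-pod
```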

7. Kubernetes Advantages, Use Cases, and Next Steps [01:42:07.810 - 01:44:39.170]

  • Listed reasons for Kubernetes adoption:
    • Infrastructure abstraction and portability.
    • Immutable deployments and reproducible configurations via YAML.
    • Integration with CI/CD tools (ArgoCD), monitoring (Prometheus, Grafana), and package managers (Helm).
    • Scalability for high-traffic scenarios (e.g., live sports events, news sites).
  • Emphasized that Kubernetes enables declarative infrastructure: desired state defined in code (YAML), not imperative commands.
  • Noted that enterprises (e.g., Booking.com, Tinder) use Kubernetes for its stability, scalability, and compliance.
  • Previewed next session: hands-on Kubernetes setup using 3 VMs (1 master, 2 workers), kubectl usage, namespaces, deployments, and storage.
  • Advised learners to practice Linux basics, especially vi/vim editor commands (insert, save, exit) for editing YAML files in upcoming labs.

Appendix

Key Principles

  • Containers are ephemeral: Data must be persisted using volumes or external storage.
  • Docker Compose is ideal for local development and simple multi-service apps.
  • Kubernetes is required for production-scale, resilient, and auto-scaling deployments.
  • YAML is the standard for declarative configuration in both Docker Compose and Kubernetes.
  • Network isolation is critical: use none for security, user-defined networks for controlled communication, and overlay networks for cross-node pod communication.

Tools Used

  • Docker CLI: docker build, docker tag, docker push, docker run, docker ps, docker images
  • Docker Compose: docker compose up, docker compose down, docker compose ps, docker compose logs
  • Docker Networking: docker network ls, docker network inspect, docker network create, docker network connect
  • Kubernetes (Conceptual): kubectl, ETCD, kubelet, kube-proxy, API server, scheduler, CNI plugins (Flannel, Calico)

Common Pitfalls

  • Forgetting to set execute permissions on downloaded docker-compose binary.
  • Using localhost instead of public IP to access services from external clients.
  • Misconfiguring YAML indentation, which causes docker compose up or kubectl apply to fail.
  • Assuming default bridge networks allow cross-host communication (they do not).
  • Running docker compose down without --volumes and losing persistent data.
  • Not using labels or node affinity to control pod placement in multi-node environments.

Practice Suggestions

  • Rebuild and push a custom Docker image to Docker Hub.
  • Create a Docker Compose file for a 3-service app (e.g., Nginx + Node.js + Redis).
  • Experiment with docker network connect to link containers across two user-defined networks.
  • Use docker inspect to explore network and volume configurations.
  • Write a simple Kubernetes Pod YAML file to deploy an Nginx container.
  • Practice vi commands: i (insert), Esc, :wq (save and quit), :q! (quit without saving).
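For the Pod YAML practice item above, a minimal manifest might look like this (names and labels are illustrative):

```shell
# Write a minimal Pod manifest for the Nginx practice exercise.
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
EOF

# Deploy it once a cluster is available:
# kubectl apply -f nginx-pod.yaml
# kubectl get pods
```

Editing this file with vi is good practice for the upcoming labs: i to insert, Esc to leave insert mode, :wq to save and quit.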