Summary
Overview
This course session is the first day of a three-day training on building cloud-native applications using Spring Boot, Docker, and Kubernetes. The session begins with introductions of the instructor and participants, followed by foundational concepts of microservices, containerization, and Kubernetes orchestration. The core objective is to transition from monolithic Spring Boot applications to cloud-native architectures using containers and Kubernetes. The session includes hands-on labs using Docker and Minikube to deploy, manage, and observe containerized applications, covering key topics such as image building, pod lifecycle, service discovery, ConfigMaps, secrets, scaling, and the 12-factor app methodology. The session concludes with an introduction to Kubernetes primitives and an initial deployment of a sample microservice architecture.
Topic (Timeline)
1. Introduction and Course Overview [00:08:11 - 00:16:16]
- Instructor Andrés Mujica introduces himself and the training goals: modernizing Spring Boot applications into cloud-native microservices using Docker and Kubernetes.
- Participants Jonathan Rodríguez and Vladimir introduce themselves, sharing their backgrounds as developers with 7+ years at the bank, transitioning from frontend roles to technical leadership.
- Training schedule outlined: 9 AM–4 PM with two coffee breaks, certification upon completion.
- Primary objective: Convert the Spring Pet Clinic monolith into a cloud-native microservice architecture using Kubernetes, with eventual integration of Istio for service mesh capabilities.
- Emphasis on collaborative learning, real-world experience sharing, and hands-on labs.
2. Environment Setup and Platform Access [00:16:16 - 00:25:24]
- Participants are guided to connect to a pre-configured cloud-based development environment via a provided link.
- Authentication using username “student” and password “student” is required to access the terminal and VS Code interface.
- Environment structure explained: Lab directories (02, 03), README files for guidance, and a materials folder containing slides and supplementary resources (12-Factor App manifesto, microservices eBooks).
- Nearpod interactive session initiated for real-time polling; participants join as guests with their names.
- Terminal features highlighted: Horizontal/vertical split for monitoring logs and debugging during labs.
- Spring Pet Clinic application introduced as the target for the final lab (Day 3).
3. Core Concepts: Spring Boot Configuration and Microservices [00:25:24 - 00:46:07]
- Interactive quiz on Spring Boot configuration: Correct answer — uses annotations and external properties (application.properties/yaml) for auto-configuration, following “convention over configuration.”
- Explanation of Spring Boot auto-configuration: Dependencies in POM trigger automatic bean creation (e.g., Spring Boot Starter Web → embedded Tomcat).
- Contrast with traditional Spring Framework (XML-based config) and Java EE.
- Microservices rationale: Why split a feature into its own service? Answer — autonomy: independent development, deployment, scaling, and evolution aligned with bounded contexts from Domain-Driven Design.
- Microservice characteristics: Single responsibility, loose coupling via APIs (not shared databases), polyglot technology support (e.g., Python for AI), CI/CD enablement.
- Service communication: Asynchronous, API-based, with fault tolerance (e.g., circuit breakers, retries).
- Emphasis on business-aligned boundaries (e.g., Pet Clinic: owners, vets, visits as separate services).
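To make the "convention over configuration" and externalized-properties points above concrete, here is a minimal application.yaml sketch. The property keys are standard Spring Boot keys, but the values, port, and service name are illustrative assumptions, not taken from the course materials.

```yaml
# application.yaml: minimal sketch of Spring Boot's externalized configuration.
# Values below are placeholders for illustration only.
server:
  port: 8081                      # overrides the embedded Tomcat default (8080)

spring:
  application:
    name: owners-service          # hypothetical microservice name
  datasource:
    url: jdbc:mysql://localhost:3306/petclinic
    username: petclinic
    password: ${DB_PASSWORD}      # resolved from the environment (12-factor config)
```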
4. Spring Cloud and Service Discovery [00:46:07 - 00:53:14]
- Quiz on Spring Cloud components: Correct answer — Netflix Eureka for service discovery.
- Explanation of service discovery: In distributed systems, services must dynamically locate each other (e.g., frontend finds backend services).
- Spring Cloud Eureka acts as a service registry; services register and query it for endpoints.
- Contrast with Kubernetes: DNS-based service discovery replaces Eureka; services are reached via stable DNS names (e.g., `service-name.namespace.svc.cluster.local`); see the sketch below.
- Other Spring Cloud components: Config Server (centralized config), Gateway (API routing), Resilience4J (circuit breakers, retries).
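As a rough illustration of the contrast above, a Spring client running in Kubernetes can point directly at a service's stable DNS name instead of querying Eureka. The `backend.url` key below is a hypothetical application-specific property; the Eureka settings use the standard Spring Cloud Netflix keys but with assumed values.

```yaml
# Kubernetes: DNS-based discovery, no registry client needed.
# "backend.url" is a made-up app property; the DNS form
# <service>.<namespace>.svc.cluster.local is standard Kubernetes behavior.
backend:
  url: http://backend-service.ostoy.svc.cluster.local:8080

# Spring Cloud Eureka equivalent (illustrative): the client registers with and
# queries the registry instead of relying on DNS.
eureka:
  client:
    service-url:
      defaultZone: http://eureka-server:8761/eureka/
```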
5. Containerization with Docker [00:53:14 - 01:20:36]
- Purpose of Dockerfile: Define build steps to create an immutable container image with app, dependencies, and runtime.
- Docker image layers: Base OS (e.g., OpenJDK), app JAR, and execution command; layers are cached for efficiency.
- Buildpacks introduced: Spring Boot’s `build-image` goal (e.g., `mvn spring-boot:build-image`) automates image creation without a Dockerfile, using Paketo Buildpacks.
- OCI (Open Container Initiative) standard: Docker is one tool; OCI defines the runtime and image standards used by Podman and Kubernetes.
- Docker vs. Podman: Podman runs without a daemon, more secure; Kubernetes uses OCI runtimes (e.g., containerd), not Docker.
- Container benefits: Smaller attack surface, reduced resource usage (no full guest OS), portability, and immutability (restarting a container from its image discards runtime changes, including compromises).
- Lab: Run the `hello-world` and `ubuntu` images; inspect layers with `docker history`.
- Create a custom image: Dockerfile with `FROM ubuntu`, `RUN apt update && apt install figlet`, and `CMD ["figlet", "Hola Chile"]` (assembled in full after this list).
- Image tagging: `docker tag fishlet:latest andresmujica/fishlet:latest` prepares the image for pushing to Docker Hub.
- Push to Docker Hub: Log in, push, then pull and run from another location to demonstrate portability.
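The custom image from the lab, assembled into a single Dockerfile for readability. The instructions come from the bullets above; the `-y` flag is added so the non-interactive build does not stall, and the local image name in the build command is inferred from the tagging example, so treat it as an assumption.

```dockerfile
# Dockerfile for the lab's figlet demo image, assembled from the steps above.
FROM ubuntu
RUN apt update && apt install -y figlet   # -y added so the build runs unattended
CMD ["figlet", "Hola Chile"]

# Build, tag, and push (image name "fishlet" inferred from the tagging example):
#   docker build -t fishlet:latest .
#   docker tag fishlet:latest andresmujica/fishlet:latest
#   docker push andresmujica/fishlet:latest
```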
6. Kubernetes Fundamentals and Architecture [01:20:36 - 01:56:34]
- Why Kubernetes? Manual Docker orchestration doesn’t scale; Kubernetes automates deployment, scaling, healing, networking.
- Kubernetes architecture: API Server (central control), etcd (distributed config store), Scheduler, Controller Manager, Worker Nodes (with kubelet, container runtime, kube-proxy).
- Core resources: Pod (smallest deployable unit, contains 1+ containers), Deployment (desired state), Service (stable network endpoint), ConfigMap (non-secret config), Secret (base64-encoded secrets), ReplicaSet (ensures pod count).
- Pod patterns: Sidecar (e.g., log forwarder), Ambassador (e.g., cache proxy), Init Container (e.g., data sync before app start).
- Service discovery: Services use labels to select pods; pod IPs are ephemeral, while the service name stays stable (e.g., `backend-service`); see the Deployment/Service sketch after this list.
- Exposing services: ClusterIP (internal), NodePort (external via node IP:port), LoadBalancer (cloud provider), Ingress (HTTP routing by hostname).
- Self-healing: Kubernetes restarts failed pods automatically.
- Stateful vs. Stateless: Stateless services (e.g., frontend) can be replaced; stateful (e.g., DB) need persistent storage.
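As referenced above, a minimal Deployment plus ClusterIP Service sketch shows how labels tie the two together. The image name, port, and replica count are assumptions for illustration, not the course's actual manifests.

```yaml
# Minimal Deployment + Service sketch: the Service selects pods by label,
# so clients use the stable name "backend-service" while pod IPs change.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2                      # desired state; the ReplicaSet keeps 2 pods running
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend               # label the Service selector matches
    spec:
      containers:
        - name: backend
          image: example/backend:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service            # stable DNS name inside the cluster
spec:
  type: ClusterIP                  # internal-only access
  selector:
    app: backend                   # routes traffic to pods with this label
  ports:
    - port: 8080
      targetPort: 8080
```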
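The pod patterns listed above (init container, sidecar) can be sketched in one pod spec. The container images and the shared-volume layout are illustrative assumptions.

```yaml
# Pod sketch combining an init container and a sidecar; images are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: fetch-data                  # runs to completion before the app starts
      image: busybox
      command: ["sh", "-c", "echo seed > /data/seed.txt"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  containers:
    - name: app                         # main application container
      image: example/app:1.0
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: log-forwarder               # sidecar: ships the app's logs elsewhere
      image: example/log-forwarder:1.0
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}
```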
7. Hands-On: Deploying on Minikube [01:56:34 - 07:37:42]
- Deploy the Minikube cluster: `minikube start`; `minikube dashboard` opens the UI.
- Create a namespace: `kubectl create ns ostoy`, then switch context with `kubens ostoy`.
- Deploy the backend and frontend using YAML manifests:
  - Backend: Deployment + Service (ClusterIP).
  - Frontend: Deployment, Service (NodePort), ConfigMap, Secret, PVC (Persistent Volume Claim).
- Key adjustment: Comment out `storageClass` in the frontend manifest to avoid cluster incompatibility.
- Validate the deployment: `kubectl get all`, `kubectl get pods -w`, `kubectl get events -w`.
- Access the app: Use `kubectl get nodes -o wide` to get the Minikube IP and `kubectl get services` to get the NodePort (e.g., 30888).
- Interact with pods: `kubectl logs frontend-...`, `kubectl exec -it frontend-... -- sh`.
- Persistent storage test: Create a file in `/var/demo/files` via the shell → the file persists after a pod restart (see the PVC sketch after this list).
- ConfigMap & Secret updates: Edit via the dashboard or CLI → changes are reflected in the running pod when mounted as volumes, but environment variables require a restart (see the ConfigMap/Secret sketch after this list).
- Scaling: `kubectl scale deployment frontend --replicas=0`, then `--replicas=1`.
- Horizontal Pod Autoscaler (HPA): Configure to scale the backend when CPU usage exceeds 50% of the requested CPU (50m); see the HPA sketch after this list.
- Test HPA: Run the load script → pods auto-scale up; when load drops, pods scale back down automatically.
- Emphasis: Kubernetes reconciles desired state → any manual pod change is reverted.
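The persistent-storage test can be sketched as a PersistentVolumeClaim mounted at the path used in the lab. The size and access mode are assumptions, and `storageClassName` is omitted to mirror the lab's adjustment.

```yaml
# PVC sketch matching the lab's persistence test; size and access mode are assumed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: frontend-pvc
spec:
  # storageClassName is deliberately omitted so the cluster's default is used,
  # mirroring the "comment out storageClass" adjustment from the lab.
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend-storage-demo
spec:
  containers:
    - name: frontend
      image: example/frontend:1.0      # placeholder image
      volumeMounts:
        - name: demo-files
          mountPath: /var/demo/files   # files here survive pod restarts
  volumes:
    - name: demo-files
      persistentVolumeClaim:
        claimName: frontend-pvc
```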
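To illustrate the ConfigMap and Secret behavior observed in the lab (environment variables frozen at startup versus volume mounts refreshed in place), here is a rough sketch; the names and keys are assumptions, not the actual ostoy manifests.

```yaml
# ConfigMap and Secret consumed two ways; key names are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
data:
  GREETING: "Hola Chile"
---
apiVersion: v1
kind: Secret
metadata:
  name: frontend-secret
type: Opaque
stringData:                            # plain text here; stored base64-encoded by the API server
  API_KEY: "not-really-a-secret"
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend-demo
spec:
  containers:
    - name: frontend
      image: example/frontend:1.0      # placeholder image
      envFrom:
        - configMapRef:
            name: frontend-config      # injected at start; a restart is needed to pick up edits
      volumeMounts:
        - name: secret-files
          mountPath: /etc/secret       # mounted files are refreshed without a restart
  volumes:
    - name: secret-files
      secret:
        secretName: frontend-secret
```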
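The autoscaling step can be written as a one-line kubectl command or as an HPA manifest. The deployment name and the 50% CPU target follow the lab description; the min/max replica counts are assumptions.

```yaml
# HorizontalPodAutoscaler sketch: scale the backend when average CPU usage
# exceeds 50% of the CPU each pod requests (50m in the lab).
# Equivalent CLI (assumed form):
#   kubectl autoscale deployment backend --cpu-percent=50 --min=1 --max=5
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 1                    # assumed lower bound
  maxReplicas: 5                    # assumed upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50    # percent of the pod's requested CPU
```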
Appendix
Key Principles
- 12-Factor App Methodology: The entire lab setup (ConfigMaps, Secrets, ephemeral storage, scaling, service discovery) implements 12-factor principles: config in environment, stateless processes, logs as event streams, one codebase with multiple deploys, etc.
- Cloud-Native Stack: Microservices (business logic) + Containers (packaging) + Kubernetes (orchestration) = Cloud-Native.
- Platform vs. Application Responsibility: Kubernetes handles resilience, scaling, discovery, and config; developers focus on business logic, not infrastructure plumbing.
Tools Used
- Docker / Podman: Container runtime and image building.
- Minikube: Local Kubernetes cluster for development.
- kubectl: CLI for interacting with Kubernetes API.
- VS Code + Container Tools Extension: For editing Dockerfiles and YAML manifests.
- Docker Hub: Public registry for image storage (used for demo; private registries recommended in production).
- Kubernetes Dashboard: Web UI for visualizing cluster state.
Common Pitfalls
- Secrets are not encrypted: Base64 encoding ≠ encryption; use HashiCorp Vault for sensitive data.
- ConfigMap/Secret reload: Environment variables are injected at pod start, so changes require a pod restart. ConfigMaps/Secrets mounted as volumes are refreshed in the running pod (after a short sync delay), provided the application re-reads the files.
- StorageClass mismatch: Not all clusters have default storage classes; always verify or comment out if unsupported.
- Docker in Kubernetes: Kubernetes does not use Docker; it uses OCI-compliant runtimes (containerd, CRI-O).
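A quick illustration of the first pitfall above: base64 is an encoding, not encryption, so anyone who can read the Secret can decode it. The values below are throwaway examples.

```yaml
# A Secret's "data" values are only base64-encoded; the value below decodes
# trivially with `base64 -d`, which is why a vault is recommended for real credentials.
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
type: Opaque
data:
  password: c3VwZXJzZWNyZXQ=       # base64 of "supersecret"; not encrypted
```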
Practice Suggestions
- Rebuild the fishlet image with different entrypoints and commands to observe behavior differences.
- Modify ConfigMap/Secret values and observe how the app responds (with and without restart).
- Scale the backend deployment to 10 pods and monitor with `kubectl top pods`.
- Use `kubectl explain <resource>` to explore the API structure (e.g., `kubectl explain deployment.spec`).
- Practice deleting pods manually and observe Kubernetes self-healing behavior.
- Explore the Minikube dashboard to map UI elements to API resources (e.g., “Deployments” → `apps/v1/deployments`).