WEBVTT

00:00:00.000 --> 00:00:02.000
Should come up here shortly.

00:00:03.660 --> 00:00:05.560
Okay, so namespaces.

00:00:06.120 --> 00:00:11.260
So lesson two, we're going to learn to isolate resources effectively using namespaces, taints, and tolerations.

00:00:12.380 --> 00:00:16.700
K8s uses namespaces to isolate resources within an individual cluster.

00:00:17.600 --> 00:00:22.460
The names of resources need to be unique within a namespace, but not across namespaces.

00:00:23.740 --> 00:00:29.980
Although the cluster may contain a default namespace, it is generally good practice to use a unique namespace.

00:00:30.000 --> 00:00:35.600
for application workloads that is neither the default nor the kube-system namespace.

00:00:35.600 --> 00:00:42.080
The kube-system namespace is generally used for control plane components. For example, when deploying

00:00:42.080 --> 00:00:47.440
a logging service, it is good practice to create a new namespace for it. That's correct. So,

00:00:47.440 --> 00:00:53.360
or if you have multiple databases that are deployed with an operator. So you might have an operator. In

00:00:53.360 --> 00:00:59.840
this case, we have an operator called MinIO, which is an open-source S3-compatible object store, and then

00:00:59.840 --> 00:01:07.360
it spawns two tenants. One is called Loki MinIO, the other is GitLab MinIO. And in this case,

00:01:08.880 --> 00:01:14.960
it's considered good practice to have three separate namespaces for that. So one for each tenant

00:01:14.960 --> 00:01:22.640
database and one for the operator. And that can really save you time on troubleshooting. It's going

00:01:22.640 --> 00:01:29.680
to enable separation between your applications. Okay, namespace

00:01:29.840 --> 00:01:38.520
constraints. So K8s allows you to define default memory and CPU requests or

00:01:38.520 --> 00:01:43.700
limits for a namespace. You can define a range of minimum and maximum memory and

00:01:43.700 --> 00:01:49.400
CPU constraints. This can be useful when using a Helm chart that lacks a

00:01:49.400 --> 00:01:53.600
resources section in the values.yaml file. And there are a lot of them out

00:01:53.600 --> 00:01:58.800
there that still lack that, because the upstream developers don't have Helm

00:01:58.800 --> 00:01:59.760
expertise.

00:01:59.840 --> 00:02:06.880
So they publish a Helm chart, but it lacks a lot of what we expect to see

00:02:06.880 --> 00:02:12.560
in a values.yaml file. You can define an overall memory and CPU quota. You can use this to assign

00:02:12.560 --> 00:02:20.560
a maximum amount of resources for a namespace. This could be useful to define a single

00:02:20.560 --> 00:02:28.560
pod limit in the default namespace of a development cluster. This would trigger an error when DevOps

00:02:28.560 --> 00:02:35.560
personnel try to deploy a resource in the default namespace instead of creating a unique namespace.
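
A hedged sketch of these namespace constraints (names and values here are illustrative assumptions, not the course files): a LimitRange sets default and min/max requests and limits, and a ResourceQuota can cap a namespace, including the single-pod cap on default described above.

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: default-limits          # hypothetical name
      namespace: app-a
    spec:
      limits:
      - type: Container
        defaultRequest:             # applied when a chart omits requests
          cpu: 250m
          memory: 128Mi
        default:                    # applied when a chart omits limits
          cpu: 500m
          memory: 256Mi
        min:
          cpu: 100m
          memory: 64Mi
        max:
          cpu: "1"
          memory: 512Mi
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: block-default           # hypothetical name
      namespace: default
    spec:
      hard:
        pods: "1"                   # a pod cap like the one described, to discourage using default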

00:02:35.560 --> 00:02:37.560
Namespaces can be labeled just like nodes.

00:02:37.560 --> 00:02:42.560
So just like how we labeled our nodes a little while ago, we can label namespaces.

00:02:42.560 --> 00:02:49.560
And that's useful for features such as the new Gateway API to enable shared access.

00:02:49.560 --> 00:02:54.560
Additional feature sets in K8s may require labels to work across namespaces.

00:02:54.560 --> 00:02:57.560
And you can allow one namespace to work with a service from another namespace.
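
A hedged illustration of the Gateway API point (the label key and names are assumptions, not from the course): a namespace labeled with, say, kubectl label namespace app-a shared-gateway-access=true can be admitted by a Gateway whose listener selects namespaces by that label.

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: shared-gateway              # hypothetical gateway
    spec:
      gatewayClassName: example         # hypothetical class
      listeners:
      - name: http
        port: 80
        protocol: HTTP
        allowedRoutes:
          namespaces:
            from: Selector              # admit routes only from namespaces matching the selector
            selector:
              matchLabels:
                shared-gateway-access: "true"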

00:02:57.560 --> 00:03:03.160
And then namespaces can be deleted with a simple kubectl command.

00:03:04.360 --> 00:03:12.520
But be careful using this in production, as it will delete all resources related to that

00:03:12.520 --> 00:03:17.880
namespace. And the ability to delete a namespace can be useful when testing a cluster.

00:03:18.280 --> 00:03:24.280
Sometimes a faulty pod is stuck, and it will take several moments to spin down, or may not

00:03:24.280 --> 00:03:27.400
spin down at all. If the namespace isn't being used for other

00:03:27.400 --> 00:03:32.200
resources, it can just be deleted and recreated, saving you valuable time. Right, we have a

00:03:32.200 --> 00:03:39.320
practical application. Let's get all the namespaces in the cluster. And we want to go out of that

00:03:39.320 --> 00:03:47.560
folder, so just cd. You want to get back to our root. Yeah, there you go. ls, see what's in there.

00:03:47.560 --> 00:03:51.880
All right, so we have our namespaces default,

00:03:51.880 --> 00:03:54.600
kube-node-lease, kube-public, and kube-system.

00:03:54.600 --> 00:03:58.760
Let's describe the namespace default.

00:03:58.760 --> 00:04:02.280
So this has no constraints on it.

00:04:02.280 --> 00:04:10.360
And so it has no labels as far as using it with Gateway API

00:04:10.360 --> 00:04:11.320
for shared access.

00:04:11.320 --> 00:04:16.600
All right, so now we're going to create a namespace

00:04:16.600 --> 00:04:25.240
called app-a and query it. And you can see that it's there. We can describe it and note the

00:04:25.240 --> 00:04:31.640
little metadata for the name, name equals that. Okay, let's label the namespace we just created with a

00:04:31.640 --> 00:04:38.440
development label and check and see if it exists. There you go. And now we're going to delete

00:04:38.440 --> 00:04:43.960
the namespace and verify that it's been deleted. These VMs sometimes have a slight
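
The commands narrated in this exercise, roughly (the label key and value are an assumption based on the "development" label mentioned):

    kubectl get namespaces                          # list all namespaces
    kubectl describe namespace default              # no constraints, no labels of interest
    kubectl create namespace app-a
    kubectl describe namespace app-a                # note the kubernetes.io/metadata.name label
    kubectl label namespace app-a env=development   # assumed key/value
    kubectl get namespace app-a --show-labels       # verify the label exists
    kubectl delete namespace app-a
    kubectl get namespaces                          # verify it is gone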

00:04:43.960 --> 00:04:52.230
delay in them. Okay, so now we're going to create a pod deployment in the app-a namespace and apply

00:04:52.230 --> 00:05:02.070
it. So, pod-deployment.yaml. Let's save it and then we'll

00:05:02.070 --> 00:05:10.630
apply it. Yeah, so it's apply -f and then it's the same name, .yaml, that you just

00:05:10.630 --> 00:05:12.630
used vim to edit.

00:05:14.630 --> 00:05:16.630
There you go.

00:05:16.630 --> 00:05:19.630
Namespace app-a is not found.

00:05:19.630 --> 00:05:21.630
So now we're going to create namespace app-a.

00:05:21.630 --> 00:05:25.630
We're going to delete the pod before we try to apply it again.

00:05:25.630 --> 00:05:29.630
Well, actually, no, we don't need to delete it because

00:05:29.630 --> 00:05:31.630
it didn't apply.

00:05:31.630 --> 00:05:33.630
So just, you know, reapply it.

00:05:33.630 --> 00:05:35.630
And then check the pod.
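
A minimal sketch of what this pod-deployment.yaml might contain (the name and image are assumptions; the metadata.namespace field is what produced the "namespace app-a is not found" error above, until the namespace was created):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-deployment        # assumed name, matching the file name
      namespace: app-a            # apply fails until this namespace exists
    spec:
      containers:
      - name: nginx               # assumed container
        image: nginx:latest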

00:05:35.630 --> 00:05:38.630
Okay, now here comes the fun part.

00:05:38.630 --> 00:05:40.630
kubectl delete namespace app-a.

00:05:40.630 --> 00:05:43.630
And just a little pause.

00:05:43.630 --> 00:05:45.630
And I'm sure you can guess why that

00:05:45.630 --> 00:05:46.630
pause is taking place.

00:05:46.630 --> 00:05:47.630
It's gone.

00:05:47.630 --> 00:05:48.630
Let's check the namespace.

00:05:48.630 --> 00:05:50.630
And that is gone as well.

00:05:50.630 --> 00:05:53.630
So you can see it deletes everything within that namespace.

00:05:53.630 --> 00:05:55.630
It's all gone.

00:05:55.630 --> 00:05:56.630
Non-recoverable.

00:05:56.630 --> 00:06:01.630
But it's great for development and testing when you're using complex helm charts

00:06:01.630 --> 00:06:03.630
and you have pods that are stuck.

00:06:03.630 --> 00:06:06.630
And, you know, I've worked with complex helm charts.

00:06:06.630 --> 00:06:08.630
It can take five, six

00:06:08.630 --> 00:06:12.130
minutes to spin down after a mistake.

00:06:12.130 --> 00:06:14.510
And just delete the namespace, it's all gone.

00:06:16.550 --> 00:06:18.270
All right, so taints, what are taints?

00:06:18.270 --> 00:06:21.370
Yeah, no, once you delete it,

00:06:21.370 --> 00:06:24.970
you deleted the deployment and everything in that namespace.

00:06:24.970 --> 00:06:27.190
So your file still exists.

00:06:27.190 --> 00:06:29.310
So if you do LS, your file is still there,

00:06:29.310 --> 00:06:32.600
so you could recreate the pod.

00:06:32.600 --> 00:06:36.200
But the actual pod deployment itself

00:06:36.200 --> 00:06:38.160
was deleted from Kubernetes.

00:06:38.160 --> 00:07:08.140
All right. So let's go on. I added taints and tolerations to this section because it's part of isolating resources. It wasn't in the description, but it's very important. All right. So node selectors are a type of node affinity that may be used to schedule a pod on a specific node. They allow pods to be attracted to specific nodes. Taints are the opposite.

00:07:08.160 --> 00:07:11.760
They allow a node to repel a pod.

00:07:11.760 --> 00:07:15.220
Taints are applied directly to the node.

00:07:15.220 --> 00:07:17.500
Taints are viewable in the node resources.

00:07:17.500 --> 00:07:24.160
Tolerations allow a pod to be scheduled to a node with a matching taint.

00:07:24.160 --> 00:07:27.680
Tolerations are applied to pods.

00:07:27.680 --> 00:07:29.400
Tolerations allow scheduling.

00:07:29.400 --> 00:07:34.780
They don't guarantee scheduling if other conditions are not met.

00:07:34.780 --> 00:07:40.180
Tolerations work with taints to ensure pods are not scheduled onto inappropriate nodes. So we're going to

00:07:40.180 --> 00:07:46.860
deploy a pod to a tainted node. We're going to taint the control plane node. But before you do that, let's

00:07:46.860 --> 00:07:53.820
describe the control plane. And let's scroll up to where it says taints. So yeah, I think,

00:07:54.460 --> 00:08:01.020
there you go, a little bit more, I think, a little bit more. Right there.

00:08:01.020 --> 00:08:02.240
Oh, okay, there you go.

00:08:02.880 --> 00:08:04.420
Say, yeah, okay.

00:08:04.540 --> 00:08:07.220
So now we are going to go ahead.

00:08:07.460 --> 00:08:09.540
And we're going to taint that node.

00:08:10.500 --> 00:08:13.240
There we go, pod equals true, no schedule.

00:08:13.380 --> 00:08:17.840
So now it changed from none to taints, pod equals true, NoSchedule.
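
The taint steps, roughly as commands (assuming the single minikube node and the pod=true key/value narrated here):

    kubectl describe node minikube | grep Taints    # before: Taints: <none>
    kubectl taint node minikube pod=true:NoSchedule
    kubectl describe node minikube | grep Taints    # after: pod=true:NoSchedule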

00:08:19.680 --> 00:08:21.620
Okay, copy the previous pod deployment

00:08:21.620 --> 00:08:27.600
YAML, to make it easy on you, to pod-taint.yaml.

00:08:27.980 --> 00:08:29.500
We don't have to type the whole thing out.

00:08:29.500 --> 00:08:35.000
We're going to add a node selector to the pod-taint.yaml file and apply it.

00:08:35.260 --> 00:08:38.940
And we're going to use a label that is already on every control plane node.

00:08:39.920 --> 00:08:46.680
And it is node-role.kubernetes.io/control-plane: true. Yep. All control plane nodes.

00:08:46.680 --> 00:08:51.320
and we're going to apply it. Okay. So we don't need to delete it.

00:08:53.000 --> 00:08:59.480
So create the namespace and then apply the pod again. Okay, now kubectl get pods in that name

00:08:59.500 --> 00:09:04.940
space, and it says pending. Why is that? Let's look at it. Let's describe that pod. So we tainted the node,

00:09:05.400 --> 00:09:13.000
and now the node has an untolerated taint. So it has a taint. The pod needs a toleration

00:09:13.000 --> 00:09:19.100
to attach to the tainted node. So remember when we tainted the node, and now it has a taint,

00:09:19.100 --> 00:09:26.300
which allows it to repel pods, right? And so now we need to add a toleration to the spec. So,

00:09:26.300 --> 00:09:29.480
again, delete the pod. If you notice, there was no toleration in there.

00:09:29.500 --> 00:09:32.960
You can see in your YAML file under node selectors.

00:09:32.960 --> 00:09:36.140
You have your node selector laid out, your tolerations,

00:09:36.140 --> 00:09:39.600
but there's no toleration for what we added in.

00:09:39.600 --> 00:09:43.440
Okay, so now, copy the previous pod-taint.yaml

00:09:43.440 --> 00:09:45.040
to pod-tolerated.yaml.

00:09:45.040 --> 00:09:48.440
We're gonna add a toleration to the pod-tolerated

00:09:48.440 --> 00:09:49.440
.yaml file.

00:09:49.440 --> 00:09:53.380
Pull that back up and we'll take a look at the file here.

00:09:53.380 --> 00:09:59.280
You have a more recent version of the Kubernetes API

00:09:59.280 --> 00:10:04.280
by point two, than what I used here for minikube.

00:10:05.720 --> 00:10:07.460
Let's see if we can figure this out.

00:10:07.460 --> 00:10:10.660
That is pod-tolerated.

00:10:12.660 --> 00:10:14.160
Yeah, you can look at something else.

00:10:15.080 --> 00:10:17.400
I'm gonna try and see if I can figure out what,

00:10:17.400 --> 00:10:21.400
okay, let's try to run that again.

00:10:21.400 --> 00:10:23.960
And look at that, something.

00:10:23.960 --> 00:10:25.400
Huh, interesting.

00:10:25.400 --> 00:10:28.080
Yeah, I don't have that error on my version of minikube,

00:10:28.080 --> 00:10:29.080
so I don't know.

00:10:29.280 --> 00:10:35.280
what's going on, but you have a newer version because they just created your workspace.

00:10:35.280 --> 00:10:38.280
So I tested these out and they worked.

00:10:38.280 --> 00:10:47.280
So I don't see an error in what you typed, unless we just have a misspelling.

00:10:47.280 --> 00:10:51.280
Okay, let's try something here.

00:10:51.280 --> 00:10:54.280
Okay, so let's take a look at the node and get the label.

00:10:54.280 --> 00:10:55.280
Okay.

00:10:55.280 --> 00:10:58.280
Take a look at the control plane label.

00:10:58.280 --> 00:10:59.280
Okay.

00:10:59.280 --> 00:11:00.980
Notice where it ends with an equals sign.

00:11:00.980 --> 00:11:02.900
And we need a true at the end of it.

00:11:02.900 --> 00:11:05.820
So let's go ahead and add a true.

00:11:05.820 --> 00:11:09.700
So, kubectl label node minikube.

00:11:09.700 --> 00:11:14.200
And type that out with equals true.
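
The fix, roughly (hedged: on minikube the control-plane label normally carries an empty value, so it is overwritten with true to match the pod's nodeSelector):

    kubectl get node minikube --show-labels     # shows node-role.kubernetes.io/control-plane= (empty value)
    kubectl label node minikube node-role.kubernetes.io/control-plane=true --overwrite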

00:11:14.200 --> 00:11:15.200
Yep.

00:11:15.200 --> 00:11:19.080
All right, so your pod won't be running.

00:11:19.080 --> 00:11:22.560
I don't believe, but we can try to apply it again.

00:11:22.560 --> 00:11:25.600
But I don't think this will fix it.

00:11:25.600 --> 00:11:28.800
Try to apply the pod again, but I don't think it's going to run.

00:11:28.800 --> 00:11:33.840
Something is missing with your version of minikube, something changed, or we have an error in that file

00:11:33.840 --> 00:11:40.880
somehow. And I'm looking at my version, which runs just fine, and yours appears to be exactly the same.

00:11:40.880 --> 00:11:50.880
Okay, yeah, one last time, pull up... actually, let me check one thing on my minikube.

00:11:50.880 --> 00:12:09.040
All right, that is pod-tolerated.yaml. So I'm looking at mine. Let's see, NoSchedule.

00:12:09.200 --> 00:12:10.200
Not equal.

00:12:10.200 --> 00:12:12.200
Node selector.

00:12:12.200 --> 00:12:15.200
No, the whole thing is lowercase.

00:12:15.200 --> 00:12:17.200
Tolerations, lowercase.

00:12:17.200 --> 00:12:20.200
Oh, I see what's missing here.

00:12:20.200 --> 00:12:21.200
Okay.

00:12:21.200 --> 00:12:25.200
This, uh, do you see what's missing?

00:12:25.200 --> 00:12:27.200
This is my bad.

00:12:27.200 --> 00:12:30.200
I think I, I made the mistake on this one.

00:12:30.200 --> 00:12:31.200
No?

00:12:31.200 --> 00:12:32.200
Uh, here, let me go back.

00:12:32.200 --> 00:12:33.200
Sorry.

00:12:33.200 --> 00:12:38.200
I think I goofed, but this is a good lesson, actually.

00:12:38.200 --> 00:12:43.860
So, sorry, my slides are stuck, I'm going to show you here.

00:12:46.260 --> 00:12:56.280
Okay, under tolerations, where you see effect, okay, look at containers and look at where

00:12:56.280 --> 00:12:56.920
name starts.

00:12:56.920 --> 00:13:02.140
See how there's a hyphen in front of it? But the hyphen needs to be under the "l" in tolerations.

00:13:03.260 --> 00:13:07.360
So we need a hyphen in front of effect, underneath the "l".

00:13:07.360 --> 00:13:11.100
So instead of the E under the L, we need a hyphen and a space and then move all the

00:13:11.120 --> 00:13:13.280
four of those key-value pairs over.

00:13:13.280 --> 00:13:14.660
And let's go ahead and apply it.
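
A sketch of the corrected spec (the taint values come from the exercise; the container is an assumption). The list-item hyphen sits under the "l" of tolerations, just like a container entry under containers:

    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: "true"
      tolerations:
      - effect: NoSchedule        # hyphen starts the list item, indented under "tolerations"
        key: pod
        operator: Equal
        value: "true"
      containers:
      - name: nginx               # assumed container
        image: nginx:latest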

00:13:14.660 --> 00:13:16.880
Okay, now we'll go ahead and describe that.

00:13:16.880 --> 00:13:21.220
pod, -n, app-a.

00:13:21.220 --> 00:13:23.200
So now if we look at our tolerations,

00:13:23.200 --> 00:13:25.600
we can see pod equals true and no schedule.

00:13:25.600 --> 00:13:27.140
Okay, and so that matches,

00:13:27.140 --> 00:13:28.500
and if we pull up the node,

00:13:28.500 --> 00:13:30.900
yep, describe the node.

00:13:30.900 --> 00:13:34.440
You'll see that that matches the taint. Go to the top.

00:13:34.440 --> 00:13:35.480
There we go, taint.

00:13:35.480 --> 00:13:37.760
Oh, let's go back down a little more.

00:13:37.760 --> 00:13:39.280
There we go, taints,

00:13:39.280 --> 00:13:40.800
pod equals true and no schedule.

00:13:40.800 --> 00:13:42.820
So you can see where you can taint the node,

00:13:42.840 --> 00:13:50.140
you tolerate the pod, otherwise the pod bounces from the node, even though we selected the node

00:13:50.140 --> 00:13:56.500
properly with the node selector. We've now deployed a pod using a namespace, node selector, taints, and tolerations

00:13:56.500 --> 00:14:02.980
on a specific type of node. We can pin it to the control plane, or we can make sure that if something

00:14:02.980 --> 00:14:09.140
doesn't have that toleration, it is automatically rejected from the control plane. Now, can you think

00:14:09.140 --> 00:14:16.140
of why that might be used in production? Exactly. Or if you want to pin it to a storage

00:14:16.140 --> 00:14:21.680
node, for example, right? So your workloads, stateless workloads, if you're running

00:14:21.680 --> 00:14:29.180
stateless workloads, they should stay within their worker or agent nodes. And not a spray

00:14:29.180 --> 00:14:34.160
and pray effect, which is how, unfortunately, some developers use Kubernetes, as they just

00:14:34.160 --> 00:14:39.120
deploy across all nodes. And then you end up with resource constraints on a control

00:14:39.140 --> 00:14:46.580
plane node, for example. So, all right, and now we're going to destroy the namespace app-a and all of its

00:14:46.580 --> 00:14:53.460
resources. In development, yes, it's a quicker way to just delete everything in that namespace.

00:14:53.460 --> 00:15:02.980
Yeah, and we'll verify that it's been destroyed. All right. In lesson two, we learned how to use namespaces

00:15:02.980 --> 00:15:08.660
to separate resources, how namespaces can have constraints for memory and

00:15:08.660 --> 00:15:21.480
CPU, how namespaces can be labeled, how to apply taints to a node to repel pods, how to apply

00:15:21.480 --> 00:15:30.780
tolerations to a pod to enable scheduling to a node, and how to destroy a namespace and all of its

00:15:30.780 --> 00:15:37.520
resources. Yes, yes. So I don't have it in front of me. I think that's lesson four or five.

00:15:38.660 --> 00:15:46.700
All right, here we'll start lesson three.

00:15:46.700 --> 00:15:49.280
Yeah, you feel like you're picking it up.

00:15:49.280 --> 00:15:52.140
So there are a couple of ways to learn Kubernetes.

00:15:52.140 --> 00:15:53.940
It's a steep learning curve.

00:15:53.940 --> 00:16:06.580
And there's a lot of information out there

00:16:06.580 --> 00:16:08.740
on Kubernetes.

00:16:08.740 --> 00:16:13.680
So what I do is because I train with practical Kubernetes,

00:16:13.680 --> 00:16:15.660
so it's comprehensive but practical,

00:16:15.660 --> 00:16:17.780
what you'll encounter day in and day out

00:16:17.780 --> 00:16:22.580
in a cluster. And if you become good at what you're going to encounter day in and day out,

00:16:22.580 --> 00:16:28.340
when you run into that one-off that you may see only once or twice, you can take the time to go

00:16:29.140 --> 00:16:35.860
do that deep dive down that rabbit hole and learn that concept. Oftentimes though, those

00:16:35.860 --> 00:16:41.620
extreme concepts in Kubernetes were put in there by, remember when I mentioned three years

00:16:41.620 --> 00:16:47.700
after Kubernetes started, they became cloud native to bring in the cloud engineers and the

00:16:47.780 --> 00:16:52.400
Cloud engineers brought a lot of tools and concepts with them that weren't Kubernetes-related.

00:16:52.820 --> 00:16:54.700
But they had the funding behind them.

00:16:54.760 --> 00:16:56.320
They supported the Kubernetes teams.

00:16:57.040 --> 00:17:02.820
So they have a lot of concepts that they funded to get put into Kubernetes.

00:17:03.200 --> 00:17:05.760
And that may be the only team that uses that concept.

00:17:05.760 --> 00:17:11.280
And so why spend months studying all of that if you're never going to encounter that in a cluster, right?

00:17:12.780 --> 00:17:15.840
So that's kind of my approach to training.

00:17:15.840 --> 00:17:19.620
and then I like to train with errors, because you'll remember the errors.

00:17:20.060 --> 00:17:24.940
If I give you perfectly formatted YAML files and all you do is just apply for six hours,

00:17:25.380 --> 00:17:27.160
you probably won't remember a whole lot.

00:17:27.800 --> 00:17:31.960
So, okay, we're going to get into deployments.

00:17:32.080 --> 00:17:32.460
All right.

00:17:33.380 --> 00:17:38.920
Deployments enable workloads that do not typically maintain state within the workload.

00:17:39.280 --> 00:17:44.680
When you set the desired state of a deployment, the deployment controller changes the state of the deployment

00:17:44.680 --> 00:17:46.280
to match the desired state.

00:17:48.200 --> 00:17:51.840
Deployments enable a replica set to manage the pods.

00:17:53.700 --> 00:17:57.200
The deployment YAML file enables unique labels

00:17:57.200 --> 00:17:58.200
for each deployment.

00:17:59.720 --> 00:18:02.820
And the spec selector field is used by the replica set

00:18:02.820 --> 00:18:06.120
to find the pods to manage.

00:18:06.920 --> 00:18:10.480
And we'll see that this replica set assigns a unique hash

00:18:10.480 --> 00:18:12.140
at the end of the pod name.

00:18:12.900 --> 00:18:14.660
And that allows the replica set to

00:18:14.680 --> 00:18:18.660
track individual pods within that specific replica set.
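
A sketch of an NGINX deployment along these lines (names and labels are assumptions; the replica count of three matches the exercise). The spec.selector must match the pod template labels so the ReplicaSet can find its pods:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx              # the ReplicaSet tracks pods carrying this label
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest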

00:18:18.660 --> 00:18:23.360
We are going to ensure a fresh minikube cluster. Stop.

00:18:23.360 --> 00:18:24.500
minikube stop.

00:18:24.500 --> 00:18:28.120
I don't think we want to restart the system right now.

00:18:28.120 --> 00:18:31.440
I think that's why your terminal was delayed earlier.

00:18:31.440 --> 00:18:33.740
Looks like you've been downloading updates without even

00:18:33.740 --> 00:18:35.280
realizing it. Automated updates.

00:18:35.280 --> 00:18:37.080
I love them.

00:18:37.080 --> 00:18:42.500
And then minikube delete --all.

00:18:42.500 --> 00:18:44.460
Disconnected from the network.

00:18:44.460 --> 00:18:46.220
What in the world?

00:18:46.220 --> 00:18:49.180
Well, okay, hopefully here.

00:18:49.180 --> 00:18:52.460
You do need the network in there to pull containers in,

00:18:52.460 --> 00:18:55.360
but other than that, you can do it.

00:18:55.360 --> 00:19:00.240
Okay, interesting.

00:19:00.240 --> 00:19:03.680
Yeah, I don't have that online, so I don't know why that is.

00:19:05.200 --> 00:19:07.440
Yeah, so create the NGINX deployment

00:19:07.440 --> 00:19:08.180
YAML file.

00:19:08.180 --> 00:19:15.760
Type that in, I will be right back.

00:19:15.760 --> 00:19:19.500
And note how this is different than the pod we were doing.

00:19:19.500 --> 00:19:22.780
This is a deployment with a replica set,

00:19:22.780 --> 00:19:25.460
and we're using a replica set controller,

00:19:25.460 --> 00:19:28.960
and so we need to label everything.

00:19:28.960 --> 00:19:30.480
Yeah, it looks good.

00:19:30.480 --> 00:19:32.940
Where we can, we'll try to reuse some of these files

00:19:32.940 --> 00:19:34.000
once you type them out.

00:19:34.000 --> 00:19:37.740
Oh, we stopped minikube, but we didn't create a new cluster.

00:19:39.000 --> 00:19:42.700
So that is your kubectl working,

00:19:42.700 --> 00:19:48.900
and it is saying, hey, there's no API server available.

00:19:48.900 --> 00:19:53.000
Yep, that's one of the many errors you'll see.

00:19:53.000 --> 00:19:58.720
So the connection refused, there's no API server available.

00:19:58.720 --> 00:20:04.760
Now, it doesn't tell you there's no API server; it just says connection refused.

00:20:04.760 --> 00:20:13.470
That's an area where it would be nice if they had more verbose statements for the errors.

00:20:13.470 --> 00:20:14.530
So, minikube start.

00:20:14.530 --> 00:20:20.290
We're going to actually use that command quite a bit through our practices over the two days.

00:20:20.290 --> 00:20:23.650
And here in a minute, we'll

00:20:23.650 --> 00:20:27.850
take a break for lunch, and then when we come back after lunch, we'll knock the rest of this

00:20:29.010 --> 00:20:29.770
lesson out.

00:20:29.890 --> 00:20:30.230
All right.

00:20:30.290 --> 00:20:31.390
Let's get the deployments.

00:20:32.770 --> 00:20:37.130
See the ready state, and let's view the replica set.

00:20:37.130 --> 00:20:37.590
All right.

00:20:38.650 --> 00:20:42.930
And notice a unique hash for the replica set added to the deployment name.

00:20:43.870 --> 00:20:46.790
So that is for that particular replica set.

00:20:46.910 --> 00:20:50.130
Now, the next step will be to view the pods.

00:20:50.330 --> 00:20:52.110
Okay, so we take that one step further.

00:20:52.110 --> 00:20:58.770
So we see the name of the pod: the nginx-deployment replica set name, which is 6CF.

00:20:59.190 --> 00:21:08.950
But then notice how each pod has a unique five-character alphanumeric hash at the end of that, and that is the pod hash for each one.

00:21:12.090 --> 00:21:14.670
So replica sets apply two hashes.

00:21:16.190 --> 00:21:18.010
All right, let's describe the deployment.

00:21:18.010 --> 00:21:19.510
Notice that we have events there.

00:21:19.510 --> 00:21:25.090
It says scaled up replica set nginx-deployment from zero to three.

00:21:25.490 --> 00:21:28.570
So it started with zero and then scaled it up to three.

00:21:29.830 --> 00:21:31.690
Actually, hang on.

00:21:32.410 --> 00:21:34.730
Yeah, so, okay.

00:21:35.110 --> 00:21:36.510
So it's three pods, not two.

00:21:36.590 --> 00:21:36.910
All right.

00:21:37.670 --> 00:21:39.130
So we describe the deployment.

00:21:39.250 --> 00:21:39.910
There we go.

00:21:41.750 --> 00:21:43.270
Oh, describe, let's see.

00:21:43.370 --> 00:21:44.650
Okay, yep, replica set.

00:21:45.250 --> 00:21:45.730
Okay.

00:21:46.870 --> 00:21:48.930
So sorry, my slides are stuck there.

00:21:49.510 --> 00:21:58.290
Okay, so we have three. We did it with three instead of two, but we're going to scale it from three to four replicas.

00:21:58.290 --> 00:22:16.550
Yes. Okay. And we're going to verify the scale-up was successful and two new pods were created. Actually, one new pod was created, because we already had three. And it has one new pod hash. So the replica set hash stays, with one new pod hash.
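
The scale-up and scale-down narrated here, as commands (deployment name assumed):

    kubectl scale deployment nginx-deployment --replicas=4   # 3 to 4: one new pod, new pod hash
    kubectl get pods
    kubectl scale deployment nginx-deployment --replicas=3   # 4 to 3: the newest pod is removed
    kubectl get pods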

00:22:16.550 --> 00:22:20.870
So 48 seconds ago, we added one new pod and it has a hash of 4F.

00:22:21.430 --> 00:22:23.270
Was it ZQZ or something?

00:22:24.690 --> 00:22:26.390
Hard to read on there.

00:22:26.490 --> 00:22:26.710
Okay.

00:22:27.510 --> 00:22:30.430
Now we're going to scale it down.

00:22:30.570 --> 00:22:31.290
So very similar.

00:22:32.770 --> 00:22:34.010
We're going to take it from four to three.

00:22:34.750 --> 00:22:36.250
And guess which one it deleted?

00:22:37.010 --> 00:22:38.710
It deleted the most recent one.

00:22:40.750 --> 00:22:41.230
All right.

00:22:41.310 --> 00:22:44.590
So now we're going to delete the deployment and I do that all the time.

00:22:44.670 --> 00:22:46.170
I have a stuck B on my keyboard.

00:22:46.550 --> 00:22:50.540
typical.

00:22:50.540 --> 00:22:53.540
And we'll verify that that was successful.

00:22:53.540 --> 00:22:54.540
Not found, yeah.

00:22:54.540 --> 00:22:56.540
And same with the pods.

00:22:56.540 --> 00:22:59.540
all right.

00:22:59.540 --> 00:23:02.540
Now we're going to go on to stateful sets.

00:23:02.540 --> 00:23:05.540
All right.

00:23:05.540 --> 00:23:09.540
Stateful sets enable our pods to stick to a particular name

00:23:09.540 --> 00:23:12.540
instead of changing when they restart.

00:23:12.540 --> 00:23:16.540
In a deployment, the name will end with a hash

00:23:16.540 --> 00:23:17.540
that changes upon restart.

00:23:17.540 --> 00:23:21.540
So if you were to have restarted those pods, you would have noticed they would

00:23:21.540 --> 00:23:28.740
have had a different hash name. In a deployment, the name will end in a hash that

00:23:28.740 --> 00:23:33.460
changes upon restart. In a stateful set, the name will end with a number that

00:23:33.460 --> 00:23:37.460
starts with a zero and increments from there. A stateful set of two pods will

00:23:37.460 --> 00:23:41.700
contain the same pod name, but each will end with a different number,

00:23:41.700 --> 00:23:47.380
such as statefulset-0 or statefulset-1. A stateful set guarantees

00:23:47.380 --> 00:24:17.360
the order and uniqueness of each pod. This is useful when using database applications where you need to access the same pod using an ID and a connection. When the pod restarts and the pod IP address changes, the name and endpoint stay the same for connection purposes. Unlike deleting a deployment, when you delete a stateful set, there is no guarantee that all of the pods will be deleted. The best practice is to scale the stateful set

00:24:17.380 --> 00:24:25.700
down to zero and then delete the stateful set. Stateful sets typically utilize a headless

00:24:25.700 --> 00:24:31.620
service. However, when deploying databases in a production cluster, you may find that a headless

00:24:31.620 --> 00:24:37.540
service may not serve your needs and a normal service is required. So there are certain connections

00:24:37.540 --> 00:24:47.220
to databases that won't work with a headless service. Stateful sets are typically provisioned with a

00:24:47.220 --> 00:24:55.380
persistent volume, although you will see an example without one later in this lesson. And sometimes upstream

00:24:55.380 --> 00:25:04.320
providers have errors in their headless service. And so when you go to connect using a headless

00:25:04.320 --> 00:25:11.660
service, it actually won't work. And so that has been encountered with different distributions

00:25:11.660 --> 00:25:17.200
of database containers that are out there. And then you

00:25:17.220 --> 00:25:18.980
just have to wait for them to fix it.

00:25:18.980 --> 00:25:22.460
Yeah, so a headless service enables DNS to be used.

00:25:22.460 --> 00:25:24.700
And so we'll go into that a little bit later on

00:25:24.700 --> 00:25:28.460
when we get to DNS, yeah, and services.

00:25:28.460 --> 00:25:32.740
But yeah, so headless enables you to use a sticky connection.

00:25:32.740 --> 00:25:37.860
So for example, if you have a stateless workload,

00:25:37.860 --> 00:25:41.000
let's say you're running an API and it's connected

00:25:41.000 --> 00:25:45.200
to a Mongo database, it will use a library

00:25:45.200 --> 00:25:47.200
to figure out who the primary node is.

00:25:47.220 --> 00:25:49.800
and it will write to that primary node,

00:25:49.800 --> 00:25:53.040
but it will read from all of the nodes.

00:25:53.040 --> 00:25:57.040
And it does that through the use of a sticky connection

00:25:57.040 --> 00:26:01.400
and an actual pod name that no matter if that pod restarts

00:26:01.400 --> 00:26:04.160
or that container restarts, that name remains the same

00:26:04.160 --> 00:26:08.480
so the DNS can always hit that name.

00:26:08.480 --> 00:26:10.200
So it's used primarily for databases

00:26:10.200 --> 00:26:12.200
for a stateful application.

00:26:12.200 --> 00:26:15.220
I've seen it once for a stateful application that wasn't a database, and that's it.

00:26:15.220 --> 00:26:21.220
All right, we are going to create a stateful set YAML file.
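
A sketch of a stateful set like this one (names and image are assumptions). The headless service, clusterIP: None, is what the stateful set's serviceName points at so each pod gets a stable DNS name:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-headless
    spec:
      clusterIP: None             # headless: DNS resolves to individual pod names
      selector:
        app: nginx
      ports:
      - port: 80
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nginx-statefulset
    spec:
      serviceName: nginx-headless
      replicas: 2                 # pods come up in order as nginx-statefulset-0, then -1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest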

00:26:21.220 --> 00:27:06.060
Notice that the stateful set created two pods ending in zero and one.

00:27:06.060 --> 00:27:12.660
So how does this differ from the prior NGINX deployment with replica sets?

00:27:12.660 --> 00:27:15.100
Correct.

00:27:15.100 --> 00:27:17.100
All right.

00:27:17.100 --> 00:27:21.100
We're going to scale the stateful set now.

00:27:21.100 --> 00:27:27.100
So let's scale a stateful set from two pods to one pod.

00:27:27.100 --> 00:27:34.400
So we have one of one, and then when we describe the stateful set, we can see that.

00:27:36.080 --> 00:27:37.580
What is the name of the remaining pod?

00:27:38.160 --> 00:27:41.520
Well, the stateful set deleted pod one, but kept pod zero.

00:27:42.420 --> 00:27:43.380
So it works backwards.

00:27:46.080 --> 00:27:48.020
All right, let's delete the stateful set.

00:27:48.140 --> 00:27:48.700
Correct.

00:27:49.520 --> 00:27:52.380
Yeah, and you can look at the pods right now before we scale.

00:27:52.920 --> 00:27:56.200
So we would have had also an nginx-statefulset-1.

00:27:58.370 --> 00:28:00.210
Yep, but we still have a stateful set.

00:28:00.390 --> 00:28:02.670
You notice that even though it has no pods.

00:28:02.750 --> 00:28:03.410
It's still there.

00:28:03.550 --> 00:28:05.190
Now we're going to delete this stateful set.

00:28:07.190 --> 00:28:07.590
Correct.

00:28:07.970 --> 00:28:09.850
And that's because of their sticky nature.

00:28:10.310 --> 00:28:14.370
And if you try to delete it without scaling it first, it may leave pods.
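
The safe teardown described above, as commands (the name is assumed from the exercise):

    kubectl scale statefulset nginx-statefulset --replicas=0   # highest ordinal spins down first
    kubectl get pods                                           # wait until none remain
    kubectl delete statefulset nginx-statefulset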

00:28:14.670 --> 00:28:14.970
Right.

00:28:15.310 --> 00:28:19.870
The next one, it's 12:32, and the next one will be daemon sets.

00:28:20.030 --> 00:28:22.310
So let's go ahead and take a break.

00:28:23.750 --> 00:28:26.550
And then lunch is 30 minutes.

00:28:26.750 --> 00:28:27.430
How's that work?

00:28:27.430 --> 00:28:29.210
And then we'll come back at,

00:28:30.390 --> 00:28:35.390
Let's see, Eastern Time, we'll come back at 1:02.

00:28:35.390 --> 00:28:35.830
How's that?

00:28:35.830 --> 00:28:39.840
All right, see you then.

00:28:39.840 --> 00:28:45.320
I have a poor network connection today.

00:28:45.320 --> 00:28:47.160
Okay, good.

00:28:47.160 --> 00:28:56.030
Let's go ahead and get started with daemon sets.

00:28:56.030 --> 00:28:59.570
So how do you feel about stateful sets and deployments?

00:28:59.570 --> 00:29:05.350
Okay, unlike deployments, which manage stateless applications,

00:29:05.350 --> 00:29:09.410
or stateful sets, which manage stateful applications,

00:29:09.410 --> 00:29:14.270
daemon sets are designed to run a pod on all

00:29:14.270 --> 00:29:16.430
or specific nodes in a cluster.

00:29:17.650 --> 00:29:21.130
This is useful in logging or monitoring applications

00:29:21.130 --> 00:29:22.890
where you desire a node level service,

00:29:22.890 --> 00:29:25.010
such as a log collector on every node.

00:29:26.250 --> 00:29:30.810
This is used by container networking interfaces like Cilium

00:29:30.810 --> 00:29:33.930
to install a Cilium agent pod on every node.

00:29:33.930 --> 00:29:35.330
Additionally, when a new node

00:29:35.350 --> 00:29:39.190
is created, a daemon set will automatically spin up a pod

00:29:39.190 --> 00:29:41.510
on that new node.

00:29:41.510 --> 00:29:47.110
You can set a daemon set to only run on specific nodes

00:29:47.110 --> 00:29:48.710
using the node selector spec.
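
A sketch of the daemon set built next (image and names assumed from the narration; a nodeSelector under spec.template.spec would restrict it to labeled nodes):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      labels:
        app: fluentd
    spec:
      selector:
        matchLabels:
          app: fluentd
      template:
        metadata:
          labels:
            app: fluentd
        spec:
          containers:
          - name: fluentd
            image: quay.io/fluentd_elasticsearch/fluentd:latest   # assumed tag
            # one pod runs on every node, including nodes added later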

00:29:48.710 --> 00:29:56.180
Okay, we're going to ensure a fresh minikube cluster.

00:29:56.180 --> 00:29:58.460
Correct.

00:29:58.460 --> 00:30:00.340
Okay, so we're actually going to do,

00:30:00.340 --> 00:30:02.820
go ahead and take that down, sorry about that.

00:30:02.820 --> 00:30:05.900
We're going to create a multi-node cluster

00:30:05.900 --> 00:30:07.460
to demonstrate a daemon set.
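
One way to get a multi-node minikube cluster for this (the profile name is an assumption):

    minikube delete --all                     # clean slate
    minikube start --nodes 3 -p multinode     # one control plane plus two unlabeled workers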

00:30:07.460 --> 00:30:09.580
That's a little slow on the slides.

00:30:10.660 --> 00:30:14.180
And I believe you have to delete it every time, unfortunately.

00:30:15.860 --> 00:30:18.740
So they're technically worker nodes, but they don't have a role name.

00:30:19.700 --> 00:30:21.940
So they're just empty nodes.

00:30:23.700 --> 00:30:27.060
So this is not an HA cluster. It's just a single control plane.

00:30:29.540 --> 00:30:32.500
Let's take a look at the pods. All right. What do you notice that's different?

00:30:32.500 --> 00:30:34.500
I think that's their base CNI.

00:30:34.500 --> 00:30:39.660
So, minikube, I believe that's their base CNI that they install.

00:30:40.860 --> 00:30:41.880
Yes, likely.

00:30:42.360 --> 00:30:47.040
So now we're going to deploy a daemon set.

00:30:49.380 --> 00:30:52.340
This slide is misnamed, but this is a daemon set.

00:30:53.620 --> 00:30:56.200
And it's missing the .yaml.

00:30:58.320 --> 00:30:59.420
Let me see here.

00:31:01.790 --> 00:31:05.910
We're going to do fluentd-daemonset.yaml.

00:31:05.910 --> 00:31:07.470
I can't edit the slides.

00:31:07.610 --> 00:31:10.590
on what you see because it doesn't allow me to change anything,

00:31:10.590 --> 00:31:12.450
but I did change it on my other set.

00:31:12.450 --> 00:31:20.170
So I was gonna do an example with Loki for you

00:31:20.170 --> 00:31:27.490
or Prometheus, but it was too involved for a minikube cluster.

00:31:27.490 --> 00:31:29.810
So I try to keep my examples fairly simple

00:31:29.810 --> 00:31:33.110
so you understand the concept instead of getting caught up

00:31:33.110 --> 00:31:39.110
early on in fixing manifest files.

00:31:39.110 --> 00:31:41.970
A lot of things actually don't work with a minikube cluster,

00:31:41.990 --> 00:31:54.810
That has to do with the way they have minimal resources, because it's set up to basically test concepts on high availability and multi-node setups.

00:31:57.110 --> 00:32:03.410
And so they pared down the feature set and they also do some Docker magic.

00:32:05.510 --> 00:32:11.130
So that keeps certain containers from running correctly.

00:32:11.130 --> 00:32:17.490
And you can put the hours in and engineer it, but I wouldn't recommend doing that for a minikube cluster.

00:32:17.610 --> 00:32:19.990
We don't use FluentD too much in Kubernetes anymore.

00:32:19.990 --> 00:32:25.710
It's a cloud tool, not a Kubernetes native tool.

00:32:27.510 --> 00:32:34.190
And Kubernetes native, we would use Loki and Promtail and Node Exporter and Grafana.

00:32:34.370 --> 00:32:41.110
Yeah, and then for your logs, Loki and Promtail are the Kubernetes native.

00:32:41.130 --> 00:32:45.690
log shippers now, if you don't want to pay for a solution.

00:32:45.690 --> 00:32:47.730
So open source.

00:32:47.730 --> 00:32:48.890
But that was too involved.

00:32:48.890 --> 00:32:53.370
I'd have to set up a MinIO, and that's not going to work.

00:32:53.370 --> 00:32:58.810
So that was that was too involved for just this demonstration.

00:32:58.810 --> 00:33:04.810
Oh, sorry, there you go.

00:33:04.810 --> 00:33:08.810
Yes, if you check it now, it's not ready yet.

00:33:08.810 --> 00:33:09.810
What do you think that might be?

00:33:09.810 --> 00:33:12.810
So we have three nodes, right?

00:33:12.810 --> 00:33:16.810
So each node has to download its own image.

00:33:16.810 --> 00:33:18.810
Okay, let's check the pods.

00:33:18.810 --> 00:33:22.810
Just make sure that the pods are downloading.

00:33:22.810 --> 00:33:26.810
All right, we might have an error in our YAML file.

00:33:26.810 --> 00:33:28.810
Let's pull up the YAML file again.

00:33:28.810 --> 00:33:32.810
Let's read it. Actually, let's grep for image and name.

00:31:32.810 --> 00:31:36.810
The image is quay

00:31:36.810 --> 00:31:38.810
dot I-O

00:31:38.810 --> 00:31:42.810
slash fluentd underscore elasticsearch,

00:31:42.810 --> 00:31:44.810
so it's quay.io/fluentd_elasticsearch/fluent.

00:33:44.810 --> 00:33:46.810
Oh, you forgot a D.

00:33:46.810 --> 00:33:48.810
So at the end of Fluent, there's a D missing.

00:33:48.810 --> 00:33:50.810
It's also in the name.

00:33:50.810 --> 00:33:52.810
The name should say fluentd as well.

00:33:52.810 --> 00:33:53.810
Up at the top.

00:33:53.810 --> 00:33:54.810
All right.

00:33:54.810 --> 00:33:55.810
Save it and run it again.

00:33:55.810 --> 00:33:57.810
Actually delete it first.

00:33:57.810 --> 00:33:58.810
Well, delete it.

00:33:58.810 --> 00:34:01.810
Well, I think it's just called fluentd.

00:34:01.810 --> 00:34:02.810
U.C.

00:34:02.810 --> 00:34:03.810
minus F.

00:34:03.810 --> 00:34:04.810
Oh, fluent.

00:34:04.810 --> 00:34:05.810
There you go.

00:34:05.810 --> 00:34:11.810
There we go, now we can apply it. All right, now it's one of those checks again. All right, and

00:34:11.810 --> 00:34:18.770
you can go ahead and describe the daemon set. The desired number of nodes: three. Current

00:34:18.770 --> 00:34:26.450
number of nodes: three. So everything looks good on that. You can see where it created the

00:34:26.450 --> 00:34:33.810
three pods. Each one has a hash so that it can track each pod individually. We're going to

00:34:33.810 --> 00:34:43.250
delete the daemon set, and we're going to clean up the minikube profile. All right, in lesson three we learned

00:34:43.250 --> 00:34:51.090
the difference between deployments, stateful sets, and daemon sets, how deployments create replica sets

00:34:51.090 --> 00:34:59.090
and how to scale replica sets, how stateful sets create unique pods with sticky names, how to scale

00:34:59.090 --> 00:35:04.290
stateful sets, and how daemon sets can be used for cluster-wide applications.

00:35:04.470 --> 00:35:09.450
Pods may take a few seconds longer to deploy in multi-node clusters due to downloading multiple images,

00:35:09.870 --> 00:35:10.870
one for each node.