4 videos 📅 2022-11-21 09:00:00 Africa/Blantyre


United Arab Emirates - Certified Kubernetes Administrator (CKA) - exam preparation

                WEBVTT

00:00:01.560 --> 00:00:08.700
If you are disconnected from the training room, please feel free to join back.

00:00:16.680 --> 00:00:28.400
Okay, so it can be because in the volume we are referring to an NFS server,

00:00:28.400 --> 00:00:33.900
but this NFS server doesn't exist, so it's a valid error.

00:00:34.020 --> 00:00:40.120
It tries to mount the volume, but then it's unable to reach the NFS server, right?

00:00:40.620 --> 00:00:45.060
So I would recommend you to replace this with hostPath.

00:00:45.560 --> 00:00:51.500
Maybe if you go to the 1.5 pod volume YAML, replace the NFS with the hostPath,

00:00:51.500 --> 00:00:55.900
because it's available anyways in the host machine.
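
For reference, a minimal sketch of the change being suggested, assuming a simple pod manifest (the names and paths here are illustrative, not the actual contents of the 1.5 file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod          # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    # was: nfs: { server: <unreachable>, path: /exports }
    hostPath:
      path: /tmp/data       # a directory on the node itself
      type: DirectoryOrCreate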

00:00:57.440 --> 00:00:58.840
Welcome.

00:01:29.180 --> 00:01:36.380
Jury, what you are trying to test is a valid thing, but the host is not your Ubuntu machine.

00:01:36.520 --> 00:01:38.460
The host is the Minikube machine.

00:01:39.220 --> 00:01:42.360
So you need to SSH to the Minikube.

00:01:42.760 --> 00:01:44.380
Can you type Minikube?

00:01:44.540 --> 00:01:46.820
Yeah, Minikube SSH and you will see that.

00:01:47.220 --> 00:01:48.480
Okay, okay, perfect.

00:02:19.600 --> 00:02:20.560
Minikube SSH.

00:02:20.580 --> 00:02:21.660
Welcome. To come out,

00:02:21.660 --> 00:02:23.660
You can just type logout, that will do.

00:02:36.320 --> 00:02:39.360
Last four minutes to complete this activity.

00:02:54.360 --> 00:02:55.480
Okay, sure.

00:02:55.620 --> 00:02:56.940
So port forward.

00:02:57.380 --> 00:03:01.740
Okay, the port number within this container is 8080.

00:03:01.840 --> 00:03:07.520
We used 80 for Nginx, but for this custom application, it is exposed in port number 8080.

00:03:07.640 --> 00:03:09.120
8080 colon 8080.

00:03:09.300 --> 00:03:11.380
I mean, you specified it as 80, right?

00:03:11.380 --> 00:03:11.740
80.

00:03:13.160 --> 00:03:14.560
So this part.

00:03:16.380 --> 00:03:17.520
You see my screen, right?

00:03:17.720 --> 00:03:20.660
Okay, so let me explain it here.

00:03:21.560 --> 00:03:24.640
You specified it as 80 colon 80.

00:03:24.640 --> 00:03:26.640
So the right side is the port inside the container.

00:03:26.700 --> 00:03:29.660
But for our custom application, it is exposed in 8080.

00:03:31.220 --> 00:03:32.620
So if you correct it, then it will do.

00:03:32.740 --> 00:03:41.560
No, no, it can't be the same, because in the cart application, that web application is accessible on port 8080.

00:03:42.760 --> 00:03:47.300
But in case of Nginx, it was accessible in port 80.

00:03:47.300 --> 00:03:50.900
So we used 80 for the Nginx, but for the cart, it is 8080.

00:03:51.780 --> 00:03:55.140
So this can be any port in the host machine, available port.

00:03:55.140 --> 00:03:57.740
This can be 8081, 8082, 8080.

00:03:58.120 --> 00:03:59.000
That doesn't matter.

00:03:59.400 --> 00:04:07.400
But the one on the right side is the port number within the container where your application is exposed.
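
In command form, the mapping being described looks like this (pod names here are illustrative):

```shell
# Syntax: kubectl port-forward <pod> <local-port>:<container-port>
# The left side can be any free local port (8080, 8081, 8082...);
# the right side must be the port the app inside the container listens on.
kubectl port-forward nginx-pod 8081:80    # nginx listens on 80
kubectl port-forward cart-pod 8080:8080   # the cart app listens on 8080
```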

00:04:10.920 --> 00:04:14.500
Okay, kubectl get pods, and it is still in the terminating state.

00:04:14.740 --> 00:04:15.520
Can you do it again?

00:04:15.520 --> 00:04:16.820
Get pod, get pods.

00:04:16.900 --> 00:04:18.180
Okay, it's already deleted.

00:04:20.840 --> 00:04:21.280
Welcome.

00:04:21.620 --> 00:04:22.640
Good job, Hermo.

00:04:22.760 --> 00:04:28.560
Okay, guys, I need to stop you guys here because it's already time to move to the next topic.

00:04:28.680 --> 00:04:30.720
And I hope you are clear with the concepts.

00:04:30.760 --> 00:04:31.920
You can try this later.

00:04:32.180 --> 00:04:36.500
Also, let's proceed to the next kubernetes resource.

00:04:37.140 --> 00:04:46.040
Okay, so in our examples, we created multiple pods, the cart pod and then the Nginx pod.

00:04:46.480 --> 00:04:47.700
We created it all.

00:04:48.160 --> 00:04:54.000
So if I go back here and if I do kubectl get pods, there are a couple of pods that are running.

00:04:54.540 --> 00:04:55.840
I can delete the pods.

00:04:58.080 --> 00:05:13.000
kubectl delete pod — the cart pod, my pod, all of them — that's going to delete the pods.

00:05:13.460 --> 00:05:14.480
So what happens?

00:05:15.640 --> 00:05:17.800
I don't have a pod with the name my-pod.

00:05:18.160 --> 00:05:20.620
kubectl delete pod my-pod.

00:05:21.120 --> 00:05:22.540
Okay, it's already deleted maybe.

00:05:34.760 --> 00:05:37.340
kubectl delete pod.

00:05:42.840 --> 00:05:44.480
Come on.

00:05:48.040 --> 00:05:49.300
Nothing happened.

00:05:49.320 --> 00:05:51.320
We deleted a pod and then it got deleted.

00:05:51.560 --> 00:05:51.980
Correct.

00:05:51.980 --> 00:05:56.260
So there is no pod that is running right now in Luis's namespace.

00:05:56.760 --> 00:06:05.980
So now instead of creating a pod, I'm going to create a top-level resource, another Kubernetes resource.

00:06:07.900 --> 00:06:17.480
I'm going to introduce new Kubernetes resources, and those new resources are going to bring some new capabilities or features to the table.

00:06:18.380 --> 00:06:24.880
Okay, for example, we know we use the kubectl run command to create a pod.

00:06:25.060 --> 00:06:29.480
This time I'm going to create one top-level resource called deployment.

00:06:31.960 --> 00:06:47.080
kubectl create deployment with the image nginx, and you can even specify how many replicas you want to run.

00:06:47.080 --> 00:06:59.220
Let's say I want five replicas, or five instances, of the nginx pod to be created for a deployment with the name my-deploy.

00:06:59.620 --> 00:07:02.520
Let's say I'm going to call this deployment my-deploy.

00:07:03.540 --> 00:07:09.840
Okay, so if I press enter, this is going to create a pod and some top-level resources.

00:07:09.840 --> 00:07:16.320
As usual, let's go with the dry run approach.

00:07:16.500 --> 00:07:23.820
Let's create this with a YAML file: dry-run as client, output as YAML.

00:07:25.300 --> 00:07:35.460
And here you can see now the kind says Deployment, and some metadata like name and labels, and we have a specification.

00:07:36.040 --> 00:07:37.760
So here you can see two things.

00:07:37.760 --> 00:07:43.500
There is a spec, and under that, under the template,

00:07:43.500 --> 00:07:44.360
we have another spec.

00:07:44.420 --> 00:07:55.580
So this is the specification that corresponds to the deployment resource, and this one corresponds to the pod specification, because the deployment at the end

00:07:55.580 --> 00:07:57.120
is going to create the pods.

00:07:57.920 --> 00:07:58.760
This is the pod specification.

00:07:58.920 --> 00:08:08.680
Basically, this is the place where you are going to put all the container specification, with resource requests and limits, liveness probe, readiness probe, volume mounts.

00:08:08.680 --> 00:08:10.080
Everything will go here. A pod

00:08:10.400 --> 00:08:14.820
specification is encompassed by a deployment specification.
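
The two nested specs being described look roughly like this in the generated YAML (a trimmed sketch, not the exact dry-run output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:                    # deployment spec
  replicas: 5
  selector:
    matchLabels:
      app: my-deploy
  template:              # pod template
    metadata:
      labels:
        app: my-deploy
    spec:                # pod spec: containers, probes, volumes all go here
      containers:
      - name: nginx
        image: nginx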

00:08:15.620 --> 00:08:19.400
Okay, so now I'm going to apply this file.

00:08:25.160 --> 00:08:26.160
My deploy.

00:08:28.480 --> 00:08:30.660
Okay, I didn't write it to a file yet.

00:08:30.740 --> 00:08:32.820
I'm going to write it to the file.

00:08:40.240 --> 00:08:41.040
Come on.

00:08:41.220 --> 00:08:44.060
My keyboard is having some split second delay.

00:08:44.060 --> 00:08:47.060
my-deployment.yaml.

00:08:48.020 --> 00:08:49.100
There you go.

00:08:49.840 --> 00:08:51.900
And now I'm going to apply this one.

00:08:52.100 --> 00:08:54.100
kubectl apply -f

00:08:54.800 --> 00:08:56.580
my-deployment.yaml.

00:08:57.120 --> 00:09:01.000
And it says deployment.apps.

00:09:01.020 --> 00:09:02.160
my-deploy created.

00:09:03.340 --> 00:09:06.120
So if I do kubectl get pods —

00:09:08.340 --> 00:09:10.960
Because I specified replicas as five,

00:09:11.680 --> 00:09:15.860
I can see five pods are created by this deployment.

00:09:17.420 --> 00:09:28.620
The name of each pod starts with my-deploy, so I can easily recognize that these five pods are now created by a deployment top-level resource.

00:09:29.880 --> 00:09:34.520
Okay, so now what I'm going to do is I'm going to pick one pod.

00:09:34.520 --> 00:09:40.960
Let's say I'm going to take the pod that ends with XXLBP.

00:09:41.200 --> 00:09:42.160
Take this pod.

00:09:43.200 --> 00:09:46.840
And I'm going to try deleting that pod.

00:09:48.160 --> 00:09:51.940
kubectl delete pod

00:09:53.540 --> 00:09:54.480
With this name.

00:09:55.160 --> 00:09:57.440
The usual delete that we do.

00:09:58.480 --> 00:10:00.160
And it says deleted.

00:10:00.400 --> 00:10:03.580
So which means out of five we deleted one.

00:10:03.580 --> 00:10:07.320
So we expect four pods.

00:10:09.920 --> 00:10:20.040
But we got five, and there is one that was created 11 seconds ago, which means we deleted this pod.

00:10:20.240 --> 00:10:22.180
It's actually deleted.

00:10:23.000 --> 00:10:27.520
But the next moment, a replacement is created immediately.

00:10:31.260 --> 00:10:39.520
No matter how many times you delete this pod, a replacement will get created again and again and again.
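
A quick sketch of the demo just shown (the pod suffix is illustrative):

```shell
kubectl get pods                     # five pods named my-deploy-<hash>-<suffix>
kubectl delete pod my-deploy-xxlbp   # delete one of them
kubectl get pods                     # still five; one is only seconds old
```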

00:10:40.940 --> 00:10:45.620
But that is not the case when we deleted the nginx pod, my-pod, or the cart pod,

00:10:46.660 --> 00:10:52.900
because what we created earlier is a standalone pod.

00:10:54.840 --> 00:10:59.060
Nobody is there at the top to manage it; in that case, if we delete it, nothing happens.

00:10:59.060 --> 00:11:05.540
But what we created now is a deployment.

00:11:05.540 --> 00:11:09.900
First we created a deployment, and this deployment internally

00:11:10.620 --> 00:11:17.340
created one replica set, and this replica set is the one that created those five pods.

00:11:17.360 --> 00:11:20.120
So we specified the replicas as five here.

00:11:20.120 --> 00:11:28.300
This information is cascaded down to the replica set as five, and this replica set is the one that created these pods.

00:11:28.300 --> 00:11:43.780
So the moment when I delete a pod, this is the one that creates a replacement pod, because the only duty of the replica set is to always ensure the current state matches the desired state.

00:11:44.220 --> 00:11:46.680
Current state matches the desired state.

00:11:46.680 --> 00:11:52.640
The desired state is five, which is what was submitted by the developer or the administrator who created this deployment.

00:11:54.300 --> 00:11:58.920
The moment when I delete one, the current state becomes four.

00:11:59.300 --> 00:12:00.720
The desired state is five.

00:12:00.840 --> 00:12:01.680
So there is a mismatch.

00:12:02.780 --> 00:12:08.320
So the replica set spins up one additional pod to bring it back to the desired state.

00:12:08.320 --> 00:12:15.000
So basically this guy is the one who is steering it towards the desired state, always.

00:12:15.000 --> 00:12:20.000
Okay, because this replica set is running.

00:12:20.180 --> 00:12:24.420
This replica set is running an infinite

00:12:25.280 --> 00:12:31.440
reconciliation loop; its logic always makes sure the current state matches the desired state.

00:12:31.780 --> 00:12:32.920
That's its only duty.

00:12:32.920 --> 00:12:33.360
Okay.

00:12:33.640 --> 00:12:41.040
So if I go back here — and the question is, will it also delete one if you go above the desired state?

00:12:41.040 --> 00:12:42.000
Yeah, yeah.

00:12:42.300 --> 00:12:55.460
If you have five and you add one more, and the desired state is five, then the next moment when the replica set performs reconciliation, the desired state is five.

00:12:55.580 --> 00:12:56.840
Current state is six.

00:12:57.140 --> 00:13:02.060
So it will randomly pick one and delete it to bring it back to the desired state.

00:13:03.060 --> 00:13:08.940
So if you look at here, kubectl get deploy.

00:13:09.820 --> 00:13:16.940
So this is the deployment we created, and this deployment created a replica set.

00:13:20.540 --> 00:13:31.640
This replica set is created by this deployment, which cascaded that configuration here, and this replica set is the one that created those five pods.

00:13:31.640 --> 00:13:34.880
You can see the pod names start with the name of the replica set.

00:13:40.340 --> 00:13:40.940
Okay.

00:13:41.440 --> 00:13:47.920
So which means, if I want to scale up or scale down, there are a couple of ways.

00:13:48.020 --> 00:13:58.580
One, you can imperatively use the kubectl scale command — deploy my-deploy — and you can even specify the replicas.

00:13:58.580 --> 00:14:02.600
Let's say I want to scale down from five to three.

00:14:03.420 --> 00:14:07.320
And if I do kubectl get pods, two pods will be terminated.

00:14:07.560 --> 00:14:08.520
I have three.

00:14:08.940 --> 00:14:11.140
If I do more count, then it's a scale up.

00:14:11.320 --> 00:14:13.260
If I do less count, it's going to be a scale down.

00:14:13.460 --> 00:14:13.660
Right.

00:14:14.020 --> 00:14:15.200
This is imperative scaling.

00:14:16.020 --> 00:14:22.720
Or, declaratively, you update the YAML file and then you apply it.

00:14:22.880 --> 00:14:25.860
You need to update the replicas property in the YAML file,

00:14:26.720 --> 00:14:28.400
and then apply it.

00:14:29.380 --> 00:14:33.060
That is also one way to increase or decrease the scale count.
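
The two scaling approaches side by side, as a sketch:

```shell
# Imperative: scale directly on the deployment
kubectl scale deploy my-deploy --replicas=3

# Declarative: edit the replicas: field in the YAML, then re-apply
kubectl apply -f my-deployment.yaml
```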

00:14:33.880 --> 00:14:34.560
Okay.

00:14:35.160 --> 00:14:40.480
So this scale command is also applicable for a replica set.

00:14:40.740 --> 00:14:41.480
You can also scale.

00:14:44.200 --> 00:14:44.880
Okay.

00:14:44.880 --> 00:14:55.960
For example, kubectl scale rs — the scale command is applicable for the replica set and the deployment.

00:14:56.280 --> 00:15:03.080
But scale is not applicable for the pod, which means you can't do something like kubectl scale pod.

00:15:04.480 --> 00:15:07.720
That's meaningless, because these are top-level resources.

00:15:07.720 --> 00:15:11.760
And if we scale them, it actually scales the pods, isn't it?

00:15:11.760 --> 00:15:13.860
So I'm going to scale the replica set.

00:15:15.700 --> 00:15:21.620
And I'm going to specify the replicas as five right now.

00:15:22.820 --> 00:15:28.100
That replica set is holding three as its state because we already scaled the deployment.

00:15:28.100 --> 00:15:31.080
If I do — oh, I misspelled the replicas.

00:15:36.020 --> 00:15:39.080
And it says RS deploy.

00:15:41.640 --> 00:15:46.560
This is — I think I pasted a pod name here.

00:15:52.240 --> 00:15:54.460
This is the name of the replica set.

00:15:54.460 --> 00:15:56.460
So it says scaled.

00:15:57.760 --> 00:16:02.460
So which means, if I do kubectl get rs, I expect it to have —

00:16:03.800 --> 00:16:08.080
It was three earlier, so I'm going to scale it to five.

00:16:09.720 --> 00:16:13.160
And it says scaled; if I do get rs —

00:16:14.540 --> 00:16:19.220
I gave five and it says scaled, but then it still shows three.

00:16:20.960 --> 00:16:23.220
And if you look at the pods —

00:16:24.660 --> 00:16:25.720
There are only three pods.

00:16:27.640 --> 00:16:35.700
The reason behind that is: yes, you can scale a replica set, but that replica set is not a standalone replica set.

00:16:37.360 --> 00:16:40.320
This replica set is now managed by a top level resource.

00:16:40.560 --> 00:16:46.640
So even though it accepted — you cut into this hierarchy and tried to scale it — the next moment

00:16:46.640 --> 00:16:50.580
the deployment overrode it: hey, I am the top level.

00:16:50.700 --> 00:16:52.300
I am the one who created you.

00:16:52.460 --> 00:16:56.740
So you should comply with my desired state.

00:16:56.900 --> 00:16:58.460
So that is three.

00:16:58.760 --> 00:17:03.160
So even though it accepted five, it was immediately overridden back to three by the top-level resource.

00:17:05.460 --> 00:17:07.220
Okay, so that is number one.

00:17:07.400 --> 00:17:10.660
So which means you need to scale it only at the top level.

00:17:10.660 --> 00:17:13.480
And if you try to delete a pod, it's going to be created again and again.

00:17:13.480 --> 00:17:16.880
Similarly, if you try to delete the replica set, it will get created again.

00:17:17.760 --> 00:17:20.980
So the only way to delete is deleting the deployment.

00:17:21.100 --> 00:17:23.680
If I do kubectl

00:17:23.940 --> 00:17:29.080
delete — there are two ways to delete: one is to specify the name of the deployment like this.

00:17:29.460 --> 00:17:35.420
Or if you have the YAML file, you can also specify the name of the file like this.

00:17:36.100 --> 00:17:40.000
Just like how you used it to deploy, you can use the YAML file.

00:17:40.120 --> 00:17:42.720
So it will read the YAML file, match it with the current state,

00:17:42.740 --> 00:17:45.820
find the object, and then delete that resource.
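
The two delete styles, as a sketch:

```shell
# By resource name
kubectl delete deploy my-deploy

# By the YAML file that created it
kubectl delete -f my-deployment.yaml
```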

00:17:47.160 --> 00:17:51.220
Okay, so I'm going to delete this deployment.

00:17:53.160 --> 00:17:56.620
If I press enter, this is going to delete the deployment.

00:17:57.820 --> 00:18:01.960
The resource replica set it created because replica set is getting deleted.

00:18:01.960 --> 00:18:04.000
It will also delete all those three parts.

00:18:04.000 --> 00:18:06.540
So basically the whole hierarchy will be deleted.

00:18:08.260 --> 00:18:18.920
Okay, but then just for demonstration's sake, I'm going to delete with the cascade flag as false.

00:18:23.200 --> 00:18:29.940
So what it means is: deployment, replica set, pods.

00:18:30.960 --> 00:18:38.420
I'm just deleting only this top level resource, which means I don't want the delete to cascade down.

00:18:38.560 --> 00:18:40.120
So I set the cascade to false.

00:18:40.120 --> 00:18:45.580
So only the top-level resource will be deleted, which means the replica set and pods will still be there.
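
As a sketch of the cascade demo — note that in newer kubectl versions (v1.20+) `--cascade=false` is deprecated in favour of `--cascade=orphan`:

```shell
# Delete only the deployment, leaving the replica set and pods behind
kubectl delete deploy my-deploy --cascade=orphan   # older clients: --cascade=false

kubectl get deploy    # gone
kubectl get rs,pods   # replica set and its pods are still there
```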

00:18:45.580 --> 00:18:59.360
So if I do kubectl get deploy, there is no deployment.

00:18:59.620 --> 00:19:08.920
But if I do kubectl get rs, it's still running with the status as three, and those three pods will also be running.

00:19:08.920 --> 00:19:25.460
So at this point, if I scale the replica set, it will work perfectly fine, because this replica set is now a standalone replica set.

00:19:26.620 --> 00:19:32.120
There is nobody at the top of this guy to override it.

00:19:32.120 --> 00:19:34.180
You can see it's already scaled up.

00:19:34.180 --> 00:19:41.320
So if I do kubectl get pods, I will already see five pods managed by this replica set.

00:19:44.260 --> 00:19:52.320
Okay, so the same way, if you delete a replica set, it's going to delete the replica set as well as all the five pods.

00:19:54.200 --> 00:20:04.120
But just for the demo's sake, I'm going to delete the replica set with the cascade flag as false.

00:20:19.180 --> 00:20:23.940
Okay, so kubectl get RS.

00:20:27.440 --> 00:20:37.740
kubectl delete rs my-deploy-77d574f4d8.

00:20:38.060 --> 00:20:41.100
My bad, I missed the full name of the replica set.

00:20:42.120 --> 00:20:45.140
Okay, so if I do kubectl get RS.

00:20:46.140 --> 00:20:55.520
There is no replica set running, but if I do kubectl get pods — because I specified cascade as false —

00:20:57.460 --> 00:21:03.380
that deleted only that top-level replica set resource, which means I have all those five pods.

00:21:05.000 --> 00:21:12.320
All these five pods are now orphans, or in other terms, they are all already standalone pods.

00:21:12.320 --> 00:21:18.920
There is no top-level resource that is managing them, which means if I delete one, it will simply get deleted.

00:21:19.240 --> 00:21:22.360
Nobody is there at the top to create a replacement or whatsoever.

00:21:33.220 --> 00:21:37.640
Okay, so four is five minutes on four, right?

00:21:38.080 --> 00:21:46.360
The reason why I explained this with cascade option is just to provide a better understanding on this hierarchy.

00:21:46.360 --> 00:21:50.800
The deployment creates the replica set; the replica set creates the pods.

00:21:51.820 --> 00:21:55.440
So which means, if Victoria —

00:21:57.220 --> 00:22:03.320
if she has an application to deploy in Kubernetes, she can deploy it as a pod.

00:22:04.220 --> 00:22:09.060
She can deploy it as a replica set, or she can deploy it as a deployment.

00:22:10.900 --> 00:22:21.060
If she deploys it as a pod, then this is more like a pet that she needs to take care of, because if it goes down, she needs to manually bring it up.

00:22:22.360 --> 00:22:28.160
Right. Everything she needs to do manually when it comes to maintaining the state and all those things.

00:22:29.540 --> 00:22:36.720
But still she can use all the liveness probes, readiness probes, resource requests and limits, because those are at the pod level. A replica set

00:22:36.720 --> 00:22:42.100
she will go for if she wants to maintain a specific count.

00:22:42.100 --> 00:22:46.960
Let's say all the time she wants to make sure five instances of the application are running.

00:22:47.760 --> 00:22:52.960
Then she can create a standalone replica set with the kind as ReplicaSet.

00:22:54.360 --> 00:22:56.500
And she can specify the replicas as five.

00:22:56.740 --> 00:22:59.420
So this will do that CS-equals-DS job.

00:22:59.420 --> 00:23:05.760
It will make sure that, all the time, five instances of the pods are running across the cluster.

00:23:06.720 --> 00:23:22.340
A deployment I will go for if you are looking for zero-downtime deployment, which means you want to roll out a new version of the application during peak business hours with zero downtime.

00:23:22.800 --> 00:23:31.500
Then at the deployment resource level, we can specify something called strategy with which we can make use of this feature.

00:23:32.740 --> 00:23:34.620
For Victoria's application —

00:23:35.280 --> 00:23:38.820
She is performing multiple deployments in a day.

00:23:39.700 --> 00:23:41.840
And she really needs these features.

00:23:42.420 --> 00:23:43.300
If something goes wrong,

00:23:43.300 --> 00:23:48.640
she wants to immediately undo that version's deployment.

00:23:48.740 --> 00:23:54.100
If she's looking for features like this, then she should deploy the application as a deployment.

00:23:55.500 --> 00:24:02.680
But let's say this application changes rarely, which means once a year or once in two years you will do some code changes.

00:24:04.000 --> 00:24:06.400
In those cases, I don't want this undo feature.

00:24:06.420 --> 00:24:08.280
I don't want this zero downtime feature.

00:24:09.020 --> 00:24:10.380
I don't need any of this.

00:24:10.540 --> 00:24:13.920
All I need is as long as I have five running, then it is OK.

00:24:14.080 --> 00:24:16.600
Then she can deploy it as a replica set.

00:24:16.820 --> 00:24:18.980
But if all Victoria has is just one application —

00:24:18.980 --> 00:24:23.900
Just one instance will run in the cluster and she will take care of it on her own.

00:24:24.280 --> 00:24:25.660
then she can run it as a pod.

00:24:25.980 --> 00:24:27.180
So you got the idea, right?

00:24:27.340 --> 00:24:28.740
So you are going to run the container.

00:24:28.840 --> 00:24:37.280
But based on what capabilities and features you want, you are going to deploy it as one of the top-level resources.

00:24:38.760 --> 00:24:39.440
OK.

00:24:41.140 --> 00:24:55.580
So which means, going back to the model: for every microservice M1, M2, M3, you are going to create a deployment, and that's going to create the pods.

00:24:56.440 --> 00:24:57.840
Let's say the replicas is five.

00:24:58.180 --> 00:25:00.380
Then it's going to create five pods. For M2,

00:25:00.380 --> 00:25:04.520
you will create a deployment that's going to create its own replica set.

00:25:04.520 --> 00:25:12.120
Maybe here you set the replicas to a different count, and for M3 you will create a deployment that's going to create the pods.

00:25:12.360 --> 00:25:15.460
Let's say here you have three; three pods will be running.

00:25:15.460 --> 00:25:23.120
OK. So every microservice you are going to deploy as a deployment, and the pods will be running.

00:25:23.740 --> 00:25:29.520
So now we are clear with the pod; we already dove into it.

00:25:30.320 --> 00:25:32.340
We are also clear with replica set.

00:25:32.340 --> 00:25:35.620
Its only duty is the CS-equals-DS job.

00:25:35.660 --> 00:25:37.240
Nothing more than that.

00:25:38.560 --> 00:25:40.200
And the deployment is there.

00:25:41.240 --> 00:25:43.120
But we are yet to deep dive into it.

00:25:43.420 --> 00:25:48.740
Why it is there — features like strategy, rolling update — those things we are yet to discuss.

00:25:48.880 --> 00:25:54.340
But it's there to provide some additional capabilities or features for zero-downtime deployment and that stuff.

00:25:55.480 --> 00:25:55.920
Fine.

00:25:55.920 --> 00:26:02.120
So if I delete this pod, then this replica set takes immediate action to create a replacement.

00:26:02.700 --> 00:26:07.060
If I delete this one, only this replica set takes an action.

00:26:07.620 --> 00:26:07.840
Correct.

00:26:08.240 --> 00:26:13.880
So how do these top-level resources identify their child resources?

00:26:15.260 --> 00:26:24.740
How does a deployment identify its replica set and its pods? How does a replica set identify its pods?

00:26:29.540 --> 00:26:30.260
Yeah.

00:26:31.640 --> 00:26:33.220
Yes. Yes.

00:26:33.220 --> 00:26:35.860
Yes, you can quarantine a pod.

00:26:36.160 --> 00:26:41.380
You can adopt a pod — I will explain that while we talk about labels and label selectors.

00:26:41.380 --> 00:26:42.920
Yes that is very much possible.

00:26:43.420 --> 00:26:49.740
So how do these top-level resources identify the child resources?

00:26:49.980 --> 00:27:00.380
Is it by the name? Because if you look at the names, the name of the pod starts with the name of the replica set, and the name of the replica set starts with the name of the deployment.

00:27:00.940 --> 00:27:07.820
That's the convention that it uses but that is not the logic that it uses to identify the child resources.

00:27:08.880 --> 00:27:27.940
How it works — or how this hierarchy works, or how any hierarchy in Kubernetes works — is by one simple thing called labels. Labels are one of the simple things in Kubernetes,

00:27:28.320 --> 00:27:35.360
but a powerful one — simple and powerful.

00:27:36.600 --> 00:27:40.980
Okay, let me explain how it is simple and powerful, right?

00:27:41.720 --> 00:27:43.520
Let's take the same deployment scenario.

00:27:43.520 --> 00:28:00.360
You created a deployment with the name my-deploy, correct? Because while creating the deployment you didn't specify any labels explicitly, by default Kubernetes will assign a label to this resource.

00:28:01.590 --> 00:28:14.870
Key equals value: app equals my-deploy. It will assign a label like this — app is the key, my-deploy is the value. And then this deployment is creating a replica set; after the replica set is created,

00:28:15.270 --> 00:28:26.630
it will assign this label to the replica set. In addition to this label, Kubernetes will also add a label called pod-template-hash.

00:28:27.410 --> 00:28:31.470
When this replica set is created, right,

00:28:31.490 --> 00:28:39.330
it will get a unique ID, a long random hash, like a14bck, something like this.

00:28:39.330 --> 00:28:47.050
So it will also put that as a label here: pod-template-hash as the key and that long random hash as the value.

00:28:47.670 --> 00:28:54.270
So basically the replica set will have two labels, and this replica set created those five pods.

00:28:54.270 --> 00:28:58.030
So on all those five pods, you will see these two labels.

00:28:58.130 --> 00:29:07.810
Okay, so basically step one is assigning the labels. How the labels are assigned,

00:29:07.810 --> 00:29:14.330
I told you already: the deployment put a label, and this replica set created these pods and then assigned those labels on these resources.

00:29:14.910 --> 00:29:18.570
You can assign a label to any Kubernetes resource.

00:29:18.570 --> 00:29:23.910
If I go back here, let's say if you want to view the labels.

00:29:24.410 --> 00:29:28.790
kubectl get — or you can describe it.

00:29:28.790 --> 00:29:39.630
That is one way, and you can see the labels; or you can simply include this flag, --show-labels, and that will show an additional column where we can quickly see the labels.

00:29:39.630 --> 00:29:43.250
So for example, the previously created deployment.

00:29:44.810 --> 00:29:54.410
It has four pods, and each has those two labels on it: the label that came all the way from the deployment and then the label from the replica set.

00:29:54.670 --> 00:29:57.150
Basically these two labels are there on the replica set.

00:29:57.970 --> 00:30:07.030
Okay, so you can assign a label to any resource: pod, deployment, replica set, node — any Kubernetes resource.

00:30:07.030 --> 00:30:13.350
You can assign a label. For example, I can assign a label to this pod.

00:30:13.630 --> 00:30:18.090
kubectl label the pod

00:30:19.030 --> 00:30:20.290
With the name.

00:30:23.090 --> 00:30:27.130
And key equals value. Let's say env equals prod.

00:30:30.370 --> 00:30:37.130
And if I do get pods with --show-labels, you can see that label is assigned here.

00:30:38.650 --> 00:30:44.390
Okay, if I want to delete a label, then the same key with a minus.

00:30:45.650 --> 00:30:48.310
This is going to delete that entire label.

00:30:51.550 --> 00:30:55.630
As you can see, I no longer have that label env equals prod.

00:30:55.730 --> 00:31:04.310
As simple as that, right? So going back here, that's how the deployments are assigning the labels to the resources.

00:31:04.510 --> 00:31:07.850
Fine. This is about assigning the label.

00:31:08.630 --> 00:31:10.290
This is just.

00:31:10.770 --> 00:31:12.650
Part one of the story.

00:31:12.650 --> 00:31:18.650
Part two of the story is that these labels will be used

00:31:20.770 --> 00:31:22.690
in the selector queries.

00:31:25.990 --> 00:31:33.090
Which means, if I go back here — I'm going to assign that label back.

00:31:38.770 --> 00:31:42.090
Where is the label command to assign it?

00:31:48.230 --> 00:31:49.910
Environment equals dev.

00:31:52.010 --> 00:31:57.230
Okay, I picked a pod, I assigned a label. If I do kubectl get pods —

00:32:03.490 --> 00:32:05.970
It shows four pods.

00:32:06.150 --> 00:32:09.670
So now I can write a selector query.

00:32:10.070 --> 00:32:12.230
With the --selector flag.

00:32:13.890 --> 00:32:22.230
And here I can specify: get all the pods that have the label environment equals dev.

00:32:23.490 --> 00:32:30.850
So as you can see, only one pod comes back, because that's the only one that has that label.

00:32:31.070 --> 00:32:41.090
So similarly you can do some more things, like: all the pods whose label doesn't have this value, or that have a key at all, or whose value is in one of a set.

00:32:41.090 --> 00:32:46.370
So there are many, many operators that you can use here to filter down the result set.

00:32:46.370 --> 00:32:54.370
Okay, so you can use labels in a selector query like this.
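
A few of those selector forms, as a sketch:

```shell
kubectl get pods --selector environment=dev     # equality
kubectl get pods -l 'environment!=dev'          # inequality
kubectl get pods -l 'environment in (dev,qa)'   # set-based
kubectl get pods -l environment                 # key exists at all
```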

00:32:58.770 --> 00:33:07.030
So if you remember, I mentioned this replica set's duty — that CS-equals-DS job.

00:33:07.030 --> 00:33:12.130
And I also mentioned it's running an infinite reconciliation

00:33:13.170 --> 00:33:22.070
loop. What it actually does is execute this command: kubectl get pods

00:33:22.970 --> 00:33:27.550
with the --selector flag and then these two labels.

00:33:27.550 --> 00:33:32.450
This is the exact query that the replica set executes.

00:33:32.450 --> 00:33:39.430
So when it executes this command, it simply looks at the count: how many results were returned by this query? Three.

00:33:39.430 --> 00:33:45.850
What is the desired state? Four. Mismatch, so create a new pod.

00:33:46.390 --> 00:33:51.890
Assign these two labels on it, and the next moment when it runs, four will match: four equals four.

00:33:51.890 --> 00:33:54.930
Okay, so this is the exact query that the replica set executes.

00:33:55.370 --> 00:34:01.210
So this overall hierarchy is completely loosely coupled.

00:34:01.310 --> 00:34:05.670
It's all tied together only by the labels, nothing more than that.
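That loosely coupled reconciliation can be sketched in a few lines of Python. This is a toy model of the counting logic only; the `reconcile` function, pod names, and labels are invented for illustration, not the real controller code:

```python
# Toy reconciliation loop: count pods matching the selector (current state)
# and create pods until the count equals the desired replica count.
def reconcile(pods, selector, desired):
    matching = [name for name, labels in pods.items()
                if all(labels.get(k) == v for k, v in selector.items())]
    n = 0
    while len(matching) < desired:
        # Mismatch: spin up a new pod and stamp the selector labels on it,
        # so the next pass of the loop counts it too.
        name = f"my-deploy-new-{n}"
        pods[name] = dict(selector)
        matching.append(name)
        n += 1
    return pods

selector = {"app": "my-deploy", "pod-template-hash": "abc123"}
pods = {f"my-deploy-{i}": dict(selector) for i in range(3)}  # current state: 3

reconcile(pods, selector, desired=4)
print(len(pods))  # 4 -- one new pod was created to close the gap
```

Notice that the loop never asks who created a pod; the selector match is the only ownership test, which is what makes adoption and quarantining possible.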

00:34:05.670 --> 00:34:14.090
Okay, so now to the scenario that someone has mentioned, right? There's the question related to quarantining and adopting a pod.

00:34:15.030 --> 00:34:21.870
So let's say I'm going to do kubectl get pods.

00:34:22.930 --> 00:34:35.570
Okay, so here you can see we have four pods that have these labels: app equals my-deploy and then the pod-template-hash, blah blah blah.

00:34:35.570 --> 00:34:42.810
Right. So now I'm going to apply the same file, kubectl apply -f.

00:34:47.030 --> 00:34:52.510
I'm going to create the deployment again, and if you do kubectl get deploy.

00:34:54.770 --> 00:34:57.490
It has the five.

00:34:59.470 --> 00:35:01.790
kubectl get pods.

00:35:03.970 --> 00:35:13.030
So if you look at here: it was four, and now it's five.

00:35:13.050 --> 00:35:22.990
So basically this is the only pod that is newly created; the remaining others simply got adopted by the replica set,

00:35:23.790 --> 00:35:29.630
because when it tried to perform the reconciliation loop for the first time, looking for a specific label,

00:35:30.910 --> 00:35:37.230
it already got four entries matching.

00:35:37.810 --> 00:35:42.430
So it simply adopted those pods and spun up just one new pod.

00:35:44.250 --> 00:35:50.110
Okay, so this is that adopting scenario. Let me add --show-labels.

00:35:52.510 --> 00:35:58.330
You can see the replica set it created got the same ID.

00:35:58.330 --> 00:36:04.530
If it had created a different replica set with a different ID, then

00:36:05.170 --> 00:36:06.790
new pods would have been created.

00:36:08.570 --> 00:36:18.550
Okay, so in general, the etcd database stores the previous replica set; since that ID is available, it simply reuses the same replica set, right? Fine.
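That reuse comes from the pod-template-hash label you saw earlier: the Deployment derives the replica set's name from a hash of the pod template, so an unchanged template maps back to the same replica set. A rough illustration of the idea (SHA-256 is used here just for the sketch; the real controller uses a different hash of the serialized template):

```python
import hashlib
import json

def template_hash(pod_template: dict) -> str:
    # Deterministic digest of the pod template; a stand-in for the
    # pod-template-hash that Kubernetes computes, not the real algorithm.
    blob = json.dumps(pod_template, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:10]

template = {"containers": [{"name": "web", "image": "nginx:1.25"}]}

# Re-applying the same manifest yields the same hash, so the Deployment
# finds an existing replica set under that name and reuses it.
print("my-deploy-" + template_hash(template))
print(template_hash(template) == template_hash(dict(template)))  # True

# Changing the template (e.g. a new image) changes the hash, which is what
# makes the Deployment create a brand-new replica set instead.
changed = {"containers": [{"name": "web", "image": "nginx:1.26"}]}
print(template_hash(template) == template_hash(changed))         # False
```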

00:36:18.690 --> 00:36:21.650
So now let's look at the scenario of quarantining.

00:36:21.650 --> 00:36:23.310
Let's say I'm going to delete.

00:36:24.330 --> 00:36:31.650
the label from this pod, kb5gr, right? And let's see how we are going to quarantine this pod.

00:36:34.170 --> 00:36:37.650
kubectl label pod.

00:36:40.290 --> 00:36:41.450
Name of the pod.

00:36:44.610 --> 00:36:46.050
The label key with a minus at the end.

00:36:49.590 --> 00:36:51.830
kubectl label pod.

00:36:52.250 --> 00:36:54.050
Okay, I missed the r at the end.

00:36:57.550 --> 00:37:00.950
And if I do kubectl get pods again.

00:37:03.250 --> 00:37:06.090
See how many are there now: one is newly created.

00:37:06.210 --> 00:37:13.130
Earlier it was five, now I have six, and from this pod I removed that app equals my-deploy label.

00:37:16.110 --> 00:37:17.390
Which means.

00:37:18.670 --> 00:37:35.510
After I removed that label, the replica set did the reconciliation loop logic: it ran that kubectl get pods --selector with those two labels, and only four got matched, because this pod doesn't have that label anymore.

00:37:35.510 --> 00:37:46.030
But the desired is five, so it spun up one pod, this one, to make it five. Which means this one is now an orphan.

00:37:47.410 --> 00:37:49.170
If you delete this, nothing will happen.

00:37:49.970 --> 00:37:51.990
Which means we quarantined a pod.

00:37:53.550 --> 00:37:57.090
If I put the label back then it will get adopted.

00:37:58.730 --> 00:38:03.970
Let's say I'm going to put the label back on the same pod: app equals

00:38:04.370 --> 00:38:05.090
my-deploy.

00:38:13.630 --> 00:38:14.270
Okay.

00:38:15.850 --> 00:38:16.510
Get pods.

00:38:17.530 --> 00:38:21.290
So you can see here now, of the six,

00:38:23.010 --> 00:38:25.630
six match the label, which is one extra.

00:38:25.650 --> 00:38:27.890
So it randomly picked one and deleted it.

00:38:27.890 --> 00:38:30.290
So now the count becomes five.
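The whole quarantine-and-adopt cycle can be modeled the same way. Again a toy sketch: only the counting logic mirrors what the controller does, and every name here is made up:

```python
# Toy model: unlabeling a pod orphans it (quarantine) and the reconciler
# creates a replacement; relabeling it creates a surplus, so one pod is deleted.
def reconcile(pods, selector, desired):
    matching = [name for name, labels in pods.items()
                if all(labels.get(k) == v for k, v in selector.items())]
    while len(matching) < desired:          # too few: create a pod
        name = f"replacement-{len(pods)}"
        pods[name] = dict(selector)
        matching.append(name)
    while len(matching) > desired:          # too many: delete one
        victim = matching.pop()             # the real controller picks by policy
        del pods[victim]
    return pods

selector = {"app": "my-deploy"}
pods = {f"pod-{i}": dict(selector) for i in range(5)}

del pods["pod-0"]["app"]                    # quarantine: remove the label
reconcile(pods, selector, desired=5)
print(len(pods))                            # 6: five matching plus one orphan

pods["pod-0"]["app"] = "my-deploy"          # adopt: put the label back
reconcile(pods, selector, desired=5)
print(len(pods))                            # 5: the surplus pod was deleted
```

The orphaned pod keeps running untouched, which is exactly what makes quarantining useful for debugging.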

00:38:32.590 --> 00:38:36.290
Okay, the reason why we are explaining this.

00:38:38.110 --> 00:38:46.190
It's because now you are clear on how this hierarchy is constructed and how it works purely by labels.

00:38:46.830 --> 00:38:55.830
What it takes to assign and unassign a label, and how to use labels in the selector query.

00:38:57.490 --> 00:39:02.770
And, with the help of labels, how to quarantine and adopt a pod.

00:39:04.450 --> 00:39:12.490
Okay, it's looking only at the labels; it's not even checking whether the pod it is going to adopt belongs to the same family or not.

00:39:12.490 --> 00:39:17.550
It's not even looking at that; as long as the label matches, it just adopts or quarantines.

00:39:19.550 --> 00:39:20.270
Okay.

00:39:20.510 --> 00:39:24.910
So with this go ahead.

00:39:24.990 --> 00:39:26.330
Kube-controller.

00:39:26.790 --> 00:39:27.230
Good question.

00:39:27.510 --> 00:39:28.230
Right.

00:39:28.390 --> 00:39:29.430
Kube-controller.

00:39:30.150 --> 00:39:30.770
Kube-controller.

00:39:30.950 --> 00:39:32.890
So actually we created a pod.

00:39:33.050 --> 00:39:35.690
We created a replica set and a deployment.

00:39:36.190 --> 00:39:39.490
We are going to work with DaemonSet, StatefulSet, Job,

00:39:40.430 --> 00:39:41.250
CronJob.

00:39:41.250 --> 00:39:49.570
So there are dozens of core Kubernetes resources, and there exists a controller for every resource.

00:39:49.950 --> 00:39:55.270
All of these controllers are rolled up into one single binary called kube-controller-manager.

00:39:56.230 --> 00:40:04.670
So every resource works the way we are discussing, because a controller is taking care of that logic for it.

00:40:04.670 --> 00:40:07.250
For the replica set, we have the replica set controller performing that logic.

00:40:10.330 --> 00:40:10.770
Correct.

00:40:11.530 --> 00:40:13.450
Correct, the kube-controller-manager will work with the API server.

00:40:15.070 --> 00:40:22.030
And then it goes to the scheduler; the scheduler will simply assign a node and send it to the kubelet.

00:40:22.650 --> 00:40:23.050
That's it.

00:40:23.050 --> 00:40:26.770
If you scale up or scale down, the kubelet is the one performing the end action.

00:40:26.950 --> 00:40:27.390
Okay.

00:40:27.450 --> 00:40:32.570
So with this, I'm going to give a quick pause here for you to try these commands.

00:40:32.750 --> 00:40:32.970
Yep.

00:40:32.970 --> 00:40:33.730
Go ahead please.

00:40:51.890 --> 00:40:52.450
Okay.

00:40:52.710 --> 00:40:54.410
That's zero downtime deployment.

00:40:54.410 --> 00:40:57.450
It works in a different way that we will explore later.

00:40:57.450 --> 00:41:01.830
The reason why I explained unlabeling here is for better understanding.

00:41:01.990 --> 00:41:04.310
Maybe the use case you can think of.

00:41:04.970 --> 00:41:18.630
You are seeing that there is something going wrong with this specific pod, you suspect that it may have been compromised, and you want to immediately unlabel it and then disconnect it from the network.

00:41:19.270 --> 00:41:24.850
So that you can troubleshoot or analyze that specific container separately something like that.

00:41:24.970 --> 00:41:26.070
That's a rare case.

00:41:26.270 --> 00:41:26.670
Right.

00:41:27.090 --> 00:41:33.050
But for the zero downtime deployment it's going to take a different approach that we will discuss when we talk about deployment resource.

00:41:43.630 --> 00:41:44.690
You are right.

00:41:44.910 --> 00:41:45.450
You are right.

00:41:45.550 --> 00:41:50.150
But we are on Minikube, where you have cluster administrative access, so you are able to do it.

00:41:50.290 --> 00:41:53.690
But at your cluster level, you can put some access control in place.

00:41:54.090 --> 00:41:54.290
You won't.

00:41:54.450 --> 00:41:57.690
Nobody will be able to do it or something like that.

00:41:58.190 --> 00:42:03.070
And yeah, so you can have more control over the labels.

00:42:04.170 --> 00:42:05.090
But yes you are right.

00:42:05.330 --> 00:42:12.130
If different teams are going to work in the same namespace and they are sharing the same labels, yes, things will get messed up.

00:42:12.730 --> 00:42:18.970
So it's important to have a unique label for your applications as per your organization structure.

00:42:18.970 --> 00:42:36.650
Let's say you are under a specific tribe, a specific squad, a specific team; then have the labels reflect your organization structure so that you can easily manage it all.
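For example (all values made up for illustration), such an organization-aware label set on a pod template might look like:

```yaml
metadata:
  labels:
    app: payments-api        # unique per application
    team: payments-squad     # who owns it
    tribe: commerce          # where the team sits in the org
    environment: dev         # which stage it runs in
```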

00:42:40.450 --> 00:42:44.650
Okay, I'm going to paste these commands in the Etherpad.

00:42:49.890 --> 00:42:52.030
Creating a deployment.

00:42:55.390 --> 00:42:57.670
Oh my God what happened to this page.

00:42:59.170 --> 00:43:00.390
Okay still there.

00:43:01.250 --> 00:43:06.630
Okay, here and there I made some typos, so please watch out for those while you are trying these commands.

00:43:06.630 --> 00:43:16.870
But in a nutshell, what you are going to try now in this hands-on time: you are going to create a deployment from a file, and you are going to understand that hierarchy.

00:43:17.030 --> 00:43:19.630
How the deployment creates the replica set and the replica set creates the pods, correct?

00:43:21.030 --> 00:43:24.190
And then you are going to try the scale command.

00:43:25.770 --> 00:43:29.870
The delete commands too, and then you are going to learn.

00:43:30.410 --> 00:43:30.970
Labels.

00:43:31.610 --> 00:43:38.310
What it takes to assign or unassign a label, and using the labels in the selector query.

00:43:39.050 --> 00:43:44.790
And if you are interested, you can try quarantining and adopting pods by unlabeling and labeling.

00:43:45.770 --> 00:43:49.330
Okay: replica set, labels, label selector.

00:43:50.090 --> 00:43:52.810
Those are the three new things that we learned now.

00:43:52.890 --> 00:43:57.930
Let's take some 10 to 15 minutes to complete this hands-on.