8 videos 📅 2022-10-26 09:00:00 Africa/Blantyre

United Arab Emirates - Certified Kubernetes Administrator (CKA) - exam preparation

WEBVTT

00:00:16.180 --> 00:00:20.960
Hello, welcome back.

00:00:24.320 --> 00:00:27.020
So let's wait for a few more minutes.

00:00:54.720 --> 00:00:59.960
Alright, I think we got almost everyone. We can start.

00:01:02.040 --> 00:01:07.820
So the last topic that we discussed before our lunch break was ConfigMaps and Secrets.

00:01:08.240 --> 00:01:15.780
What it takes to inject a ConfigMap as environment variables or as files, those things we discussed in detail.

00:01:15.780 --> 00:01:20.280
Correct. So with that, that topic comes to an end.

00:01:21.460 --> 00:01:24.280
Now we are going to enter into a new topic.

00:01:25.380 --> 00:01:29.020
Here in this topic, we are going to discuss about three resources.

00:01:30.180 --> 00:01:36.900
Persistent volume, persistent volume claim, and storage class, or SC in short.

00:01:37.460 --> 00:01:44.180
PV, PVC and SC. These are the three resources we are going to discuss, and then we are going to do a quick hands-on on them.

00:01:44.180 --> 00:01:47.960
OK, so if you still remember.

00:01:49.560 --> 00:01:53.160
To define a volume in a pod specification.

00:01:54.820 --> 00:01:56.100
What do we do?

00:01:56.660 --> 00:01:58.580
It's a two-step process, right? One.

00:01:59.620 --> 00:02:00.940
We define a volume.

00:02:02.740 --> 00:02:07.260
We give something like name of your volume and then the storage provider.

00:02:07.380 --> 00:02:10.480
Let's say if it is a host path or NFS.

00:02:10.480 --> 00:02:14.480
And then the specification related to that, right?

00:02:14.560 --> 00:02:18.700
So this is about creating the volume and within the container specification.

00:02:19.000 --> 00:02:24.760
We will have something called volumeMounts, and here we specify the name of the volume.

00:02:26.100 --> 00:02:28.560
And then the path within the container.

00:02:30.380 --> 00:02:34.480
Where you want to mount it, that we give as the mountPath, isn't it?

00:02:35.300 --> 00:02:38.040
So this is step number one and this is step number two.

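The two steps just described can be sketched as a minimal pod manifest; the names and the NFS server details below are placeholders, not anything from the session:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:           # step two: mount the volume into the container
        - name: my-volume     # must match the volume name defined below
          mountPath: /data    # path inside the container
  volumes:                    # step one: define the volume in the pod spec
    - name: my-volume
      nfs:                    # storage provider; could be hostPath, NFS, etc.
        server: 10.0.0.5      # hypothetical NFS server
        path: /exports        # hypothetical exported folder
```

The volumes entry and the volumeMounts entry are linked only by the volume name.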
00:02:39.040 --> 00:02:41.500
So if you look at here.

00:02:41.800 --> 00:02:43.500
Defining the volume.

00:02:46.100 --> 00:02:49.820
Or creating the volume, is done in the pod specification.

00:02:49.880 --> 00:02:51.440
This is in pod specification.

00:02:51.860 --> 00:02:54.540
Wherein we put that information.

00:02:55.040 --> 00:02:57.720
If tomorrow you want to change to a different provider.

00:02:58.540 --> 00:03:04.280
From NFS you want to change to host path, or to an Azure disk, or to a GCE persistent disk.

00:03:05.360 --> 00:03:07.280
Then you need to keep changing here.

00:03:07.280 --> 00:03:11.140
Which means even though nothing get affected in the pod specification.

00:03:11.720 --> 00:03:16.340
You need to deploy this YAML again and again for this to take effect, right?

00:03:16.460 --> 00:03:22.060
So which means pod specification and then the volume specification.

00:03:24.340 --> 00:03:30.020
are tightly coupled. A change in one requires redeploying both.

00:03:31.060 --> 00:03:36.200
And also there is no way we can directly manage this mounted volumes.

00:03:36.200 --> 00:03:40.040
There is no way I can run a get volumes and see these volumes, right?

00:03:40.180 --> 00:03:45.560
So to make it loosely coupled and to better manage the volumes.

00:03:46.200 --> 00:03:52.860
Kubernetes came up with two new resources called persistent volume and persistent volume claim.

00:03:54.280 --> 00:03:56.480
So the idea here is.

00:03:56.880 --> 00:04:00.500
Okay, let's not have this definition here.

00:04:01.440 --> 00:04:03.740
The rest still stays the same.

00:04:03.740 --> 00:04:06.740
Remove the volume specification from the.

00:04:08.420 --> 00:04:09.320
Pod specification.

00:04:10.220 --> 00:04:12.940
Okay, and then for the volumes.

00:04:14.840 --> 00:04:16.400
Let's create a YAML file.

00:04:16.620 --> 00:04:23.100
Let the administrator create one YAML file with kind as PersistentVolume.

00:04:24.040 --> 00:04:29.740
And here it's going to have the specification and here it's going to tell it's an NFS.

00:04:29.790 --> 00:04:35.830
And these are the server details and the exported folder within it.

00:04:36.010 --> 00:04:38.870
Okay, so your system administrator.

00:04:40.370 --> 00:04:47.230
Will create a persistent volume YAML file like this and he will submit it to the cluster.

00:04:48.350 --> 00:04:57.650
So after submitting, if he does a kubectl get pv, he can actually see the volume created, a volume with the name.

00:05:00.450 --> 00:05:01.570
Let's say.

00:05:03.990 --> 00:05:09.110
my-pv or anything, the name doesn't matter here. So a volume is created here.

00:05:10.790 --> 00:05:20.810
So if you submit it, a volume will be created, and in the result you can see that the volume is unbound, which means a volume is created but nobody is using it yet.

00:05:22.150 --> 00:05:26.310
It's not bound to any claim yet; it's unbound, it will be in an unbound state.

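A PV manifest along these lines, as the administrator might write it, could look like the sketch below; the server address and export path are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5   # hypothetical NFS server details
    path: /exports     # exported folder on that server
```

After submitting it, kubectl get pv would list my-pv with an Available (unbound) status.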
00:05:28.230 --> 00:05:34.650
Okay, so what your system administrator will do is similarly he will create multiple PVs.

00:05:35.950 --> 00:05:41.650
Maybe one with Azure disk, this one with GCE, this one with host path; he will create multiple.

00:05:43.330 --> 00:05:48.590
PVs so which means at the end what they have is a pool of persistent volumes.

00:05:49.910 --> 00:06:00.810
He basically statically provisions the volumes, and now in the Kubernetes cluster he has a pool of

00:06:01.310 --> 00:06:08.390
PVs. A few are allocated, a few are not allocated to anything; you have those PVs.

00:06:08.910 --> 00:06:16.230
Now the next step is let's say your application here it needs a volume.

00:06:18.830 --> 00:06:25.590
And you can have some specification. Let's say you want a one TB pen drive; what will you do? You will go to a

00:06:27.370 --> 00:06:36.270
shop and you will ask the sales personnel: hey, I want a one TB Kingston pen drive.

00:06:36.550 --> 00:06:47.270
Okay, so you are asking the shop salesperson, and he is going to check in his racks, and if one is found he is going to give it to you.

00:06:47.970 --> 00:06:50.770
So which means you are making a claim.

00:06:51.990 --> 00:06:53.370
And the sales person.

00:06:54.790 --> 00:06:58.350
Is honoring your claim if a match is found.

00:06:59.430 --> 00:07:07.510
in his warehouse. So similarly, your pod is going to ask for what it needs via a persistent volume claim.

00:07:09.010 --> 00:07:18.510
A PVC, let's say my-pvc. Which means for the PVC, as the application team, they will submit the PVC, a pvc.yaml.

00:07:20.610 --> 00:07:27.950
Kind as PersistentVolumeClaim, and under the specification you will ask: hey, I want

00:07:28.250 --> 00:07:34.670
some 10 GB in a network file share. Where this network file share is located is not specified here in the PVC.

00:07:34.670 --> 00:07:38.110
We are just asking: I want an NFS, 10 GB, that's it.

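Such a claim could be sketched like this; note it only states the size and access mode, never where the storage actually lives:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi   # only the requirement; the backing server is not named here
```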
00:07:39.130 --> 00:07:49.490
So if you submit this PVC to the cluster, what Kubernetes will do is try to find a match from the volume pool.

00:07:50.110 --> 00:07:57.970
Out of the provisioned volumes that are there in the volume pool, is there anything that is NFS and has 10 GB?

00:07:59.170 --> 00:08:03.230
If one is found, then that PV will be bound to this PVC.

00:08:04.330 --> 00:08:09.410
It's just a one-to-one mapping: one PVC to one PV, that's it.

00:08:10.430 --> 00:08:13.770
If no match is found then your PVC will be in pending state.

00:08:14.210 --> 00:08:18.670
Maybe in the future, if a matching PV is created, then it will get bound to it.

00:08:20.410 --> 00:08:24.730
So step one is your administrator should provision the volumes.

00:08:24.990 --> 00:08:29.530
Step two is you need to submit the volume claim, the PVC.

00:08:29.530 --> 00:08:31.790
And if these two will be bound.

00:08:33.230 --> 00:08:38.750
And this is the PVC you are referring to in your pod specification.

00:08:42.150 --> 00:08:47.330
Okay, which means before applying the pod specification you must apply your PVC.

00:08:48.710 --> 00:08:53.870
And it should have a volume bound to it; only then will your pod start up, because

00:08:54.830 --> 00:08:59.330
the bound volume should get mounted to this data directory.

00:09:00.510 --> 00:09:05.710
So this way you can manage the PVCs: kubectl get pvc.

00:09:06.630 --> 00:09:11.410
You can see all the PVCs and their statuses; kubectl get pv and you can manage the PV statuses.

00:09:11.570 --> 00:09:13.410
And here you are referring only the PVC.

00:09:13.890 --> 00:09:19.570
So this is why they put this much effort: to make these things loosely coupled and to better manage the volumes.

00:09:22.390 --> 00:09:23.070
Okay.

00:09:24.110 --> 00:09:26.130
So that's what this diagram also says.

00:09:27.350 --> 00:09:34.070
So the storage administrator can provision using any of the storage types.

00:09:34.530 --> 00:09:41.010
Like GCE persistent disk, AWS Elastic Block Store, Azure File, NFS, CSI, it can be anything.

00:09:42.610 --> 00:09:49.010
So the flow is administrator provisions the volume maintains a persistent volume pool.

00:09:49.710 --> 00:09:55.730
And your application for your application you are going to submit a PVC file that is the claim.

00:09:56.750 --> 00:10:00.010
And the claim will be honored by the cluster.

00:10:00.750 --> 00:10:03.810
Honored means your PV will get bound to the PVC.

00:10:05.690 --> 00:10:09.470
And then you will mount the PVC to the container.

00:10:09.890 --> 00:10:12.750
Okay. So here in this flow.

00:10:15.570 --> 00:10:18.670
The very first step is provisioning the volume.

00:10:19.970 --> 00:10:26.570
Is there a way, instead of manually provisioning, to configure dynamic provisioning?

00:10:27.950 --> 00:10:38.910
So which means if a PVC is submitted asking for Azure disk 10 GB.

00:10:40.610 --> 00:10:43.510
It just submits the PVC. No PV exists let's say.

00:10:44.150 --> 00:10:51.410
just in time, create a 10 GB volume in an Azure disk and then bind that volume to the PVC.

00:10:52.910 --> 00:11:01.030
If he's asking for NFS 20 GB, just in time create and provision it and then bind it to this PVC.

00:11:01.070 --> 00:11:03.630
That is dynamic volume provisioning.

00:11:03.750 --> 00:11:08.230
We can achieve dynamic volume provisioning with the help of storage class.

00:11:09.870 --> 00:11:12.270
Okay. With the help of storage class we can achieve it.

00:11:12.270 --> 00:11:14.670
So before we see how storage class works.

00:11:15.150 --> 00:11:16.270
Let's quickly do an exercise.

00:11:17.350 --> 00:11:18.890
Hands on on PV and PVC.

00:11:19.450 --> 00:11:22.490
Once we are clear then we can quickly look into the storage class.

00:11:23.390 --> 00:11:26.050
Okay. So if I go back to my machine.

00:11:28.430 --> 00:11:30.670
And I'm going to first create the PV.

00:11:31.110 --> 00:11:36.050
If you still remember we are going to create a PV as the first step.

00:11:36.730 --> 00:11:38.270
That is my PV file.

00:11:39.640 --> 00:11:42.640
Okay. Here it is 8.4 NFS volume.

00:11:43.060 --> 00:11:45.940
This is the one. Kind as PersistentVolume.

00:11:46.580 --> 00:11:47.940
The name of the PV.

00:11:48.360 --> 00:11:50.520
I'm going to call this as my PV.

00:11:51.260 --> 00:11:53.860
This is the base and then you can alter it.

00:11:54.300 --> 00:11:56.840
And if you want label you can put it here.

00:11:58.340 --> 00:12:02.520
And you're creating 1 GB in a network file share.

00:12:03.100 --> 00:12:06.240
But I don't have this network file share right now.

00:12:06.240 --> 00:12:08.100
So it's better.

00:12:08.360 --> 00:12:11.360
I can give some kind of host path.

00:12:11.540 --> 00:12:13.120
Which will always be available.

00:12:13.780 --> 00:12:15.120
Or in the host machine.

00:12:16.340 --> 00:12:17.520
It can be any type.

00:12:18.700 --> 00:12:20.440
I'm going to go with the host path.

00:12:23.460 --> 00:12:27.200
I should be having it somewhere here so that I can copy paste.

00:12:29.240 --> 00:12:30.040
This one.

00:12:48.700 --> 00:12:49.340
Okay.

00:12:50.400 --> 00:12:53.240
Basically I want to create a 1 GB in a host machine.

00:12:54.220 --> 00:12:55.820
A path that I've got.

00:12:57.060 --> 00:13:00.080
And that's going to be called as my PV.

00:13:01.460 --> 00:13:06.220
Okay. There are a couple of other mandatory parameters that you need to provide.

00:13:06.240 --> 00:13:08.020
While creating the PV.

00:13:08.120 --> 00:13:10.240
Number one. Access mode.

00:13:12.040 --> 00:13:13.240
Access mode. That is important.

00:13:14.540 --> 00:13:15.360
There are many.

00:13:15.880 --> 00:13:17.020
Three access modes.

00:13:17.200 --> 00:13:17.840
Many.

00:13:19.980 --> 00:13:20.940
ReadWriteOnce.

00:13:22.540 --> 00:13:24.500
That is the access mode.

00:13:26.740 --> 00:13:27.240
It is.

00:13:28.100 --> 00:13:29.140
ReadWriteOnce.

00:13:29.400 --> 00:13:31.860
ReadOnlyMany. ReadWriteMany.

00:13:32.040 --> 00:13:33.240
ReadWriteOncePod.

00:13:34.340 --> 00:13:36.500
Which means you are mounting it to a pod, right?

00:13:36.980 --> 00:13:40.460
Do you want multiple pods to access it?

00:13:41.680 --> 00:13:44.540
Or read-only, or read-write, essentially.

00:13:45.200 --> 00:13:47.900
So those things you can specify with the access mode.

00:13:47.980 --> 00:13:50.520
So that is important. You need to provide the access mode.

00:13:51.440 --> 00:13:53.040
In both PV and PVC.

00:13:53.360 --> 00:13:55.540
Okay. So I'm going back here.

00:13:56.840 --> 00:13:59.000
And I'm going to add access mode.

00:14:03.880 --> 00:14:07.240
It should come under the spec section.

00:14:14.180 --> 00:14:16.800
Access mode.

00:14:22.160 --> 00:14:23.560
One.

00:14:27.660 --> 00:14:29.180
ReadWriteMany.

00:14:36.280 --> 00:14:38.640
This is an important one.

00:14:39.140 --> 00:14:40.940
Another important parameter is.

00:14:41.240 --> 00:14:44.780
Because we are going to use static provisioning.

00:14:45.820 --> 00:14:49.780
Here we are creating the persistent volume by hand. Right.

00:14:49.780 --> 00:14:51.780
No, we are not using dynamic volume provisioning.

00:14:51.860 --> 00:14:54.980
So if you're using static volume provisioning.

00:14:55.900 --> 00:14:58.320
And you don't want the dynamic volume.

00:14:58.520 --> 00:14:59.160
Provisioner.

00:14:59.160 --> 00:15:00.160
To provision the volume.

00:15:01.420 --> 00:15:04.260
It's better to provide the storage class name as an empty string.

00:15:04.600 --> 00:15:06.400
We will discuss about that in a moment.

00:15:06.720 --> 00:15:09.540
But for static provisioning volumes you can set it to empty.

00:15:09.680 --> 00:15:11.760
Because if you're not setting it.

00:15:12.220 --> 00:15:14.460
Then there is a default storage class.

00:15:14.860 --> 00:15:17.600
That will take effect and it will immediately provision the volume.

00:15:18.640 --> 00:15:20.000
Okay. In the PVC as well.

00:15:21.280 --> 00:15:22.540
You can keep it empty.

00:15:22.880 --> 00:15:25.140
Let's say I'm going to call this as an example.

00:15:25.540 --> 00:15:27.080
Keep it minimal for now.

00:15:27.940 --> 00:15:31.740
I think this is a minimalistic configuration of a PV.

00:15:33.320 --> 00:15:35.340
1GB in a host path. Blah blah blah. Right.

00:15:35.480 --> 00:15:37.080
Let's apply this file 8.4.

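Putting the pieces together, the minimal PV being assembled in the demo would look roughly like this sketch; the host directory is a placeholder:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # empty string, so the default provisioner stays out of it
  hostPath:
    path: /data/my-pv     # hypothetical directory on the host machine
```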
00:15:39.220 --> 00:15:41.680
Kubectl get PV.

00:15:42.380 --> 00:15:43.200
Are PVs scoped?

00:15:43.300 --> 00:15:45.500
As you can see PVs are cluster scoped.

00:15:46.000 --> 00:15:48.740
You can see there is no namespace information printed here.

00:15:48.820 --> 00:15:50.220
Because it is cluster scoped.

00:15:50.440 --> 00:15:52.640
PVC is namespace scoped.

00:15:52.900 --> 00:15:54.380
PV is cluster scoped.

00:15:56.360 --> 00:16:08.500
Kubectl apply -f on the 8.4 file.

00:16:10.040 --> 00:16:11.420
Storage class name.

00:16:11.440 --> 00:16:14.560
I called it as a storage name class. My bad.

00:16:18.120 --> 00:16:21.240
Oh, I wrote it as storage name class?

00:16:22.340 --> 00:16:23.200
Storage class name.

00:16:25.760 --> 00:16:29.340
It's a split-second delay while I'm typing it.

00:16:29.660 --> 00:16:30.720
From here to there.

00:16:31.000 --> 00:16:32.560
Just bear with that.

00:16:32.820 --> 00:16:35.100
Going back to the terminal here.

00:16:38.640 --> 00:16:40.660
Okay. It says PV created.

00:16:40.840 --> 00:16:42.620
Kubectl get PV.

00:16:44.340 --> 00:16:45.680
You can see PV created.

00:16:46.280 --> 00:16:47.280
Access mode.

00:16:47.840 --> 00:16:51.560
Reclaim policy. There are a couple of reclaim policies that we will discuss later.

00:16:52.120 --> 00:16:53.060
Status is available.

00:16:53.060 --> 00:16:56.640
Which means it's not yet bound to any PVC. It's available.

00:16:57.200 --> 00:17:00.060
Claim. No claims are bound to it. So it is empty.

00:17:00.660 --> 00:17:02.100
And the storage class name is printed.

00:17:02.420 --> 00:17:05.700
Okay. So we successfully created a PV.

00:17:05.920 --> 00:17:07.180
Step 1 completed.

00:17:07.560 --> 00:17:10.060
Step 2. For one of the application.

00:17:13.780 --> 00:17:16.320
Let's say we are going to run a MySQL database.

00:17:17.780 --> 00:17:22.360
And MySQL database needs a volume so that it can persist the data.

00:17:23.060 --> 00:17:25.060
Let's say it stores all the data in /var/lib/mysql.

00:17:25.960 --> 00:17:27.060
We want to persist this.

00:17:27.980 --> 00:17:30.740
So for that it's going to need a volume.

00:17:30.800 --> 00:17:33.700
So it creates a PVC.

00:17:33.800 --> 00:17:34.960
It's going to create a PVC.

00:17:35.560 --> 00:17:37.060
The name of the PVC is.

00:17:37.620 --> 00:17:38.600
Let's say database.

00:17:39.880 --> 00:17:43.480
Okay. So which means we first need to create the PVC.

00:17:43.920 --> 00:17:45.060
Let me take the file.

00:17:45.140 --> 00:17:47.520
Open it. So this is the PVC.

00:17:47.960 --> 00:17:50.060
I'm going to call this as my PVC.

00:17:51.660 --> 00:17:54.320
And as you see, as you can see here.

00:17:55.060 --> 00:17:56.740
We are specifying only.

00:17:58.340 --> 00:18:01.420
We need one GB. That's not more than that.

00:18:01.460 --> 00:18:03.900
In fact, you can specify in which type you want.

00:18:04.000 --> 00:18:06.220
Any specific type you can specify.

00:18:07.160 --> 00:18:10.040
And we are also specifying the label selector here.

00:18:10.540 --> 00:18:14.640
Which means in your PV pool you may have hundreds of PVs.

00:18:16.240 --> 00:18:18.960
Look for PV with this label.

00:18:18.960 --> 00:18:21.640
Maybe out of 100, maybe three will filter.

00:18:21.720 --> 00:18:24.200
Out of three it will try to allocate one for you.

00:18:24.400 --> 00:18:26.260
So you can also specify label selector.

00:18:28.220 --> 00:18:28.580
Okay.

00:18:28.920 --> 00:18:32.580
And then again I'm going to specify the access mode here.

00:18:33.240 --> 00:18:37.320
You need to specify the same access mode so that the PV will be matched to this.

00:18:38.860 --> 00:18:42.100
I'm going to give this these two properties.

00:18:43.220 --> 00:18:44.920
In my PVC as well.

00:18:45.760 --> 00:18:49.920
So that this PVC will match to the PV.

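The matching claim being edited here might look like this sketch; the label key and value are assumptions, they just have to match whatever label was put on the PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany     # same access mode as the PV
  storageClassName: ""  # same empty storage class as the PV
  selector:
    matchLabels:
      app: my-pv        # hypothetical label set on the PV
  resources:
    requests:
      storage: 1Gi
```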
00:18:55.380 --> 00:18:56.600
All right.

00:18:56.720 --> 00:19:04.240
So with this change I'm going to apply this YAML file.

00:19:08.720 --> 00:19:10.240
Get PVC.

00:19:11.280 --> 00:19:13.580
Then give it here.

00:19:14.140 --> 00:19:14.200
Apply.

00:19:18.300 --> 00:19:19.240
Volume claim.

00:19:19.940 --> 00:19:21.280
If you see created.

00:19:21.980 --> 00:19:22.720
Give it here.

00:19:22.900 --> 00:19:23.840
Get PVC.

00:19:26.740 --> 00:19:29.300
Look at this, it already says it's in Bound state.

00:19:29.460 --> 00:19:32.720
We created and already the existing PV.

00:19:32.720 --> 00:19:33.860
Bound to this.

00:19:34.040 --> 00:19:36.080
You can get PV and you can verify that.

00:19:36.380 --> 00:19:38.120
Look at it.

00:19:38.380 --> 00:19:40.600
my-pv is bound to my-pvc.

00:19:41.460 --> 00:19:43.540
This PV is bound to this PVC.

00:19:46.220 --> 00:19:47.720
Which means all good.

00:19:49.360 --> 00:19:50.580
If you go now.

00:19:51.060 --> 00:19:54.020
And if you submit this MySQL YAML file.

00:19:55.860 --> 00:19:57.660
The name of the PVC is my PVC.

00:19:59.080 --> 00:20:06.440
Then your MySQL server will work perfectly fine and all the data that is in here will go to the

00:20:07.480 --> 00:20:08.120
Host.

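Referring the claim from the MySQL pod could be sketched like this; the image tag and the plain-text password are simplifications for the example, a real setup would use a Secret as discussed in the previous topic:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:8
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: example              # illustration only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # MySQL's data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc             # the bound PVC created earlier
```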
00:20:09.940 --> 00:20:10.580
Okay.

00:20:15.020 --> 00:20:18.020
So what is more important here is.

00:20:18.160 --> 00:20:23.380
Understanding what a PV is, what a PVC is, and static provisioning.

00:20:23.600 --> 00:20:24.920
From that access mode.

00:20:24.920 --> 00:20:29.920
And then you can specify the reclaim policy and all that we will discuss after.

00:20:31.360 --> 00:20:33.320
All of this information create a PV.

00:20:34.120 --> 00:20:36.280
And then from your application perspective.

00:20:37.000 --> 00:20:39.240
For your application need you are going to submit a PVC.

00:20:39.840 --> 00:20:42.320
And this PVC is referred in your pod specification.

00:20:44.420 --> 00:20:47.680
Okay. If you are submitting a PVC and it is in unbound state.

00:20:48.020 --> 00:20:50.480
If you try to spin up the pod it will fail.

00:20:51.480 --> 00:20:53.340
Because it will try to mount the volume.

00:20:53.340 --> 00:20:55.480
There is no volume mapped to the PVC.

00:20:56.180 --> 00:20:57.940
So pod will not start up.

00:20:58.900 --> 00:21:00.720
So you need to first make sure it is bound.

00:21:00.900 --> 00:21:02.600
Then only you need to run your application.

00:21:07.460 --> 00:21:07.940
Okay.

00:21:09.960 --> 00:21:11.560
Alright. So now.

00:21:11.940 --> 00:21:13.620
What I am going to do is.

00:21:14.000 --> 00:21:15.600
I am going to create one more claim.

00:21:16.620 --> 00:21:17.580
Copy this.

00:21:17.900 --> 00:21:19.260
Altogether a different claim.

00:21:24.100 --> 00:21:28.060
I am going to rename this.

00:21:29.620 --> 00:21:30.440
My PVC.

00:21:36.640 --> 00:21:39.500
Let's say called it as MyPVCtest.yml.

00:21:40.640 --> 00:21:44.820
And here I am going to call this as an.

00:21:45.960 --> 00:21:46.460
ImprovPVC.

00:21:48.700 --> 00:21:50.260
We don't need this label selector.

00:21:50.500 --> 00:21:51.820
Let's remove this. Keep it simple.

00:21:53.320 --> 00:21:54.360
Access mode.

00:21:54.920 --> 00:21:56.280
1GB, ReadWriteMany.

00:21:56.300 --> 00:21:57.140
That's it.

00:21:57.160 --> 00:22:00.240
And I am going to remove this storage class name also.

00:22:01.520 --> 00:22:02.620
I want 1GB.

00:22:02.980 --> 00:22:05.920
This is the only requirement that I am asking to the Kubernetes.

00:22:07.580 --> 00:22:08.260
Through PVC.

00:22:09.080 --> 00:22:09.400
Right.

00:22:09.800 --> 00:22:11.820
Let's see what happens if I apply this.

00:22:16.260 --> 00:22:17.220
Kubectl.

00:22:17.820 --> 00:22:17.880
Apply.

00:22:18.200 --> 00:22:20.500
My PVC.

00:22:24.360 --> 00:22:25.320
Kubectl.

00:22:25.680 --> 00:22:26.280
Get PVC.

00:22:30.360 --> 00:22:30.700
Type.

00:22:31.240 --> 00:22:31.560
PC.

00:22:33.160 --> 00:22:37.720
So as you can see here.

00:22:39.100 --> 00:22:41.340
This is also showing now bound.

00:22:41.340 --> 00:22:44.160
And it is also bound to a volume.

00:22:44.300 --> 00:22:46.700
The volume name looks very.

00:22:47.700 --> 00:22:48.300
Random.

00:22:49.840 --> 00:22:52.080
And the storage class is standard.

00:22:54.720 --> 00:22:57.060
If I do get PV.

00:23:03.880 --> 00:23:04.480
Kubectl.

00:23:04.620 --> 00:23:05.380
Get PV.

00:23:05.520 --> 00:23:06.880
What happened to my machine.

00:23:09.080 --> 00:23:09.620
Okay.

00:23:09.820 --> 00:23:10.200
Here we go.

00:23:10.200 --> 00:23:11.760
This is the one that we created.

00:23:12.180 --> 00:23:15.620
This is the one that is dynamically provisioned by the Kubernetes.

00:23:17.560 --> 00:23:19.200
Using the storage class name standard.

00:23:21.400 --> 00:23:21.920
Okay.

00:23:22.160 --> 00:23:23.360
So which means.

00:23:23.920 --> 00:23:25.340
If I query for Kubectl.

00:23:25.640 --> 00:23:26.600
Get SC.

00:23:30.920 --> 00:23:32.200
As part of the minikube installation.

00:23:33.000 --> 00:23:35.380
There is one storage class that is there.

00:23:35.660 --> 00:23:37.060
Name of the storage class standard.

00:23:37.140 --> 00:23:38.520
And that is set to default.

00:23:38.520 --> 00:23:42.960
And it's using minikube host path provisioner.

00:23:43.080 --> 00:23:44.620
So I submitted a PVC.

00:23:45.180 --> 00:23:47.700
Without specifying any storage class name.

00:23:47.920 --> 00:23:49.560
So the default one executed.

00:23:49.720 --> 00:23:51.560
So it provisioned in a host path.

00:23:52.040 --> 00:23:54.460
And then it bound that.

00:23:54.460 --> 00:23:56.660
Dynamically provisioned volume to your PVC.

00:23:58.540 --> 00:24:01.820
Which means what all you need to do is.

00:24:03.300 --> 00:24:04.680
If you want to.

00:24:09.400 --> 00:24:13.600
Let's say in your team you are going to use storage from different providers.

00:24:14.400 --> 00:24:14.920
Let's say.

00:24:14.920 --> 00:24:15.940
You have.

00:24:15.980 --> 00:24:17.520
You are going to use Azure disk.

00:24:20.140 --> 00:24:21.500
GCE persistent disk.

00:24:21.540 --> 00:24:23.620
And then you are going to use the host path.

00:24:23.900 --> 00:24:24.440
Let's say.

00:24:25.740 --> 00:24:29.720
So for each of this you are going to have some kind of provisioner.

00:24:29.720 --> 00:24:30.560
Provisioner component.

00:24:30.980 --> 00:24:33.520
Basically, installing it as a plugin is going to give you the provisioner.

00:24:35.280 --> 00:24:35.800
Provisioner.

00:24:37.340 --> 00:24:37.860
Okay.

00:24:38.260 --> 00:24:41.260
The provisioner is the one that's going to actually provision the volume.

00:24:41.800 --> 00:24:44.220
I mean creating the volume there and then.

00:24:44.240 --> 00:24:45.940
This is the one that's going to do every step.

00:24:46.680 --> 00:24:50.520
So by referring this provisioner you need to create a storage class.

00:24:51.060 --> 00:24:51.780
Storage class.

00:24:52.020 --> 00:24:52.640
Storage class.

00:24:52.760 --> 00:24:54.980
Let's say the name of the storage class is Azure.

00:24:55.800 --> 00:24:58.020
This name of the storage class is GCP.

00:24:58.720 --> 00:25:00.800
Name of this storage class is.

00:25:01.380 --> 00:25:01.900
Minikube.

00:25:02.140 --> 00:25:02.660
Let's say.

00:25:05.200 --> 00:25:13.920
So while users are submitting a PVC, in their YAML file they can specify which storage class they want to use.

00:25:13.920 --> 00:25:18.660
Let's say for this application he wants it on the Google platform, so he can simply give GCP.

00:25:20.620 --> 00:25:22.540
Because storage class is GCP.

00:25:23.520 --> 00:25:26.740
The moment when you submit this storage class will be used.

00:25:27.120 --> 00:25:28.360
This provisioner will be used.

00:25:29.040 --> 00:25:31.140
And dynamically a volume will be created here.

00:25:31.140 --> 00:25:33.420
And then it will be bound to this one.

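A storage class like the ones being drawn here could be sketched as follows; the class name is arbitrary and the provisioner string shown is the minikube hostPath one seen earlier, swap in whichever provisioner plugin is installed. A PVC then selects the class by name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: minikube
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # optional: make it the default
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-claim
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: minikube   # picks which class, and so which provisioner, to use
  resources:
    requests:
      storage: 1Gi
```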
00:25:34.240 --> 00:25:40.220
If Colin submits with storage class name as an Azure then this provisioner will be used to dynamically provision.

00:25:41.980 --> 00:25:47.280
If Darlington submits without storage class name then out of these three you can set one as a default.

00:25:48.720 --> 00:25:51.240
So that if anybody submits without a storage class name.

00:25:52.040 --> 00:25:54.200
They will get a volume from the Azure one.

00:25:54.400 --> 00:25:55.140
I should just.

00:25:55.660 --> 00:25:55.900
Okay.

00:25:57.000 --> 00:25:57.480
So.

00:25:57.480 --> 00:26:02.480
It's about whether you're going to statically provision or dynamically provision it.

00:26:03.900 --> 00:26:08.240
So we have already seen one sample of statically provisioning it.

00:26:08.480 --> 00:26:16.480
And we just saw one sample of how, for a submitted PVC, the standard provisioner dynamically provisions a volume.

00:26:18.240 --> 00:26:25.980
By all means the PVC must map to a PV, or else your application pod that refers to the PVC won't start up.

00:26:29.700 --> 00:26:30.320
Okay.

00:26:31.340 --> 00:26:34.180
So with this I'm going to keep a pause here.

00:26:34.540 --> 00:26:35.520
Or any questions.

00:26:36.500 --> 00:26:41.040
If there are no questions, it's time for you to try this hands-on.

00:26:42.140 --> 00:26:43.780
A question has come in meanwhile.

00:26:44.080 --> 00:26:44.640
Any questions.

00:26:45.000 --> 00:26:45.920
Is this clear.

00:26:52.180 --> 00:26:52.580
Yes.

00:26:57.060 --> 00:26:57.580
Good.

00:27:20.100 --> 00:27:23.400
Basically resizing the volume.

00:27:23.780 --> 00:27:24.660
That was the question right.

00:27:24.660 --> 00:27:26.920
Yes you can resize the volume not PVC.

00:27:27.160 --> 00:27:30.600
PVC is just about the time when you bound.

00:27:30.820 --> 00:27:33.020
That's the time it will take effect after that.

00:27:33.580 --> 00:27:36.040
You can expand the volume.

00:27:36.820 --> 00:27:37.660
It will work.

00:27:38.060 --> 00:27:38.820
That won't be a problem.

00:27:39.080 --> 00:27:40.620
It's up to the storage provider.

00:27:41.380 --> 00:27:46.380
But mostly we will end up updating the PV and PVC and then resubmitting them.

00:27:46.880 --> 00:27:48.400
So that the extra size is provided.

00:27:49.240 --> 00:27:51.120
Let's say you created a PVC.

00:27:51.120 --> 00:27:57.600
Let's say you are using a dynamic provisioning and you said one GB and already I showed this one GB created.

00:27:58.400 --> 00:28:03.160
Now if you want more space then you need to do that update here.

00:28:04.220 --> 00:28:08.900
And it will update the already allocated PV.

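As a sketch of that flow: expansion is allowed on the storage class, and then you simply edit the size in the PVC. The class name and provisioner here are placeholders, and the real provisioner must support resizing:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable
provisioner: example.com/some-provisioner   # hypothetical; must support resize
allowVolumeExpansion: true                  # required before a PVC can be grown
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: expandable
  resources:
    requests:
      storage: 2Gi   # was 1Gi; raising it triggers the expansion
```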
00:28:10.200 --> 00:28:16.440
So your back end storage provider must support that kind of expansion.

00:28:16.440 --> 00:28:18.240
It's not with Kubernetes.

00:28:18.360 --> 00:28:19.620
It's with the storage provider.

00:28:19.700 --> 00:28:21.740
Some storage providers support it.

00:28:21.800 --> 00:28:23.580
Some don't support it.

00:28:24.280 --> 00:28:24.540
Right.

00:28:24.580 --> 00:28:27.540
But most of the cloud provider solutions support it.

00:28:34.580 --> 00:28:34.960
OK.

00:28:36.340 --> 00:28:36.940
Right.

00:28:50.900 --> 00:28:51.420
OK.

00:28:52.860 --> 00:28:53.540
It's OK.

00:28:53.700 --> 00:28:56.440
It's the reclaim policy and we are going to discuss that.

00:28:56.500 --> 00:28:58.760
But since you asked about it.

00:28:58.860 --> 00:29:02.440
What should happen if you delete a PVC?

00:29:02.640 --> 00:29:02.880
OK.

00:29:02.920 --> 00:29:04.840
That's what the reclaim policy is all about.

00:29:05.120 --> 00:29:09.380
We have only three values: Retain, Recycle and Delete.

00:29:10.240 --> 00:29:16.420
For dynamically provisioned volumes, the second one that you created, you have seen Delete, because that's the default.

00:29:17.680 --> 00:29:22.500
Which means it will be automatically deleted if you delete the PVC.

00:29:23.320 --> 00:29:28.200
If you delete the PVC, the associated PV will also get deleted.

00:29:29.480 --> 00:29:32.160
If you set the reclaim policy as Delete.

00:29:32.240 --> 00:29:32.660
OK.

00:29:33.020 --> 00:29:37.460
So maybe you don't want that; you want the PV to be kept, because the PV is holding some data.

00:29:37.500 --> 00:29:39.880
Why should deleting the PVC delete the PV?

00:29:40.140 --> 00:29:43.400
If you want to retain it, then you can specify the Retain policy.

00:29:43.400 --> 00:29:50.080
Which means if you delete the PVC, the associated PV will not be deleted.

00:29:52.020 --> 00:29:57.060
It will be moved to the Released state so that you can do some manual recovery on the data.

00:29:57.060 --> 00:29:57.560
All right.

00:29:57.620 --> 00:30:03.880
Similar to that, Recycle... I think Recycle will simply scrub the data and then...

00:30:04.780 --> 00:30:06.080
What does Recycle do?

00:30:06.180 --> 00:30:07.400
I forget.

00:30:07.400 --> 00:30:08.480
Let me check.

00:30:15.280 --> 00:30:18.360
It is the common thing that we use.

00:30:18.880 --> 00:30:24.980
Recycle means the volume goes back into the pool once it is released from its claim.

00:30:25.540 --> 00:30:25.720
OK.

00:30:25.780 --> 00:30:28.800
It will go back to being unbound.

00:30:28.840 --> 00:30:33.040
It will be set to unbound so that other PVCs can simply use it.

00:30:33.380 --> 00:30:34.480
I think the data will be deleted.

00:30:36.400 --> 00:30:40.620
The data will be deleted and then the status will be changed to unbound.

00:30:41.240 --> 00:30:46.620
Which means if any new PVCs come in and a match is found, the volume will get bound to it.

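To summarize in YAML, the reclaim policy is a field on the PV itself. A minimal hand-written sketch with placeholder names; hostPath is just for a single-node demo:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                           # placeholder name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Retain | Delete | Recycle
  hostPath:
    path: /data/pv-demo                   # demo-only backend
```

With Retain, deleting the bound PVC moves this PV to the Released state instead of removing it. Note that the Recycle policy is deprecated in current Kubernetes versions; dynamic provisioning with Delete or Retain is the recommended approach.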
00:30:48.940 --> 00:30:49.220
OK.

00:30:49.660 --> 00:30:50.620
That's it.

00:30:51.340 --> 00:30:52.480
Any other questions.

00:30:56.260 --> 00:30:56.420
OK.

00:30:56.520 --> 00:30:56.960
No questions.

00:30:57.160 --> 00:30:58.180
Please give it a try.

00:30:58.300 --> 00:31:03.780
Let me share the two YAMLs that I used here so that you can have them as a reference.

00:31:05.360 --> 00:31:08.320
You know what to do with those two properties.

00:31:08.320 --> 00:31:09.780
But let me copy paste it.

00:31:43.420 --> 00:31:44.820
OK.

00:31:45.240 --> 00:31:46.640
OK.

00:31:49.440 --> 00:31:49.780
OK.

00:31:55.440 --> 00:31:56.840
OK.

00:31:56.900 --> 00:31:57.440
OK.

00:31:57.700 --> 00:31:59.100
OK.

00:32:05.280 --> 00:32:06.680
OK.

00:32:09.300 --> 00:32:10.700
OK.

00:32:14.540 --> 00:32:14.600
OK.

00:32:19.340 --> 00:32:20.740
OK.

00:36:38.300 --> 00:36:41.140
Last two minutes to complete this activity.

00:37:15.320 --> 00:37:17.940
Alright, time up for this activity.

00:37:22.740 --> 00:37:24.900
Let's go to the next section.

00:37:26.060 --> 00:37:27.860
So what did we just complete?

00:37:28.620 --> 00:37:32.540
PV, PVC and then storage class.

00:37:35.180 --> 00:37:40.360
Okay, so the next one we are going to discuss is a pretty straightforward resource.

00:37:41.080 --> 00:37:48.740
A very, very simple resource, and it has a valid use case where we would go for one such resource.

00:37:49.820 --> 00:37:58.260
In your cluster, let's say you have three nodes and Ivan is the administrator.

00:37:59.460 --> 00:38:05.440
He wants to see how each of these nodes are performing.

00:38:06.180 --> 00:38:18.080
So for that, he wants to run some kind of software here in this machine so that it can gather some metrics about how these nodes are performing.

00:38:18.920 --> 00:38:26.080
So he wants to run one here, install one software here, one software in this machine so that it can keep observing this node.

00:38:27.800 --> 00:38:37.320
And he can see all of these metrics gathered from these machines in another software and he can see some kind of visualizations.

00:38:38.420 --> 00:38:43.200
So it's more of an agent-like tool, isn't it?

00:38:43.220 --> 00:38:48.640
An agent or daemon-like tool. Or he wants to run a log collector, one log collector in every node.

00:38:48.640 --> 00:38:52.700
So in the Kubernetes world, everything will run as a pod.

00:38:52.700 --> 00:39:04.440
So if Ivan wants to deploy this application, he may go with a deployment, setting the replicas to three; the replica set will create three pods.

00:39:04.440 --> 00:39:10.640
But will this place one copy of the pod in every node?

00:39:11.480 --> 00:39:14.860
That is not guaranteed. That is not the guarantee that the replica set gives.

00:39:14.860 --> 00:39:23.160
We never said anything like that: giving the number of replicas as the number of nodes will not always place just one in every node.

00:39:23.960 --> 00:39:30.280
It will try to distribute, but it is not guaranteed that it will run one copy of the pod per node.

00:39:30.380 --> 00:39:37.720
But here, in the case of Ivan's metrics application or logging collector application, we need that guarantee.

00:39:37.720 --> 00:39:50.420
We want one copy of the pod in every node, because metrics collectors and log collectors are applications that fall under that nature.

00:39:52.020 --> 00:40:04.140
So for these kinds of agent-like or daemon-like application processes that you want to deploy in Kubernetes, the Kubernetes community came up with a new resource called DaemonSet,

00:40:04.140 --> 00:40:08.920
which means if you create a DaemonSet, the DaemonSet will also create pods.

00:40:09.480 --> 00:40:16.320
But the reconciliation loop logic of the DaemonSet is: create one pod in every node.

00:40:18.100 --> 00:40:30.160
That is its logic. Let's say you delete this pod; then the next moment the DaemonSet will realize that this node doesn't have one copy of the pod, and it will immediately recreate it.

00:40:30.160 --> 00:40:33.500
One per node. That's its logic.

00:40:33.500 --> 00:40:34.460
The DaemonSet's logic.

00:40:37.740 --> 00:40:39.100
That's it.

00:40:40.300 --> 00:40:47.140
A DaemonSet ensures that all nodes run a copy of your pod; in brackets,

00:40:47.140 --> 00:40:55.680
they also specified "or some". So what it means is you have three nodes, but the application that you're going to run,

00:40:55.680 --> 00:41:02.000
it may not run in node two because node two is using some kind of legacy hardware.

00:41:02.680 --> 00:41:04.780
This software may not work there.

00:41:04.980 --> 00:41:12.400
So you want this daemon set to focus only on N1 and N3 and exclude N2.

00:41:12.580 --> 00:41:17.400
By default, it will consider all the nodes in scope.

00:41:18.420 --> 00:41:21.060
But this time you want to exclude N2 from the scope.

00:41:21.060 --> 00:41:26.780
If that is the case, then you can make use of the labels concept here.

00:41:27.560 --> 00:41:31.620
You can label the node one as type equals modern.

00:41:32.840 --> 00:41:34.880
Node two as type equals legacy.

00:41:36.360 --> 00:41:41.960
And give node three a label, type equals modern.

00:41:41.960 --> 00:41:44.940
And then in the DaemonSet specification,

00:41:45.380 --> 00:41:50.480
you can specify the node selector and say type equals modern,

00:41:51.540 --> 00:42:01.540
which means now this DaemonSet will make sure that only the nodes that have this label get one copy of this pod.

00:42:02.480 --> 00:42:04.520
So by that way, N2 will be excluded.

00:42:05.400 --> 00:42:12.160
OK, that's the reason it says all or some nodes run a copy of a pod.

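The labels-plus-node-selector idea above would look roughly like this; the label key and value, the names, and the image are placeholders, not the exact YAML from the session:

```yaml
# First label the nodes (one-time commands):
#   kubectl label node n1 type=modern
#   kubectl label node n2 type=legacy
#   kubectl label node n3 type=modern
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-agent
spec:
  selector:
    matchLabels:
      app: metrics-agent
  template:
    metadata:
      labels:
        app: metrics-agent
    spec:
      nodeSelector:
        type: modern                         # only nodes labeled type=modern run a pod
      containers:
        - name: agent
          image: example/metrics-agent:1.0   # placeholder image
```

Without the nodeSelector, the DaemonSet schedules one pod on every schedulable node; with it, N2 stays out of scope.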
00:42:12.980 --> 00:42:19.060
And the use cases are if you want to run a storage daemon, log collection daemon or node monitoring daemon on every node,

00:42:19.640 --> 00:42:23.280
then you can run those components as a daemon set.

00:42:23.860 --> 00:42:29.500
Kind: DaemonSet. And then here they're running a fluentd log collector as a DaemonSet.

00:42:30.160 --> 00:42:31.300
Typical of what we discussed earlier.

00:42:31.300 --> 00:42:35.920
Once the kind is DaemonSet, this is going to create one pod in every node.

00:42:36.120 --> 00:42:41.180
If you deploy this YAML file... here all we have is only one node.

00:42:41.460 --> 00:42:44.180
So in that one minikube node, one pod of the DaemonSet will be running.

00:42:45.200 --> 00:42:48.800
Let's see if there are any DaemonSets running in the kube-system namespace already.

00:42:49.920 --> 00:42:50.800
It's a real.

00:42:57.660 --> 00:43:00.800
In the kube-system namespace.

00:43:03.940 --> 00:43:08.060
OK, kube proxy component is running as a daemon set.

00:43:09.180 --> 00:43:14.480
As all we have is only one node, one is current and it's ready and it's up to date and it's available.

00:43:15.120 --> 00:43:22.280
The node selector here is: all the nodes that have the operating system label set to linux.

00:43:22.780 --> 00:43:25.960
The minikube node is linux, so it's selected by that.

00:43:28.500 --> 00:43:33.120
Maybe in your infrastructure you can verify this, because you may have a multi-node setup.

00:43:33.880 --> 00:43:35.480
You can easily check this.

00:43:37.820 --> 00:43:39.780
All right, any questions on the daemon set?

00:43:41.420 --> 00:43:44.700
Because it's straightforward, I'm skipping the hands-on part of DaemonSet.

00:43:44.800 --> 00:43:45.620
Any questions?

00:43:47.180 --> 00:43:48.700
The use case or how it works?

00:43:49.920 --> 00:43:51.140
Why we need daemon set?

00:43:51.180 --> 00:43:52.920
Why not a replica set or a deployment?

00:44:05.220 --> 00:44:09.280
Right, I take the silence as all good.

00:44:10.500 --> 00:44:13.360
Let's switch to the right place.

00:44:26.960 --> 00:44:34.820
OK, in the DaemonSet, it will always be one pod in every node.

00:44:35.100 --> 00:44:35.520
That's it.

00:44:35.780 --> 00:44:39.580
The scale command is not applicable for a DaemonSet.

00:44:41.400 --> 00:44:49.360
OK, so if you want two pods in every node, that is not possible with a DaemonSet.

00:44:49.480 --> 00:44:52.080
Just one in every node.

00:44:52.080 --> 00:44:56.180
So scale command and HPA, all those things are not applicable for daemon set.

00:44:57.700 --> 00:45:01.860
Because scale command is not applicable for daemon set, HPA is also not applicable.

00:45:02.540 --> 00:45:07.340
Wherever the scale command is applicable, in all those cases you can configure HPA.

00:45:07.680 --> 00:45:09.760
But for daemon set, scale is not applicable.

00:45:10.380 --> 00:45:11.680
It will always be one.

00:45:11.780 --> 00:45:12.540
That's it.

00:45:12.700 --> 00:45:13.880
One in every node.

00:45:14.320 --> 00:45:16.280
Not two or three per node.

00:45:21.360 --> 00:45:22.860
Sorry, is that a question you're asking?

00:45:22.860 --> 00:45:23.920
The use case for this?

00:45:24.060 --> 00:45:24.360
Yeah, yeah.

00:45:24.600 --> 00:45:27.660
So the typical use case is a log collection daemon.

00:45:27.720 --> 00:45:34.480
If you remember, OK, as you asked about the use case, let me explain this scenario.

00:45:36.380 --> 00:45:45.580
In your cluster, all the applications, let's say these are all the containers that are running and all the applications are going to throw some logs, isn't it?

00:45:46.280 --> 00:45:52.780
There are multiple ways you can aggregate the logs that are coming from all those individual containers.

00:45:54.020 --> 00:45:58.160
For example, each of your application may directly write to a logging backend.

00:45:58.700 --> 00:45:59.580
Like, let's say you have Elasticsearch.

00:46:00.620 --> 00:46:04.980
All the application may directly write from its app code to the logging backend.

00:46:05.740 --> 00:46:12.800
But this is not the approach that we will do because here we have a tight coupling between logging backend and the app code.

00:46:12.800 --> 00:46:20.120
So tomorrow if I switch from Elasticsearch to Cassandra, then that will require a code change in every application and a new deployment.

00:46:20.660 --> 00:46:22.300
So we won't go with this approach.

00:46:23.180 --> 00:46:30.360
The standard approach that we follow is if we take one single node, you have multiple containers running.

00:46:31.520 --> 00:46:34.860
And in this node, you also have Docker engine installed.

00:46:35.500 --> 00:46:36.060
Docker engine.

00:46:36.260 --> 00:46:39.360
Kubernetes works with the Docker engine to manage the containers, right?

00:46:39.840 --> 00:46:41.380
Kubernetes is not creating the container.

00:46:41.380 --> 00:46:46.640
It works with the underlying Docker engine to create the containers and manage them.

00:46:46.860 --> 00:46:54.960
So what we will advise to the application team is, hey, from your app code, write the logs to standard out.

00:46:56.080 --> 00:46:57.600
And standard errors.

00:46:59.480 --> 00:47:04.600
All the containers will write their logs to their standard out and standard error.

00:47:04.600 --> 00:47:08.960
OK, so this Docker is the Docker engine.

00:47:09.160 --> 00:47:17.300
We can actually get the logs from standard outputs from all these containers that are running in that node.

00:47:18.700 --> 00:47:24.600
So along in the same node, node one, I can run a daemon, a log collector daemon.

00:47:25.340 --> 00:47:33.020
Let's say Fluentd and this Fluentd will work with the Docker engine to gather all the logs from all of these containers.

00:47:33.020 --> 00:47:38.120
And this will forward them to the centralized logging backend.

00:47:38.580 --> 00:47:47.740
Similarly, on another node, I will have a log collector daemon running that will collect all the logs from the containers running in that node and send them to the logging backend.

00:47:48.400 --> 00:47:50.780
Similarly, another node, another log collector, the same backend.

00:47:51.300 --> 00:48:01.440
And on top of it, I may run a Kibana user interface with which I can analyze and find some information out of the logs.

00:48:01.440 --> 00:48:08.160
So this is one use case where I will run the log collection daemon that I want to run in every node.

00:48:09.360 --> 00:48:11.480
So I can deploy it as a daemon set.

00:48:11.940 --> 00:48:16.120
If not, then you may need to install in a traditional way.

00:48:16.220 --> 00:48:21.540
But as we are in the Kubernetes world, even the Kubernetes components are also running as pods.

00:48:21.840 --> 00:48:25.060
So here the daemon also we are running as a pod.

00:48:25.580 --> 00:48:28.680
It runs just like all other applications run, as a pod.

00:48:29.560 --> 00:48:31.920
Similarly, the same goes for monitoring daemon.

00:48:32.020 --> 00:48:37.440
If you want to monitor every node that are participating in the cluster to see how it is performing.

00:48:38.160 --> 00:48:44.680
So you can run a monitoring daemon that will capture metrics from the nodes and it all will send it to a centralized location.

00:48:45.200 --> 00:48:53.680
And then I can have a storage daemon, if there is some host machine provisioning that you want to run in every node.

00:48:54.680 --> 00:48:57.620
Right. Does this information help?

00:49:15.100 --> 00:49:16.020
Okay.

00:49:20.080 --> 00:49:26.880
No, no, no, no. No, it's...

00:49:26.880 --> 00:49:27.520
Okay.

00:49:36.240 --> 00:49:40.760
Okay. So first of all, note that a DaemonSet handles updates a little differently.

00:49:40.780 --> 00:49:44.080
The rollout workflow we practiced applies to two resources.

00:49:44.740 --> 00:49:46.000
One is deployment.

00:49:47.420 --> 00:49:52.620
Another one is stateful set, which we are yet to discuss. Stateful set.

00:49:52.620 --> 00:49:57.080
A DaemonSet does support a RollingUpdate update strategy of its own, applied one pod per node.

00:50:05.880 --> 00:50:07.040
No, no.

00:50:15.540 --> 00:50:17.420
That is with the daemon set.

00:50:17.420 --> 00:50:22.420
Later you can try that sample, do kubectl get daemonset, and then you can explore it.

00:50:24.500 --> 00:50:29.100
Now let's move to the next object that is stateful set.

00:50:31.020 --> 00:50:32.480
Any idea about stateful set?

00:50:33.400 --> 00:50:35.820
Are you already working with the stateful set?

00:50:40.220 --> 00:50:42.420
Yes, no. I heard about it.

00:50:49.680 --> 00:50:50.760
Okay, fine.

00:50:51.000 --> 00:50:55.680
So let me quickly explain the case for stateful set.

00:50:55.860 --> 00:51:00.000
Why? I already have deployment, replica set, daemon set.

00:51:00.480 --> 00:51:03.140
There are many resources. Why do I need another resource?

00:51:03.760 --> 00:51:07.280
What is this? What is the requirement that this resource handles?

00:51:08.020 --> 00:51:08.320
Right.

00:51:11.140 --> 00:51:15.700
You have your Kubernetes cluster that is running all your application workloads.

00:51:15.700 --> 00:51:19.060
Let's say node 1, node 2, node 3 is there.

00:51:19.380 --> 00:51:23.260
All your backend APIs, frontend, it's all running here.

00:51:24.620 --> 00:51:28.060
But think about the data layer, your database.

00:51:29.540 --> 00:51:36.440
In general, for the databases itself, we will have a cluster setup.

00:51:37.100 --> 00:51:43.940
If it is a Mongo cluster, Mongo database, then we will have a cluster M0, M1, M2.

00:51:44.820 --> 00:51:48.840
And it will have some kind of cluster setup between it.

00:51:49.100 --> 00:51:53.020
And it will have some kind of synchronization logic that's happening.

00:51:53.520 --> 00:51:56.720
Some physical replication, logical replication, right?

00:51:57.000 --> 00:52:01.260
And one will be a primary server and two secondary servers will be there.

00:52:01.620 --> 00:52:04.280
If primary goes down, one will be elected as primary.

00:52:05.340 --> 00:52:07.200
One secondary will be elected as primary.

00:52:07.200 --> 00:52:13.380
So for database-like solutions, you will already have some kind of cluster setup.

00:52:14.900 --> 00:52:18.980
And if you look at this cluster setup or the way these clusters are formed,

00:52:19.940 --> 00:52:22.280
first you will start with the first node.

00:52:22.560 --> 00:52:26.280
You will set up something here and then slowly you will join the M1.

00:52:26.840 --> 00:52:29.020
You will join by its name, right?

00:52:29.020 --> 00:52:32.120
You will go to M1, execute some command, provide the M0 name

00:52:32.120 --> 00:52:34.380
so that this will become part of the cluster.

00:52:34.380 --> 00:52:36.880
And then M2; that will become part of the cluster.

00:52:37.240 --> 00:52:39.580
So that's how you will form the database cluster.

00:52:40.500 --> 00:52:44.300
And once you form the cluster, all the replication logic,

00:52:44.900 --> 00:52:49.720
all those things that happen between them rely on the identity of these servers.

00:52:50.480 --> 00:52:52.620
M1 relies on M0.

00:52:53.400 --> 00:52:58.400
I mean, based on the identity only, they are going to send the data to M0 or M2,

00:52:58.460 --> 00:53:01.200
which means this identity is fixed throughout.

00:53:01.200 --> 00:53:03.480
If you change the identity of the server,

00:53:03.860 --> 00:53:08.120
that will break the cluster setup and you need to fix it, right?

00:53:08.640 --> 00:53:13.280
So this being the case, and then remember, it's all handling data.

00:53:13.440 --> 00:53:20.400
So back in those days, organizations were completely hesitant to run databases

00:53:20.400 --> 00:53:22.400
inside the Kubernetes cluster.

00:53:22.480 --> 00:53:25.260
It was OK to run the backend APIs, frontends,

00:53:26.100 --> 00:53:30.280
because it was initially meant for, I mean, the entire microservices itself,

00:53:30.280 --> 00:53:32.660
it was meant for stateless applications, right?

00:53:33.580 --> 00:53:36.320
But these are all going to maintain some data.

00:53:36.520 --> 00:53:41.940
And if you run it, of course, you can back up your database with volumes.

00:53:42.520 --> 00:53:46.280
But still, if something goes wrong, it will end up with a data loss, right?

00:53:46.280 --> 00:53:48.420
That's going to cost your organization a lot.

00:53:49.960 --> 00:53:54.980
So organizations were running their database cluster outside of the Kubernetes cluster.

00:53:54.980 --> 00:53:59.040
And all the frontend layers, middle layers, service layers,

00:53:59.200 --> 00:54:01.000
it's all running in the Kubernetes cluster.

00:54:02.120 --> 00:54:05.640
OK, that's how it was back in 2016, '17.

00:54:06.080 --> 00:54:10.400
And then what happened is the Kubernetes community started to analyze this space.

00:54:11.160 --> 00:54:15.520
OK, what needs to be done in the Kubernetes

00:54:15.520 --> 00:54:20.840
so that database like solutions can also run in the Kubernetes cluster?

00:54:22.740 --> 00:54:24.100
What are the requirements?

00:54:24.360 --> 00:54:25.220
List down the requirement.

00:54:25.360 --> 00:54:28.400
And let's see whether we can implement one such resource

00:54:30.040 --> 00:54:31.400
with those capabilities.

00:54:31.960 --> 00:54:34.020
And that's how StatefulSet was born.

00:54:35.640 --> 00:54:38.720
So it was not a complex requirement.

00:54:39.300 --> 00:54:41.600
It turned out to be a very simple set of requirements.

00:54:41.840 --> 00:54:48.220
Number one, if you create a deployment, if you set the instances to five,

00:54:48.220 --> 00:54:54.620
then all the pods get some random name, isn't it?

00:54:54.620 --> 00:54:58.220
A random name, or a name with a random hash ID.

00:54:59.240 --> 00:54:59.720
Correct.

00:55:00.580 --> 00:55:05.460
And if I delete one, then another new pod gets created with a completely new random name.

00:55:06.440 --> 00:55:12.680
So in case of stateful set for the databases, you must have a fixed identity.

00:55:12.680 --> 00:55:22.480
And that fixed identity will also come with an ordinal index, an ordered ordinal index,

00:55:23.640 --> 00:55:28.600
which means if you create a stateful set, it's also going to create pods.

00:55:29.060 --> 00:55:35.040
If you set the replicas to five and if the name of your stateful set is Mongo,

00:55:36.140 --> 00:55:39.780
then the instance that will get created is Mongo zero.

00:55:40.420 --> 00:55:47.840
And the second instance will be Mongo one and then Mongo two, Mongo three and Mongo four.

00:55:48.560 --> 00:55:51.300
So if you look at the name, it will be in an ordered index.

00:55:51.360 --> 00:55:56.240
And if I give five, all five will start up at the same time in the deployment world.

00:55:56.440 --> 00:56:00.080
But in stateful set, first Mongo zero will start.

00:56:00.220 --> 00:56:04.620
And once it is ready, then it will go for spinning up Mongo one.

00:56:04.820 --> 00:56:08.740
Once it is ready, then it will go for Mongo two, Mongo three and Mongo four.

00:56:10.840 --> 00:56:11.700
Ordered startup.

00:56:12.360 --> 00:56:17.100
If I scale down from five to three, only Mongo four and Mongo three will get removed.

00:56:17.200 --> 00:56:18.260
First four will get deleted.

00:56:19.040 --> 00:56:20.200
Three will get deleted.

00:56:20.640 --> 00:56:24.980
Deletion will also be in order, from higher to lower ordinal indexes.

00:56:25.680 --> 00:56:30.700
Okay. Fixed identity, which means if I delete Mongo one,

00:56:31.140 --> 00:56:35.000
then a replacement will be created and that will also get the same name, Mongo one.

00:56:35.080 --> 00:56:36.280
It is a fixed identity.

00:56:37.320 --> 00:56:38.800
This makes sense, right?

00:56:38.880 --> 00:56:44.860
For example, if you set three, first Mongo zero needs to initialize.

00:56:45.100 --> 00:56:47.860
It needs to set all of the required stuff in it.

00:56:48.280 --> 00:56:51.280
And then your primary server is running.

00:56:51.820 --> 00:56:54.460
Then you are going to spin up the secondary M one.

00:56:55.040 --> 00:56:58.300
And it's going to execute some command and become part of the cluster.

00:56:58.440 --> 00:57:01.640
And then M2 sets up, and then it's part of the cluster.

00:57:02.280 --> 00:57:04.560
So that makes sense more from this perspective.

00:57:04.560 --> 00:57:08.260
So that's the reason we need to have identities in that fashion.

00:57:08.720 --> 00:57:12.640
And when it comes to scaling down also, the one with the higher index

00:57:12.640 --> 00:57:15.720
will get deleted first, and then the next, and so on.

00:57:16.760 --> 00:57:19.300
For the stateful set, rolling updates are possible.

00:57:20.100 --> 00:57:24.120
And if I do version upgrade, it will also happen one by one.

00:57:24.300 --> 00:57:28.920
First zero will be updated and then one and then two and then three and then four.

00:57:29.460 --> 00:57:30.480
Everything will be ordered.

00:57:31.140 --> 00:57:36.620
OK, so this is one requirement: the naming and then the fixed identity.

00:57:37.000 --> 00:57:40.880
Second one is you created a stateful set.

00:57:41.400 --> 00:57:45.960
Mongo and you have Mongo zero, Mongo one and Mongo two.

00:57:46.120 --> 00:57:49.400
And this is the primary and these two are secondary servers, let's say,

00:57:49.400 --> 00:57:50.060
for the cluster.

00:57:51.600 --> 00:57:57.120
And you have a front end application that's trying to access the database,

00:57:57.120 --> 00:57:59.720
or back end that's trying to access the database.

00:57:59.720 --> 00:58:00.940
We have three instances.

00:58:01.180 --> 00:58:04.540
So generally, in these cases, we create the service.

00:58:05.980 --> 00:58:07.960
The same you are going to do it here.

00:58:08.640 --> 00:58:14.660
To access the pods of the stateful set, you are going to create a service.

00:58:16.000 --> 00:58:18.940
And once someone calls the service, in general,

00:58:19.120 --> 00:58:22.680
we know that the service is going to route the requests to one of them.

00:58:23.040 --> 00:58:25.300
But that is not a valid case here.

00:58:25.680 --> 00:58:29.340
Load balancing is not required here because this is the primary server.

00:58:29.340 --> 00:58:31.960
The requests should go directly to the primary server.

00:58:32.360 --> 00:58:35.940
We don't want to load balance to the secondary servers because they're all just doing

00:58:35.940 --> 00:58:39.440
some in-sync replica logic between them already.

00:58:40.700 --> 00:58:47.840
So we don't need load balancing logic by the service in the stateful set world.

00:58:48.280 --> 00:58:51.720
So what we will do is while creating the service,

00:58:52.320 --> 00:58:56.380
we will explicitly set the cluster IP as None.

00:58:56.380 --> 00:58:59.760
Because when a service is created, it will get a cluster IP, correct?

00:58:59.820 --> 00:59:02.060
That's the one that load balances to the endpoints.

00:59:03.400 --> 00:59:09.060
If you set cluster IP to none, which means no cluster IP will get assigned to it

00:59:09.060 --> 00:59:12.600
because all these instances got a fixed identity.

00:59:13.880 --> 00:59:18.700
So this front end can simply reach it directly, like Mongo zero.

00:59:19.400 --> 00:59:21.300
Let's say the name of the service is Mongo.

00:59:22.460 --> 00:59:23.540
Mongo-0.mongo.

00:59:23.700 --> 00:59:26.860
It can simply reach it directly because this identity is fixed.

00:59:27.600 --> 00:59:30.240
In those earlier cases, the identities were not fixed.

00:59:30.300 --> 00:59:31.560
So we relied on service.

00:59:31.620 --> 00:59:35.440
So you can create a service and you can set the cluster IP to none.

00:59:36.280 --> 00:59:42.400
If you create a service with cluster IP to none, it's called headless service.

00:59:45.860 --> 00:59:46.940
Headless service.

00:59:46.940 --> 00:59:51.240
We call that service as headless service, which means for stateful set,

00:59:51.980 --> 00:59:55.880
we will create headless service only for stateful set.

00:59:56.700 --> 00:59:58.240
Because that's where we don't need that.

00:59:58.580 --> 01:00:01.500
Load balancing logic is not applicable there.

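A headless service is an ordinary Service with clusterIP explicitly set to None. A minimal sketch, assuming the StatefulSet pods carry the label app: mongo (names and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None      # headless: no virtual IP, no load balancing
  selector:
    app: mongo
  ports:
    - port: 27017      # MongoDB default port, as an example
```

Each pod then gets a stable DNS entry of the form <pod-name>.mongo, for example mongo-0.mongo, so a client can address the primary directly.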
01:00:03.680 --> 01:00:05.520
This is difference number two.

01:00:06.140 --> 01:00:12.940
And then the last difference is if you create a deployment

01:00:12.940 --> 01:00:17.040
that creates a replica set, which creates pods, three replicas.

01:00:17.380 --> 01:00:20.360
And in the specification, if you have some volumes,

01:00:22.500 --> 01:00:26.340
PVCs, and if you have a volume from Azure Disk,

01:00:26.560 --> 01:00:31.560
so basically the same volume is what's shared by all of these replicas.

01:00:31.820 --> 01:00:35.540
It's not that every pod will get a different volume.

01:00:35.680 --> 01:00:41.540
It's just the one GB from the Azure Disk that all three pods are using.

01:00:41.540 --> 01:00:47.080
But if you are going to have the similar logic for the stateful set,

01:00:47.660 --> 01:00:48.500
it makes no sense.

01:00:48.560 --> 01:00:53.760
The primary server and secondary servers would all be working on one single backend.

01:00:54.080 --> 01:00:55.820
Then why do we need the replication logic?

01:00:55.900 --> 01:00:57.340
Where is the high availability stuff?

01:00:58.140 --> 01:01:00.500
None of those things makes any sense here.

01:01:00.780 --> 01:01:06.020
So for the stateful set, if you create it, every instance will get its own copy,

01:01:07.120 --> 01:01:10.160
which means every instance will get one GB of Azure Disk,

01:01:10.320 --> 01:01:13.020
one GB each, different volumes.

01:01:14.660 --> 01:01:17.780
So which means while defining the stateful set,

01:01:18.740 --> 01:01:24.720
we won't give a PVC; instead we will give a PVC template.

01:01:26.220 --> 01:01:31.200
With that template, every replica will get a new volume created;

01:01:31.200 --> 01:01:34.640
a dynamically provisioned volume is going to be assigned to each.

01:01:34.800 --> 01:01:36.240
Template, PVC template.

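Putting the three differences together, here is a StatefulSet sketch with a volumeClaimTemplates section; the image tag, storage size and names are placeholders, not the exact example from the documentation:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo             # the headless service governing this set
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:5.0       # placeholder tag
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:          # one PVC (and PV) per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

This creates mongo-0, mongo-1 and mongo-2 in order, each bound to its own dynamically provisioned volume.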
01:01:38.220 --> 01:01:45.500
Okay, so to list down ordered index for instances,

01:01:46.900 --> 01:01:52.260
ordered scale up and scale down, fixed identity, headless service,

01:01:53.860 --> 01:01:57.220
and then this one, PVC template.

01:01:57.480 --> 01:01:59.420
So these are all the differences.

01:02:00.380 --> 01:02:02.060
These are the requirements that were implemented,

01:02:03.100 --> 01:02:06.120
and these are the behaviors of a stateful set.

01:02:06.400 --> 01:02:11.520
So with this, you can deploy your database solutions

01:02:11.520 --> 01:02:13.840
also into the Kubernetes cluster.

01:02:15.040 --> 01:02:16.360
Okay, it's still evolving.

01:02:16.460 --> 01:02:18.180
There are a lot of limitations with this,

01:02:18.320 --> 01:02:23.200
but we could see many production implementations for stateful set.

01:02:25.480 --> 01:02:28.580
Okay, if you want to manage stateful applications,

01:02:28.580 --> 01:02:31.960
then we will use the stateful set.

01:02:32.540 --> 01:02:36.280
It provides guarantees about the ordering and uniqueness of the pods.

01:02:36.280 --> 01:02:37.720
It's not only for databases.

01:02:38.260 --> 01:02:41.380
Databases have those requirements, so we used them as the example.

01:02:41.880 --> 01:02:43.160
For your application also,

01:02:43.920 --> 01:02:46.780
if you need some guarantee about the ordering and uniqueness,

01:02:47.000 --> 01:02:49.920
then you can deploy it as a stateful set.

01:02:50.360 --> 01:02:54.580
Okay, and then the use cases are when you will go for stateful set.

01:02:54.580 --> 01:02:59.040
If you are looking for a stable, unique network identifier,

01:03:00.200 --> 01:03:05.080
stable persistent storage, ordered graceful deployment and scaling,

01:03:05.640 --> 01:03:07.580
ordered automatic rolling updates.

01:03:07.940 --> 01:03:12.040
Okay, and in the example: kind StatefulSet,

01:03:12.360 --> 01:03:15.160
the meanings are the same, three replicas,

01:03:16.300 --> 01:03:18.880
and look at here, volume claim templates,

01:03:21.040 --> 01:03:24.880
because all three, individually, one by one,

01:03:24.880 --> 01:03:26.500
are going to get their own volumes.

01:03:27.760 --> 01:03:32.240
And look at the service: cluster IP None, so it's a headless service.

01:03:33.160 --> 01:03:38.440
Okay, so maybe you can just try this one.

01:03:39.000 --> 01:03:40.660
Looks simple here.

01:03:41.200 --> 01:03:45.600
Create a stateful set and do get pods and see the way it is created.

01:03:45.660 --> 01:03:48.260
Try to delete one, see how it is getting created.

01:03:48.260 --> 01:03:51.860
Try to scale up, see whether the ordering is maintained.

01:03:52.600 --> 01:03:55.120
Try to call one pod just by its identity.

01:03:56.480 --> 01:04:00.120
Okay, just give it a try, please, on the stateful set.
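
The manifest being described roughly matches the web/nginx sample in the Kubernetes documentation; the names below follow that sample and are illustrative:

```yaml
# Headless service: gives each pod a stable DNS identity (web-0.nginx, web-1.nginx, ...)
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None        # headless
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx     # must reference the headless service
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:  # each pod (web-0, web-1, web-2) gets its own PVC
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Apply it and watch the ordered creation with `kubectl get pods -w`: web-0 must be Running and Ready before web-1 starts.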

01:04:00.920 --> 01:04:01.340
That's it.

01:04:01.340 --> 01:04:04.100
That's all I had to cover for stateful set,

01:04:04.220 --> 01:04:05.820
which is more than enough for the examination.

01:04:07.760 --> 01:04:09.700
Any questions on the stateful set?

01:04:11.940 --> 01:04:13.720
If no, please give it a try.

01:04:16.700 --> 01:04:18.100
So while I'm explaining,

01:04:18.160 --> 01:04:21.520
I would recommend you to have the Kubernetes documentation

01:04:21.520 --> 01:04:24.780
for the respective resources open on your screens.

01:04:24.840 --> 01:04:27.960
Refer to it in parallel, because during the examination,

01:04:28.100 --> 01:04:30.820
you are going to heavily rely on only this documentation,

01:04:31.200 --> 01:04:32.000
the Kubernetes documentation.

01:04:32.840 --> 01:04:35.460
So you should know where to locate,

01:04:35.660 --> 01:04:37.620
where to find stuff, and all those things.

01:05:26.120 --> 01:05:27.460
For the stateful set,

01:05:27.580 --> 01:05:32.540
you can try the sample that is there in the Kubernetes documentation.

01:05:32.880 --> 01:05:35.340
Just copy this .yaml file and then apply it

01:05:35.340 --> 01:05:37.060
and observe the behavior.

01:05:44.960 --> 01:05:47.560
Thank you.

01:06:27.200 --> 01:06:28.720
Get sts.

01:06:28.720 --> 01:06:33.360
Sorry, I forgot to tell you that command.

01:06:33.540 --> 01:06:36.940
You can do kubectl get statefulset,

01:06:36.940 --> 01:06:41.340
or get sts, to view the StatefulSet resources.

01:07:16.120 --> 01:07:19.140
0 out of 3 ready.

01:07:23.060 --> 01:07:25.680
Let me look into the screen.

01:07:30.100 --> 01:07:32.220
Which example do you use?

01:07:32.260 --> 01:07:34.500
What is the application you are deploying from the site?

01:07:36.120 --> 01:07:38.800
Do get pods — kubectl get pods.

01:07:39.380 --> 01:07:41.100
It will just spin them up one by one.

01:07:41.820 --> 01:07:43.740
Web 0 is in a pending state.

01:07:44.700 --> 01:07:46.560
Which means it is still trying to spin them up.

01:07:46.640 --> 01:07:49.880
You can describe web-0 and see the Events section.

01:07:51.580 --> 01:07:52.460
Is that what you are trying?

01:07:54.580 --> 01:07:55.900
kubectl describe.

01:08:00.660 --> 01:08:03.400
kubectl describe pod web-0.

01:08:04.260 --> 01:08:04.960
Let's see.

01:08:07.060 --> 01:08:09.540
This is about volumes.

01:08:09.780 --> 01:08:12.360
Can you look into your YAML file of your stateful set?

01:08:12.680 --> 01:08:14.680
It is not able to bind the volumes.

01:08:15.360 --> 01:08:19.060
Look at the volume claim template section.

01:08:21.400 --> 01:08:24.740
You may want to scroll down to the volume claim templates.

01:08:26.140 --> 01:08:29.360
Here you give the storage class name as my-storage-class.

01:08:30.100 --> 01:08:32.380
But I don't think you have that storage class.

01:08:32.380 --> 01:08:35.200
So better delete the entire storage class attribute.

01:08:35.540 --> 01:08:39.680
So that the default standard storage provision will be used.
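
A minimal sketch of the fix being described — removing the storageClassName from the volume claim template so the cluster's default provisioner is used (the claim name and size here are illustrative):

```yaml
volumeClaimTemplates:
- metadata:
    name: www
  spec:
    # storageClassName: my-storage-class   <- removed; the default class will be used
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
```

With no storageClassName set, the claim falls back to whichever StorageClass is annotated as the cluster default (in minikube, typically "standard").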

01:08:40.380 --> 01:08:41.780
What do you think?

01:08:42.960 --> 01:08:45.960
And yeah, 1GB.

01:08:46.560 --> 01:08:47.920
You are going to need 3GB in total.

01:08:48.040 --> 01:08:49.760
Whether you have 3GB in minikube —

01:08:49.780 --> 01:08:50.700
That's the next question.

01:08:50.940 --> 01:08:52.380
Let's see.

01:08:53.420 --> 01:08:55.240
Delete and recreate it.

01:08:56.100 --> 01:08:56.480
OK.

01:08:56.660 --> 01:08:59.620
Again, the first pod itself failed scheduling.

01:08:59.620 --> 01:09:03.100
0 slash 1: one pod has unbound

01:09:03.220 --> 01:09:04.660
immediate PersistentVolumeClaims.

01:09:04.920 --> 01:09:05.620
Preemption 0.

01:09:06.580 --> 01:09:07.680
Nodes are available.

01:09:08.140 --> 01:09:11.700
One preemption is not helpful for scheduling.

01:09:11.840 --> 01:09:15.260
I think it's about the minikube.

01:09:16.140 --> 01:09:19.460
The resource available that we have.

01:09:20.640 --> 01:09:21.680
Red ones.

01:09:23.280 --> 01:09:27.380
It's trying to preempt the pods to accommodate these.

01:09:33.100 --> 01:09:33.700
OK.

01:09:33.700 --> 01:09:34.220
Go back.

01:09:34.240 --> 01:09:37.100
Go back to the error message.

01:09:41.620 --> 01:09:44.560
Preemption: 0 slash 1 nodes are available.

01:09:45.900 --> 01:09:47.520
One preemption is not helpful for scheduling.

01:09:56.560 --> 01:09:58.480
One pod has unbound.

01:09:58.640 --> 01:09:59.680
Immediate PersistentVolumeClaims.

01:10:00.180 --> 01:10:02.500
Can you get PVC?

01:10:09.320 --> 01:10:09.960
OK.

01:10:10.160 --> 01:10:11.400
Delete that.

01:10:11.960 --> 01:10:13.500
You don't need to delete that.

01:10:13.740 --> 01:10:14.760
Storage class is not reflected.

01:10:14.900 --> 01:10:15.080
OK.

01:10:15.080 --> 01:10:15.840
Delete it.

01:10:15.840 --> 01:10:17.260
www-web-0.

01:10:17.500 --> 01:10:19.320
Delete that PVC.

01:10:26.060 --> 01:10:27.200
Now apply.

01:10:27.640 --> 01:10:32.620
Delete your stateful set and apply again.

01:10:39.400 --> 01:10:41.000
Now it is bound.

01:10:41.360 --> 01:10:42.040
It started.

01:10:42.440 --> 01:10:43.740
You can do get pods now.

01:10:43.740 --> 01:10:47.000
I think by the time web-1 and web-2 will already be spun up.

01:10:47.060 --> 01:10:48.340
Web 0 is running.

01:10:53.980 --> 01:10:55.580
How many replicas there?

01:10:55.640 --> 01:10:56.720
Three, right?

01:10:57.960 --> 01:10:59.220
Stateful set with three.

01:10:59.600 --> 01:11:00.860
So to get pods again.

01:11:02.480 --> 01:11:03.220
If it is running.

01:11:03.820 --> 01:11:06.800
Yeah, if it is running, it should have spun up one more.

01:11:06.820 --> 01:11:08.000
Yeah, it's there.

01:11:08.180 --> 01:11:09.120
Web 1.

01:11:10.200 --> 01:11:10.680
Good job.

01:11:12.280 --> 01:11:13.760
I hope others.

01:11:16.840 --> 01:11:19.600
No, no — it was not bound in the previous deployment.

01:11:20.440 --> 01:11:23.660
It was referring to a storage class named my-storage-class.

01:11:23.920 --> 01:11:26.540
But we don't have any storage class with that name.

01:11:26.880 --> 01:11:28.600
So PVC was unbound.

01:11:28.840 --> 01:11:30.240
It wasn't bound to a volume.

01:11:30.300 --> 01:11:32.160
So pod was not able to start up.

01:11:32.200 --> 01:11:36.320
So what we did is we modified our volume claim template.

01:11:36.320 --> 01:11:41.280
To delete the storage class name so that it can use the standard storage class.

01:11:43.780 --> 01:11:47.080
So as part of the StatefulSet creation, it created a PVC.

01:11:48.360 --> 01:11:51.600
And that PVC we deleted and then recreated, and that worked.

01:11:52.900 --> 01:11:58.040
So the volume must be mounted — bound to a PVC — so that your pod will start up.

01:11:58.340 --> 01:12:02.660
If volume is not bound to PVC, then your pod won't start.

01:12:02.860 --> 01:12:04.920
That's what happened in our case.

01:12:04.920 --> 01:12:06.680
Is that good?

01:12:07.880 --> 01:12:08.380
Okay.

01:12:09.600 --> 01:12:10.000
Okay.

01:12:10.000 --> 01:12:10.440
Okay.

01:12:10.560 --> 01:12:12.600
Now time is two thirty two.

01:12:12.640 --> 01:12:17.720
So let's take a quick 15 minutes coffee break and be back by two forty seven.

01:12:18.360 --> 01:12:18.980
Forty five.

01:12:19.220 --> 01:12:19.920
Let's say thank you.

01:12:20.980 --> 01:12:22.660
Thanks for listening so far.

01:12:47.580 --> 01:12:48.980
Okay.

01:13:04.920 --> 01:13:06.140
Thank you.

01:13:37.760 --> 01:13:39.160
Okay.


01:15:18.700 --> 01:15:19.820
I'm back.

01:15:23.540 --> 01:15:25.940
Please raise your hands in Teams.

01:15:27.240 --> 01:15:28.360
Just a quick check.

01:15:29.720 --> 01:15:31.040
Thank you.

01:15:33.980 --> 01:15:34.620
Perfect.

01:15:38.880 --> 01:15:39.520
Perfect.

01:15:44.000 --> 01:15:44.480
Okay.

01:15:49.200 --> 01:15:52.420
We are going to discuss our next object.

01:15:52.740 --> 01:15:56.360
Or maybe I will cover not just one but two objects.

01:15:57.100 --> 01:15:58.520
I have two objects to cover.

01:15:58.520 --> 01:16:04.840
Let me cover the theory part of it because in some exams they ask about it.

01:16:04.840 --> 01:16:06.580
But it is not part of the curriculum.

01:16:08.060 --> 01:16:10.160
But I heard in a couple of exams they ask about it.

01:16:10.300 --> 01:16:16.080
So having an idea about what those resources are and their use case will really be helpful for you guys.

01:16:16.140 --> 01:16:16.580
Right.

01:16:17.260 --> 01:16:22.540
So the next two resources we are going to talk about are Jobs and CronJobs.

01:16:22.760 --> 01:16:24.140
Any idea on what is this?

01:16:24.140 --> 01:16:28.000
Are you already using these resources for your existing applications?

01:16:28.320 --> 01:16:28.920
Jobs.

01:16:36.820 --> 01:16:37.380
Okay.

01:16:38.440 --> 01:16:38.740
Okay.

01:16:38.740 --> 01:16:40.400
Let's look into the use case first.

01:16:41.660 --> 01:16:43.780
And then we can observe the behavior of this.

01:16:45.300 --> 01:16:45.740
Jobs.

01:16:46.960 --> 01:16:49.220
So applications that we've seen so far.

01:16:49.220 --> 01:16:54.200
The nginx application and the other applications that we've seen.

01:16:54.360 --> 01:16:58.180
Those are some kind of websites or web servers.

01:16:59.440 --> 01:17:02.180
That's a long running process.

01:17:02.260 --> 01:17:06.260
It will be running throughout until you stop or delete the deployment.

01:17:06.720 --> 01:17:08.000
It will be running throughout.

01:17:08.440 --> 01:17:08.820
Right.

01:17:09.260 --> 01:17:18.220
But there are some applications which are short-lived, one-off tasks, which means they have some set of steps to do.

01:17:18.220 --> 01:17:24.280
And once it has executed to its entirety, then that's it.

01:17:24.280 --> 01:17:26.240
So to run.

01:17:26.880 --> 01:17:28.460
Applications of this nature.

01:17:29.620 --> 01:17:33.260
in your cluster, we are going to use a resource called Job.

01:17:33.460 --> 01:17:35.100
What is the behavioral difference?

01:17:35.120 --> 01:17:38.420
I can simply run this as a Deployment or ReplicaSet.

01:17:38.980 --> 01:17:42.100
That will also execute the same logic isn't it.

01:17:42.560 --> 01:17:46.900
But the difference here is you have some set of activities to perform.

01:17:48.080 --> 01:17:51.220
And you start your application starts from here.

01:17:51.880 --> 01:17:55.200
And all the way it executes to its entirety.

01:17:56.040 --> 01:17:56.900
It's successfully completed.

01:17:58.020 --> 01:18:01.480
So at the end it will exit with some success code.

01:18:01.840 --> 01:18:03.400
Let's say zero as a success code.

01:18:04.700 --> 01:18:07.640
If your pod exits with an exit code zero —

01:18:07.840 --> 01:18:10.000
I mean, your application exits with an exit code zero.

01:18:10.200 --> 01:18:10.320
That is.

01:18:11.100 --> 01:18:14.180
That means it has successfully done its job.

01:18:14.940 --> 01:18:18.500
Then the pod status will be marked as Completed.

01:18:18.500 --> 01:18:19.820
Have you ever seen that scenario?

01:18:20.180 --> 01:18:23.120
Having pods showing the status as Completed.

01:18:23.860 --> 01:18:28.420
But while executing something went wrong in step number nine or ten.

01:18:28.700 --> 01:18:31.520
And then the process exited.

01:18:31.980 --> 01:18:35.300
With some failure code other than zero let's say minus one.

01:18:36.140 --> 01:18:39.600
Then this is considered as

01:18:39.880 --> 01:18:42.620
a failure scenario. What the Job will do is:

01:18:43.920 --> 01:18:46.920
I created a job that created one pod.

01:18:47.080 --> 01:18:49.020
And this is the pod executing now.

01:18:49.420 --> 01:18:52.620
So when I created it will maintain something like this.

01:18:53.280 --> 01:18:54.620
Zero slash one completions.

01:18:56.060 --> 01:18:58.360
Which means it expects for one successful completion.

01:18:59.040 --> 01:19:02.540
So if this happens — the happy path — it becomes one slash one.

01:19:02.820 --> 01:19:04.420
Pod will be marked completed.

01:19:04.680 --> 01:19:06.200
Job will be marked completed.

01:19:06.860 --> 01:19:10.940
But if this happens, then that pod will be restarted again.

01:19:10.940 --> 01:19:12.340
It's going to execute all the steps.

01:19:12.900 --> 01:19:15.800
Expecting it to complete, or exit with exit code zero.

01:19:15.840 --> 01:19:16.900
If it fails again.

01:19:16.920 --> 01:19:18.740
Again restart again restart.

01:19:19.860 --> 01:19:24.380
So there are, let's say, six tries it's going to do by default, in an exponential back-off fashion.

01:19:25.520 --> 01:19:29.160
Even after the six tries if it still fails.

01:19:30.340 --> 01:19:30.860
Then.

01:19:31.680 --> 01:19:34.100
The pod will be marked failed and the job will be marked failed.

01:19:34.320 --> 01:19:35.020
Which means.

01:19:35.900 --> 01:19:38.480
It never met its completion so job is failed.

01:19:39.120 --> 01:19:42.400
If on this retry it somehow worked,

01:19:43.060 --> 01:19:46.220
Then it will be one slash one that is job is successful.

01:19:47.700 --> 01:19:48.100
OK.

01:19:49.120 --> 01:19:51.080
So that's that's the idea.

01:19:51.600 --> 01:19:54.740
Only if it is successfully exited then mark completed.

01:19:54.820 --> 01:19:58.600
If it is failed then try to restart it.

01:19:58.600 --> 01:20:00.460
Until you meet the count.

01:20:00.620 --> 01:20:00.920
OK.
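
As a sketch, a minimal Job manifest for such a one-off task could look like this (the name, image, and command are illustrative; backoffLimit controls the retry count just discussed, and it defaults to 6):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 6          # retry up to 6 times, with exponential back-off
  template:
    spec:
      restartPolicy: Never # a Job pod must use Never or OnFailure, not Always
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo doing the task && exit 0"]
```

If the container exits 0, the pod shows Completed and the job shows 1/1; any non-zero exit triggers the retry behavior described above.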

01:20:01.120 --> 01:20:07.080
Let's say the nature of application that you are going to run here is some kind of.

01:20:09.680 --> 01:20:10.980
Database migration script.

01:20:11.040 --> 01:20:15.080
If you deploy the same as a ReplicaSet with replicas as one,

01:20:15.600 --> 01:20:16.740
The pod will run.

01:20:17.320 --> 01:20:19.920
It will run to its entirety and then it will exit.

01:20:19.940 --> 01:20:21.680
What replica set will do.

01:20:22.120 --> 01:20:22.840
When it exits.

01:20:24.880 --> 01:20:26.240
Its running count becomes zero.

01:20:26.780 --> 01:20:30.160
So it's immediately restarted or it will create a replacement.

01:20:31.120 --> 01:20:32.020
This will again exit.

01:20:32.620 --> 01:20:33.580
This will again execute.

01:20:34.120 --> 01:20:38.400
Which means you end up running the migration again and again on the same data.

01:20:38.700 --> 01:20:40.000
So it's not a valid case here.

01:20:40.900 --> 01:20:48.600
Because no matter if it exits with the success code or failure code, the ReplicaSet will always restart it.

01:20:48.600 --> 01:20:51.220
Because it needs to have one copy running always.

01:20:52.160 --> 01:20:53.900
That's the nature of the replica set.

01:20:54.380 --> 01:20:56.220
But that is not the case with the job.

01:20:56.620 --> 01:20:58.600
If you create a job it will also create a pod.

01:20:59.640 --> 01:21:01.840
If this pod successfully completed.

01:21:02.920 --> 01:21:09.780
What marks successful completion? From within the application, it should exit with an

01:21:10.180 --> 01:21:14.420
exit code zero. Then it will be marked Completed; the job is completed.

01:21:14.900 --> 01:21:19.340
If it fails, then restart — only in the failure scenario is it going to restart.

01:21:20.340 --> 01:21:25.560
Restart or create a replacement until it meets the successful completion count.

01:21:26.560 --> 01:21:32.920
So which means you can actually set the completions count while creating the job.

01:21:34.100 --> 01:21:39.800
You can specify two properties one is how many completions you want.

01:21:40.680 --> 01:21:43.360
Let's say you want 10 completion successful completions.

01:21:43.360 --> 01:21:49.760
Let's say we built a key-generator application.

01:21:50.360 --> 01:21:53.840
Every successful run will give you 10 keys.

01:21:53.840 --> 01:21:56.160
So you are looking for 100 keys.

01:21:56.160 --> 01:22:00.680
So you are looking for 10 completions — 10 times 10 is going to give you 100.

01:22:01.360 --> 01:22:05.940
So what job will do when you create it will be marked as something like this.

01:22:05.940 --> 01:22:10.980
I want 10 completions, and the first pod will run and successfully complete.

01:22:10.980 --> 01:22:16.840
So 1 slash 10; the second pod will run, 2 slash 10, 3 slash 10, and then 10 slash 10.

01:22:17.760 --> 01:22:19.360
So it will go one by one.

01:22:19.560 --> 01:22:23.320
So if you want to speed up this process you can set parallelism.

01:22:25.400 --> 01:22:31.700
Parallelism to 5, which means at a time you will have five pods running.

01:22:32.380 --> 01:22:35.220
So which means in a moment five will be marked completed.

01:22:35.220 --> 01:22:45.600
So 5 slash 10; in the next iteration another five will run, or at any point in time you will have five pods running towards meeting this successful completion count.

01:22:45.820 --> 01:22:52.600
So by the time you receive 10 completions, you have 10 pods in Completed state, and the job will be marked completed.
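
The completions-plus-parallelism variant being described could be sketched like this (the name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: key-generator
spec:
  completions: 10   # the job is done after 10 successful pod completions
  parallelism: 5    # run up to 5 pods at a time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: keygen
        image: busybox
        command: ["sh", "-c", "echo generating 10 keys"]
```

Watching this with `kubectl get jobs -w`, the COMPLETIONS column climbs from 0/10 towards 10/10, with at most five pods active at once.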

01:22:52.600 --> 01:23:00.780
One use case where we will use this job heavily is work queues — to fetch items from a centralized work queue.

01:23:01.000 --> 01:23:05.420
Let's say you have a work queue multiple producers are producing messages to the queue.

01:23:06.960 --> 01:23:14.220
And let's say every day 1 a.m. you want to run a job that's going to read the messages from the queue and process it.

01:23:14.280 --> 01:23:18.540
So the consumer application right that you can run it as a job.

01:23:18.540 --> 01:23:23.820
I can run a job, and here I can specify parallelism as three.

01:23:25.580 --> 01:23:28.300
And completions it will vary isn't it.

01:23:28.340 --> 01:23:34.580
Maybe today you have a thousand messages, yesterday you had 20,000, and the day before yesterday only 50 messages.

01:23:34.680 --> 01:23:35.960
So completions we don't have.

01:23:36.420 --> 01:23:39.000
So if you set only parallelism.

01:23:40.180 --> 01:23:41.980
And leave the completion empty.

01:23:43.380 --> 01:23:45.000
That is the use case for work queue job.

01:23:45.000 --> 01:23:49.480
So which means this job will execute until the queue becomes empty.

01:23:49.700 --> 01:23:53.900
Once the queue becomes empty, then every pod will be marked completed and the job will be marked completed.
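
The work-queue variant sets only parallelism and omits completions entirely; a sketch, with an illustrative consumer image name:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queue-consumer
spec:
  parallelism: 3   # three consumer pods at a time
  # no completions field: the job completes once the pods
  # exit successfully after the queue is drained
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: consumer
        image: my-consumer:latest   # illustrative: exits 0 when the queue is empty
```

Here each consumer is responsible for detecting the empty queue and exiting with code zero; once any pod succeeds and the rest finish, the job is complete.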

01:23:57.360 --> 01:24:01.960
OK. So basically I specified three scenarios for you under the job.

01:24:02.280 --> 01:24:06.620
First scenario is excuse me.

01:24:06.960 --> 01:24:11.480
First scenario is: you are going to create a job that's going to create one pod.

01:24:11.480 --> 01:24:17.000
So initially when you create it, it will be zero slash one, as one completion is expected.

01:24:17.180 --> 01:24:18.840
We didn't receive any completion.

01:24:20.140 --> 01:24:25.620
And after a while if it is successfully completed this will become one slash one and all good.

01:24:25.740 --> 01:24:28.260
This pod is marked as Completed.

01:24:28.480 --> 01:24:30.580
Status will be set to Completed; the job is completed.

01:24:32.040 --> 01:24:38.000
That is one. Second is the same where we play with parallelism and completions.

01:24:38.000 --> 01:24:43.600
You have a fixed completion count to reach, with this much parallelism.

01:24:44.460 --> 01:24:50.140
And the third is you are going to create a job by giving only parallelism.

01:24:50.160 --> 01:24:57.380
You will give no completions. So this is the use case for working on a centralized queue.

01:24:58.100 --> 01:25:01.760
So three kinds of three variants of job.

01:25:03.460 --> 01:25:05.900
OK. So later you can try this.

01:25:05.900 --> 01:25:10.740
I'm leaving it up to you to try the job — and a job is also going to create a pod.

01:25:11.220 --> 01:25:14.160
As I mentioned you can see the status is getting completed.

01:25:15.780 --> 01:25:20.840
These are the three types that I mentioned: non-parallel jobs, which means only one pod is started.

01:25:22.120 --> 01:25:25.220
And the job is completed as soon as that pod is successful.

01:25:26.440 --> 01:25:30.860
Parallel job means you are going to specify the completions.

01:25:34.680 --> 01:25:38.320
And parallel jobs with a work queue, which means you don't specify completions.

01:25:38.320 --> 01:25:40.440
We will specify only parallelism.

01:25:44.700 --> 01:25:48.800
OK. There are some other concepts here that you can look at later.

01:25:49.000 --> 01:25:50.680
So once a job is completed.

01:25:52.340 --> 01:25:55.540
It will be there in the cluster for around let's say five minutes.

01:25:56.100 --> 01:25:59.480
And this is also a configurable property — time to live.

01:25:59.480 --> 01:26:04.820
So if it is successfully completed it will be around for five minutes.

01:26:04.820 --> 01:26:09.980
And then it will be cleaned up automatically which means this will be deleted.

01:26:10.080 --> 01:26:12.980
Deleting the job will also delete all the pods that it created.
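
The automatic clean-up being described is controlled by the Job's TTL-after-finished field; a sketch (name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: short-task
spec:
  ttlSecondsAfterFinished: 300  # delete the Job (and its pods) 5 minutes after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]
```

If ttlSecondsAfterFinished is not set, finished Jobs stay around until you delete them yourself.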

01:26:13.900 --> 01:26:16.980
OK. Any question on the job.

01:26:18.480 --> 01:26:21.560
Is it clear the use case and how it behaves.

01:26:23.000 --> 01:26:27.060
How it is different from the existing resource we discussed.

01:26:29.460 --> 01:26:30.620
Yes. No.

01:26:33.060 --> 01:26:35.040
If it is clear I will skip the hands on.

01:26:35.120 --> 01:26:38.640
If you want me to try I can show some hands on the job.

01:26:39.080 --> 01:26:40.440
OK. Thanks.

01:26:46.560 --> 01:26:47.000
OK.

01:26:51.900 --> 01:26:52.340
OK.

01:26:52.420 --> 01:26:55.380
OK. I think if it is technically clear, it means all good.

01:26:55.680 --> 01:26:58.680
I have hands-on in my file as well as the documentation.

01:26:58.920 --> 01:27:01.280
I'm leaving it up to you to look into it.

01:27:01.440 --> 01:27:06.040
The reason why I'm skipping the job part is because it's not part of the CKA.

01:27:06.040 --> 01:27:10.840
But I heard from two participants that there was a question related to CronJob.

01:27:11.720 --> 01:27:16.520
So you need to first know what a Job means so that you will understand CronJob.

01:27:16.520 --> 01:27:18.060
So I covered that part right.

01:27:18.620 --> 01:27:19.920
That's why you try it.

01:27:19.920 --> 01:27:24.260
And if you have any questions or challenges we can discuss about that tomorrow.

01:27:24.560 --> 01:27:26.100
No problem. OK.

01:27:26.680 --> 01:27:29.460
So CronJob — it's the same as a Job.

01:27:29.520 --> 01:27:31.220
But this comes with the schedule.

01:27:32.020 --> 01:27:37.520
If you are coming from a Linux background, we know the crontabs, right?

01:27:37.520 --> 01:27:42.180
You schedule something like: every Monday, every one hour, every one minute.

01:27:43.560 --> 01:27:45.520
Something like that. So we will schedule.

01:27:45.780 --> 01:27:47.020
We want that to execute.

01:27:47.400 --> 01:27:52.360
We want we have a piece of logic that we want to execute every Monday morning 8 a.m.

01:27:53.000 --> 01:27:57.600
Right. The same way you can define a cron job and you can tell.

01:27:57.600 --> 01:28:02.000
Hey I want to run this cron job every 30 seconds.

01:28:03.120 --> 01:28:06.960
And the cron job specification you can actually provide a job specification.

01:28:07.260 --> 01:28:11.920
So which means the cron job will create a job.

01:28:12.960 --> 01:28:18.820
And after 30 seconds it will create another job, and again another job.

01:28:19.660 --> 01:28:25.300
Another job — which means these jobs are going to create the pods that are going to do those short-lived activities

01:28:25.900 --> 01:28:32.300
and be marked complete. So every 30 seconds you will have a job created by this.

01:28:33.740 --> 01:28:39.360
OK. It's like cron job is sitting on top of the job and then creating the jobs.

01:28:41.040 --> 01:28:45.500
Jobs are going to create the pods, and those pods are going to execute some short-lived activities.

01:28:46.580 --> 01:28:49.820
OK. So here you can see that we provide the job template.

01:28:50.620 --> 01:28:57.520
Kind is CronJob, with a job template, and then the schedule — which means every minute.

01:28:58.160 --> 01:28:59.980
So a job will be created every minute.

01:28:59.980 --> 01:29:03.260
That will run a container that will print hello from Kubernetes.

01:29:04.420 --> 01:29:09.240
OK. It's something that comes with the schedule. It's going to create the job.
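
The every-minute example being described is close to the CronJob sample in the Kubernetes documentation; a sketch (the name is illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"       # standard cron syntax: every minute
  jobTemplate:                # a full Job spec nested inside
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "echo Hello from Kubernetes"]
```

Each minute the controller creates a new Job object from the jobTemplate; `kubectl get jobs` shows them accumulating, each one completing after its pod prints the message.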

01:29:12.640 --> 01:29:16.240
OK. So it's even the same scenario that we discussed here.

01:29:16.780 --> 01:29:23.460
If you want to spin up a job to run every day 1 a.m. to finish the queue you can create a cron job.

01:29:23.520 --> 01:29:28.680
And specify the job specification so that every day 1 a.m. it will create the job object.

01:29:28.800 --> 01:29:32.600
That's going to read all the messages from the queue and process them, and then it will be marked complete.

01:29:33.740 --> 01:29:38.000
So once a job is marked completed, there is nothing like restarting the same job.

01:29:38.000 --> 01:29:45.100
You need to spin up a new job to do the processing for a new set of items.

01:29:49.160 --> 01:29:52.300
OK. Job cron job.

01:29:53.300 --> 01:29:58.700
All right. So starting from the morning.

01:29:59.240 --> 01:30:05.200
The resources that we discussed is we first started with service and then we discussed several types within it.

01:30:05.200 --> 01:30:08.660
And then ingress. That is one resource that we discussed.

01:30:09.420 --> 01:30:18.400
And then we discussed on HPA, VPA, CA, all related to auto scaling.

01:30:18.980 --> 01:30:22.300
And then we discussed on config map, secrets.

01:30:24.380 --> 01:30:30.580
And then after the lunch break we started with PV, PVC and storage class.

01:30:31.780 --> 01:30:35.340
And then we discussed the DaemonSet.

01:30:36.340 --> 01:30:38.620
And then on StatefulSet.

01:30:40.840 --> 01:30:45.860
And then we just completed job cron job. Good job guys.

01:30:46.080 --> 01:30:49.040
We discussed many resources today. Good job. Good job.

01:30:50.100 --> 01:30:56.780
So with this, momentarily, we are going to stop from the resources perspective.

01:30:56.780 --> 01:31:05.360
Because this pretty much covered all the resources that we used to deploy the application and manage it.

01:31:05.860 --> 01:31:10.060
Right. So now we are going to focus a bit on a security gate.

01:31:10.140 --> 01:31:16.420
Let me repeat that. We are going to talk about a security gate because in your cluster you have three nodes.

01:31:16.900 --> 01:31:19.760
And in your master node you have many components running.

01:31:19.760 --> 01:31:27.600
And one component, which is the important one, is the API server that exposes your cluster as an API.

01:31:27.900 --> 01:31:32.260
And everyone is accessing through this only.

01:31:32.600 --> 01:31:40.040
Right. So if I zoom in on this API server part — let's say this is the API server.

01:31:44.020 --> 01:31:53.360
Whatever request comes in to the API server — let it come from any of you or from me —

01:31:53.760 --> 01:31:59.040
all the requests go through three security gates.

01:32:00.300 --> 01:32:02.040
Gate one, gate two and gate three.

01:32:02.340 --> 01:32:06.360
Three checks like how we pass through the airport.

01:32:07.100 --> 01:32:09.580
Security check and then customs check, and so on.

01:32:09.760 --> 01:32:15.500
Similarly, if your request fails at one of the gates, you will be rejected.

01:32:15.580 --> 01:32:17.200
Your request won't be processed.

01:32:17.600 --> 01:32:20.460
So let's say the processing logic resides here.

01:32:21.300 --> 01:32:25.740
All within the API server — all the three things happen within the API server component only.

01:32:27.320 --> 01:32:31.280
So the first security gate is authentication.

01:32:31.280 --> 01:32:34.920
Whether he is a valid user.

01:32:35.560 --> 01:32:37.240
Be the BMW employee.

01:32:38.140 --> 01:32:40.000
Second check is BMW or Benz.

01:32:40.160 --> 01:32:43.340
Sorry, I'm confused. I was looking for Benz.

01:32:43.500 --> 01:32:44.660
That's right. This is the BMW.

01:32:44.920 --> 01:32:45.840
Okay. Authentication.

01:32:46.560 --> 01:32:48.560
And the second gate is authorization.

01:32:49.460 --> 01:32:50.860
Let's say this user.

01:32:51.880 --> 01:32:54.720
And he is trying to.

01:32:54.780 --> 01:32:56.320
Perform some operation.

01:32:56.320 --> 01:32:57.320
kubectl

01:32:58.020 --> 01:32:59.360
Get secrets.

01:33:00.520 --> 01:33:02.240
He's trying to view the secrets.

01:33:03.840 --> 01:33:05.020
His request goes here.

01:33:05.580 --> 01:33:11.520
And he's checked whether he's an employee of your organization.

01:33:11.960 --> 01:33:12.720
Credential supplied.

01:33:12.840 --> 01:33:13.440
All good.

01:33:13.720 --> 01:33:15.360
So he first passed the first gate.

01:33:16.100 --> 01:33:17.460
Second gate is authorization.

01:33:18.420 --> 01:33:24.740
Whether he is entitled to perform the get operation on secrets.

01:33:24.740 --> 01:33:26.740
Whether he can view the secrets or not.

01:33:26.920 --> 01:33:28.760
Maybe only managers can view.

01:33:30.200 --> 01:33:33.880
Or only operations people can view not the developers.

01:33:33.980 --> 01:33:36.600
Maybe you have some kind of those checks.

01:33:36.920 --> 01:33:39.800
So that is that will happen at the authorization layer.

01:33:40.940 --> 01:33:47.520
And if he has those permissions, let's say he will move to the third layer that is admission controller.

01:33:49.480 --> 01:33:54.180
So as a cluster administrator, this is the layer where we can have more control.

01:33:54.840 --> 01:33:57.340
So we can do mutation.

01:33:59.020 --> 01:34:00.260
And validation.

01:34:01.100 --> 01:34:03.960
I'm going to give an example of what it means.

01:34:04.720 --> 01:34:04.840
Okay.

01:34:05.180 --> 01:34:05.960
Three gates.

01:34:06.560 --> 01:34:07.280
Authentication.

01:34:07.880 --> 01:34:08.600
Authorization.

01:34:08.900 --> 01:34:09.860
Admission controller.

01:34:10.300 --> 01:34:13.460
After successfully passing through all these three gates.

01:34:13.460 --> 01:34:18.900
The request will be considered for processing by that.

01:34:21.780 --> 01:34:22.880
By the kube-apiserver.

01:34:23.000 --> 01:34:23.880
It's okay.

01:34:24.620 --> 01:34:29.200
So when it comes to authentication in Kubernetes,

01:34:30.340 --> 01:34:34.180
We don't have any user management or group management.

01:34:34.220 --> 01:34:39.020
Those things are externalized, which means you may already have a system.

01:34:39.140 --> 01:34:41.200
You can take it and then integrate it with Kubernetes.

01:34:41.200 --> 01:34:43.940
For example, your Active Directory.

01:34:44.980 --> 01:34:47.180
Or if you have a webhook.

01:34:48.800 --> 01:34:50.920
Or with the client certificates.

01:34:51.800 --> 01:34:52.400
Bootstrap.

01:34:52.640 --> 01:34:53.120
Token.

01:34:53.240 --> 01:34:54.180
Service account tokens.

01:34:54.420 --> 01:34:55.480
OpenID Connect.

01:34:56.660 --> 01:34:57.960
Are you aware?

01:34:57.960 --> 01:34:59.400
AWS IAM.

01:35:00.260 --> 01:35:00.940
Azure AD.

01:35:01.900 --> 01:35:07.060
So in Kubernetes, we don't have the concept of managing users or creating new users, nothing like that.

01:35:07.060 --> 01:35:10.440
You use an existing system: while bootstrapping the API server,

01:35:10.480 --> 01:35:14.440
you need to specify which authentication mechanism you are going to use.

01:35:15.760 --> 01:35:18.460
And then you give the configuration related to that.

01:35:18.780 --> 01:35:22.100
So that whenever a request comes in, the API server will use that mechanism.
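As a sketch, the authentication mechanism is selected through kube-apiserver flags. The flag names below are real; the file paths and issuer URL are purely illustrative:

```shell
# kube-apiserver authentication flags (flag names are real; values are examples)
kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --oidc-issuer-url=https://login.example.com \
  --oidc-client-id=kubernetes \
  --authentication-token-webhook-config-file=/etc/kubernetes/auth-webhook.yaml
```

You would enable only the mechanisms your cluster actually uses; several can be active at once, and the first one that authenticates the request wins.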

01:35:24.260 --> 01:35:28.980
And the authentication provider is going to give a response back to the API server.

01:35:29.820 --> 01:35:31.440
If the authentication is successful.

01:35:31.920 --> 01:35:34.480
If it is successful, it returns some data about the user.

01:35:34.480 --> 01:35:37.320
Like the group that he is part of.

01:35:37.360 --> 01:35:38.320
And other user details.

01:35:39.340 --> 01:35:44.280
OK, so we don't have much to discuss with respect to authentication because.

01:35:44.380 --> 01:35:48.940
Whatever you already have, you just integrate it with Kubernetes.

01:35:49.680 --> 01:35:55.980
OK, so in our case, let's say you integrated your organization's Active Directory.

01:35:56.420 --> 01:35:58.340
And Evan is a valid user.

01:35:58.700 --> 01:36:01.260
So it will pass through the first gate successfully.

01:36:02.640 --> 01:36:07.160
And the second gate, the request goes to the second gate.

01:36:07.880 --> 01:36:10.040
That is the authorization gate.

01:36:11.040 --> 01:36:19.180
So here in this gate, we are going to check whether Evan can perform the get secret operation or not.

01:36:19.340 --> 01:36:20.260
In Kubernetes,

01:36:21.600 --> 01:36:28.920
we will use a concept called RBAC, role-based access control.

01:36:28.920 --> 01:36:33.740
So this came all the way from OpenShift.

01:36:33.740 --> 01:36:37.820
They contributed this concept to the Kubernetes community.

01:36:38.100 --> 01:36:39.440
Role based access control.

01:36:40.100 --> 01:36:41.520
This is pretty straightforward.

01:36:42.460 --> 01:36:49.280
So I'm going to explain and we are going to do a quick hands-on, because you will see a couple of questions on the exam on RBAC.

01:36:49.600 --> 01:36:57.520
Just list down all the subjects or resources that we discussed and then the verbs that we discussed.

01:36:57.520 --> 01:37:01.520
For example, the verbs are get, create.

01:37:02.680 --> 01:37:02.860
Logs.

01:37:03.840 --> 01:37:04.120
Exec.

01:37:04.380 --> 01:37:04.980
Describe.

01:37:07.620 --> 01:37:09.300
What are the verbs that we discussed?

01:37:09.820 --> 01:37:10.420
Delete.

01:37:11.680 --> 01:37:13.120
List down all the verbs.

01:37:14.220 --> 01:37:15.760
List down all the resources.

01:37:16.760 --> 01:37:17.360
Pods.

01:37:17.500 --> 01:37:18.040
Secrets.

01:37:18.480 --> 01:37:19.320
Config map.

01:37:19.980 --> 01:37:20.580
Deployment.

01:37:21.020 --> 01:37:21.660
Replica set.

01:37:22.040 --> 01:37:24.080
List down all the resources that we discussed.

01:37:24.220 --> 01:37:24.860
That's it.

01:37:24.860 --> 01:37:28.100
With this, we are going to do a simple thing.

01:37:28.220 --> 01:37:30.260
What is that simple thing?

01:37:30.640 --> 01:37:34.180
First, we are going to create.

01:37:35.400 --> 01:37:36.520
A role.

01:37:37.660 --> 01:37:40.300
role.yaml, kind: Role.

01:37:42.440 --> 01:37:47.700
And under the specification, here you are going to give a name for your role.

01:37:48.180 --> 01:37:50.300
Let's say the name is.

01:37:50.680 --> 01:37:52.140
pod-reader.

01:37:52.140 --> 01:37:58.100
And under the specification, you are going to list the allowed verbs:

01:37:59.700 --> 01:38:00.420
Get.

01:38:01.620 --> 01:38:02.340
Describe.

01:38:03.040 --> 01:38:03.700
That's it.

01:38:04.100 --> 01:38:06.520
Two verbs on the resource.

01:38:07.300 --> 01:38:07.580
Pods.

01:38:08.880 --> 01:38:20.260
So what it means is you created a role called pod-reader: if someone has that role, he can perform only get and describe operations on pods.
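A minimal sketch of that role in YAML. The names follow the narration; note that in the RBAC API the verbs are things like get, list, and watch — "describe" is a kubectl convenience built on top of get and list, not an API verb:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default        # Roles are namespaced
  name: pod-reader
rules:
- apiGroups: [""]           # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

By itself this grants nothing; it has to be bound to a subject with a RoleBinding.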

01:38:20.260 --> 01:38:21.140
This is just the role.

01:38:21.140 --> 01:38:23.460
This role is not yet attached to Evan.

01:38:23.880 --> 01:38:25.560
To Evan or anyone else.

01:38:26.340 --> 01:38:26.900
Okay.

01:38:28.100 --> 01:38:29.860
Once you have the role defined.

01:38:30.900 --> 01:38:34.700
All you are going to do is create one more YAML file.

01:38:35.800 --> 01:38:36.820
rolebinding.yaml.

01:38:36.920 --> 01:38:42.240
You are going to bind the role to a specific user or to a group.

01:38:43.480 --> 01:38:45.760
Group information is also not maintained in Kubernetes.

01:38:45.940 --> 01:38:48.740
The authentication provider gives the group information.

01:38:48.740 --> 01:38:53.620
So you can also bind the role to a specific group.

01:38:54.100 --> 01:38:55.200
If the user is part of the group,

01:38:55.640 --> 01:38:57.360
then the role will take effect.

01:38:58.440 --> 01:39:01.700
So same thing: kind: RoleBinding.

01:39:03.360 --> 01:39:07.100
Under the specification, you will do something like:

01:39:07.340 --> 01:39:10.420
apply this role to the user Evan.

01:39:11.420 --> 01:39:12.200
That can be user.

01:39:12.380 --> 01:39:13.080
It can be group.

01:39:13.080 --> 01:39:14.520
Here it's the user Evan.

01:39:14.600 --> 01:39:20.620
So which means this is the one that actually binds this role to the user called Evan.

01:39:21.060 --> 01:39:25.540
So which means Evan can perform only get and describe on pods.
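Sketched as YAML, with the user and role names taken from the example above (the binding's own name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods           # illustrative binding name
  namespace: default
subjects:
- kind: User                # could also be Group or ServiceAccount
  name: evan                # the username as reported by the authenticator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader          # the Role defined earlier
  apiGroup: rbac.authorization.k8s.io
```

The roleRef is immutable after creation; to point the binding at a different role you delete and recreate it.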

01:39:26.800 --> 01:39:30.820
If he tries to delete the deployment or do anything else, he won't be able to do it.

01:39:30.860 --> 01:39:36.300
His request will simply be rejected because he is not authorized to do that.

01:39:39.500 --> 01:39:40.060
Okay.

01:39:40.060 --> 01:39:42.580
That's how we do the role based access control.

01:39:42.620 --> 01:39:49.580
So this Role and RoleBinding apply at the namespace level.

01:39:49.920 --> 01:39:51.660
Only in the namespace where you create them.

01:39:52.300 --> 01:39:59.760
But if you want to create something that should apply at the entire cluster level, then it is a ClusterRole.

01:40:00.700 --> 01:40:05.700
And ClusterRoleBinding, which means this role and role binding apply to the entire

01:40:05.700 --> 01:40:06.240
cluster.

01:40:09.480 --> 01:40:11.820
Roles, RBAC.

01:40:15.200 --> 01:40:16.080
Okay.

01:40:16.240 --> 01:40:18.700
One sample RBAC role.

01:40:19.280 --> 01:40:21.480
pod-reader, for pods.

01:40:22.500 --> 01:40:24.160
Perform only get, watch, and list.

01:40:26.820 --> 01:40:29.140
And another example for cluster role.

01:40:29.140 --> 01:40:34.500
The namespace is skipped here because it's at the cluster level.

01:40:35.000 --> 01:40:38.240
secret-reader on secrets: get, watch, list.

01:40:38.240 --> 01:40:39.440
Only these three operations.

01:40:40.000 --> 01:40:45.620
And if I want to bind that role to a user, then RoleBinding.

01:40:47.100 --> 01:40:48.300
User Evan.

01:40:49.060 --> 01:40:50.700
roleRef: pod-reader.

01:40:51.140 --> 01:40:53.240
The same goes for ClusterRoleBinding also.

01:40:53.340 --> 01:40:54.200
No difference.

01:40:54.200 --> 01:41:00.600
So here we are binding it to a group with the name manager: instead of the user Evan, it's the group manager.
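The cluster-scoped pair described here can be sketched in one file. Names follow the narration (secret-reader, group manager); the binding name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader       # no namespace: ClusterRoles are cluster-scoped
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global # illustrative
subjects:
- kind: Group
  name: manager             # group name as reported by the authenticator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole can also be bound with a plain RoleBinding to grant its rules in one namespace only.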

01:41:05.500 --> 01:41:06.140
Okay.

01:41:06.840 --> 01:41:09.500
So that's it about the role based access control.

01:41:09.540 --> 01:41:18.960
So at our security gate, based on the RBAC permissions that Evan has, let's say he is allowed to pass because he has the permission.

01:41:18.960 --> 01:41:21.960
So his request will pass

01:41:22.500 --> 01:41:25.100
the second gate and then go to the third gate.

01:41:25.120 --> 01:41:26.780
That is admission control.

01:41:29.920 --> 01:41:31.000
Admission control.

01:41:31.240 --> 01:41:36.260
So which means this is just configuration of the API server itself.

01:41:37.440 --> 01:41:44.600
So by default, Kubernetes provides some 30-plus admission controllers.

01:41:46.260 --> 01:42:01.460
30-plus admission controllers, which means if you aren't enabling the right set of admission controllers, your API server is not properly configured.

01:42:02.380 --> 01:42:07.760
So you need to enable some set of admission controllers for your API server to function properly.

01:42:08.100 --> 01:42:11.460
Out of the box, there are many features that are just disabled.

01:42:12.460 --> 01:42:16.340
And you need to enable those admission controllers if you want it.

01:42:16.520 --> 01:42:33.280
For example, say Evan is submitting a pod specification, and in the pod specification he didn't specify resource requests and limits.

01:42:34.620 --> 01:42:38.080
He's just submitting it without resource requests and limits in it.

01:42:38.080 --> 01:42:42.040
So at the admission control layer, we have an admission controller.

01:42:42.140 --> 01:42:47.500
If you enable it, that will simply reject the request, saying, hey, include the resource requests and limits.

01:42:47.500 --> 01:42:49.160
I can't admit you.

01:42:49.380 --> 01:42:56.120
I don't give admission for you in my cluster because you are not giving these required details.

01:42:56.220 --> 01:43:02.320
OK, so you can do validation like this: if it passes, it's admitted; if not, it's rejected.
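The reject-if-no-limits behavior described here maps to the LimitRanger admission controller, which enforces a per-namespace LimitRange object. A sketch with illustrative values — the max block rejects oversized pods, while the default blocks mutate pods that omit requests or limits:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits    # illustrative
  namespace: default
spec:
  limits:
  - type: Container
    max:                    # containers exceeding these are rejected
      cpu: "1"
      memory: 512Mi
    defaultRequest:         # injected (mutation) when requests are omitted
      cpu: 100m
      memory: 128Mi
    default:                # injected (mutation) when limits are omitted
      cpu: 500m
      memory: 256Mi
```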

01:43:02.320 --> 01:43:11.260
In some cases, it will also mutate. Assume he submitted a request,

01:43:11.340 --> 01:43:15.220
But it doesn't have any namespace information in it.

01:43:15.900 --> 01:43:22.600
So at the admission control layer, your admission controller can manipulate your request.

01:43:23.360 --> 01:43:28.040
It will just set the namespace to default.

01:43:29.740 --> 01:43:31.700
And then it will be considered for processing.

01:43:31.700 --> 01:43:34.160
So mutation also will happen.

01:43:34.940 --> 01:43:43.440
So what you submit is not what you are going to see, because in between an administrator can mutate it at the admission control level.

01:43:45.700 --> 01:43:47.320
Let me give you one valid example here.

01:43:47.440 --> 01:43:56.700
Let's say multiple teams are submitting applications to your cluster and you decided to run a sidecar along with every application.

01:43:56.700 --> 01:44:02.460
And this is going to do some kind of helpful things for you to manage the cluster.

01:44:03.720 --> 01:44:08.960
So when the application teams submit, they submit a specification with only their container definition.

01:44:09.520 --> 01:44:17.280
But here in the admission control layer level, you can include one more container in their specification and then submit for the processing.

01:44:17.800 --> 01:44:21.020
OK, so as administrator, you have more control of this.

01:44:21.720 --> 01:44:24.080
You can do validation or mutation.

01:44:24.080 --> 01:44:28.760
So if you look at the documentation, there are many

01:44:30.400 --> 01:44:31.440
admission controllers.

01:44:32.780 --> 01:44:34.260
Whichever is valid for you,

01:44:34.260 --> 01:44:36.080
you can simply enable it.

01:44:37.320 --> 01:44:38.340
At the API server level.

01:44:38.360 --> 01:44:39.420
So this is how it is.

01:44:39.440 --> 01:44:42.980
So we enable or disable them by the names of those admission controllers.
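Concretely, this is done with kube-apiserver flags. On a kubeadm cluster they live in the static pod manifest; the flag names below are real, while the plugin lists are just an illustration:

```shell
# In a kubeadm cluster: /etc/kubernetes/manifests/kube-apiserver.yaml
# Flag names are real; the plugin lists here are illustrative.
kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,DefaultStorageClass \
  --disable-admission-plugins=AlwaysPullImages
```

A default set of plugins is already enabled; these flags add to or subtract from it.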

01:44:43.500 --> 01:44:46.080
If you want, you can also write your own admission controller.

01:44:46.620 --> 01:44:48.260
OK, let's say.

01:44:53.300 --> 01:44:55.960
You observed one behavior, right?

01:44:56.180 --> 01:45:04.340
You submitted a PVC without a storage class name, and by default the standard default storage class got

01:45:04.340 --> 01:45:12.720
picked. That behavior happens because in your cluster the DefaultStorageClass admission controller is enabled.

01:45:13.220 --> 01:45:15.480
If you disable it, then nothing will happen.

01:45:15.480 --> 01:45:16.480
If you submit it.

01:45:17.920 --> 01:45:18.620
Nothing will happen.

01:45:18.820 --> 01:45:20.600
It will mean no dynamic provisioning will happen.

01:45:20.620 --> 01:45:22.300
Nothing will get assigned to your PVC.

01:45:23.220 --> 01:45:25.820
So it's the API server functionality itself.

01:45:26.440 --> 01:45:32.420
So you need to enable the right set of admission controllers for your cluster to behave properly.
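The DefaultStorageClass behavior just described can be seen with a claim like this (a minimal sketch; the claim name is illustrative). With the plugin enabled, the cluster's default class is injected; with it disabled, storageClassName stays unset and the claim sits Pending:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim          # illustrative
spec:                       # note: no storageClassName given
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```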

01:45:33.440 --> 01:45:35.560
OK, you can refer this documentation.

01:45:35.720 --> 01:45:39.900
They have a definition for every

01:45:41.360 --> 01:45:43.900
admission controller. For example, NamespaceExists.

01:45:43.960 --> 01:45:45.260
This is an admission controller.

01:45:45.720 --> 01:45:49.660
It will check all requests on namespaced resources, other than Namespace itself.

01:45:50.400 --> 01:45:54.860
If the namespace reference from a resource doesn't exist, then the request is rejected.

01:45:56.360 --> 01:46:06.620
So if you are trying to submit something with a namespace that doesn't exist, this admission controller, if enabled, is the one that catches it.

01:46:06.960 --> 01:46:08.840
It will reject your request.

01:46:08.840 --> 01:46:12.040
OK, it's in the documentation.

01:46:12.220 --> 01:46:14.180
You can search for it.

01:46:14.880 --> 01:46:15.840
So three gates.

01:46:18.020 --> 01:46:21.420
Authentication, authorization, admission controller.

01:46:22.160 --> 01:46:28.660
But what is more important for you from the exam perspective is RBAC, role-based access control.

01:46:28.920 --> 01:46:34.820
So now we are going to do one quick hands-on for RBAC.

01:46:36.160 --> 01:46:38.820
You can expect two to three questions on RBAC.

01:46:40.940 --> 01:46:43.200
Let me give some example from the documentation.

01:46:44.760 --> 01:46:45.700
RBAC.

01:46:47.780 --> 01:46:52.060
OK, let's go with this Bitnami documentation, something that we can trust.

01:46:53.340 --> 01:46:53.440
Mm hmm.

01:46:54.180 --> 01:46:55.880
OK, let's try this one.

01:46:57.120 --> 01:46:59.180
Let me include this in the doc.

01:46:59.180 --> 01:47:00.900
I'll give you this URL.

01:47:04.960 --> 01:47:07.560
First of all, we are on minikube.

01:47:07.620 --> 01:47:08.660
Don't forget that.

01:47:08.800 --> 01:47:18.640
So in minikube, we need to just stop your minikube with minikube stop and then start it again with RBAC enabled.

01:47:21.680 --> 01:47:28.100
Enable RBAC in minikube and go directly to the use case 1 section.
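A sketch of those two commands. The --extra-config flag and the apiserver authorization-mode option are real; note that RBAC is already the default authorization mode on recent Kubernetes versions, so this mainly makes it explicit:

```shell
minikube stop
minikube start --extra-config=apiserver.authorization-mode=RBAC
```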

01:47:29.760 --> 01:47:32.760
Just try only use case one.

01:47:33.860 --> 01:47:36.860
That page has many use cases, but try only use case 1.

01:47:37.260 --> 01:47:40.180
So what you're going to do, you are going to create a user.

01:47:42.980 --> 01:47:43.500
Employee.

01:47:43.640 --> 01:47:46.460
And he's going to be part of a group, bitnami.

01:47:48.460 --> 01:47:54.300
And you're going to add the necessary RBAC policies so that the user named employee

01:47:56.380 --> 01:48:02.140
Can manage deployments only inside a specific namespace.

01:48:03.040 --> 01:48:04.480
A namespace called office.

01:48:04.700 --> 01:48:09.780
If you have another namespace, say prod, he won't be able to do anything in that namespace.

01:48:10.520 --> 01:48:19.700
For that, you are going to create a user, create a namespace, create a role, create a role binding; we are going to do it all.

01:48:19.700 --> 01:48:30.440
It has multiple steps: creating the namespace, creating the user credentials, creating the role, creating the role binding, and then finally verifying it.
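For the role and binding steps, the imperative kubectl commands are usually fastest in the exam. A sketch using the names from this use case (office, employee; the role name deployment-manager is illustrative); kubectl auth can-i lets you verify without switching users:

```shell
kubectl create namespace office
kubectl create role deployment-manager \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments -n office
kubectl create rolebinding deployment-manager-binding \
  --role=deployment-manager --user=employee -n office

# Verify the policy without logging in as the user:
kubectl auth can-i create deployments -n office --as=employee    # expect yes
kubectl auth can-i create deployments -n default --as=employee   # expect no
```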

01:48:31.080 --> 01:48:33.340
Okay, some ten commands or so.

01:48:34.400 --> 01:48:38.440
Let's take some five to 10 minutes and give this a try.

01:48:38.440 --> 01:48:40.000
Role-based access control.

01:48:41.900 --> 01:48:42.460
Go ahead, go ahead.

01:48:42.560 --> 01:48:43.880
The last topic of the day.

01:48:44.140 --> 01:48:46.720
Let's see who is going to get it working first.

01:48:47.520 --> 01:48:55.780
If you have already completed it, just let me know so that you can share your learnings with others.

01:49:12.240 --> 01:49:14.500
Okay, is there any error you are seeing?

01:49:15.000 --> 01:49:17.540
Is that an error that you are facing?

01:49:17.900 --> 01:49:18.800
Who is this by the way?

01:49:18.820 --> 01:49:20.780
So that I can look into your screen.

01:49:22.800 --> 01:49:22.980
Shahid.

01:49:26.480 --> 01:49:28.780
Okay, let me check.

01:49:37.340 --> 01:49:38.780
Thanks for that.

01:49:55.780 --> 01:49:56.540
Okay.

01:50:40.020 --> 01:50:41.420
Okay.

01:51:12.360 --> 01:51:13.760
Okay.

01:51:39.820 --> 01:51:41.220
Okay.

01:52:09.640 --> 01:52:11.040
Okay.

01:52:39.500 --> 01:52:40.900
Okay.

01:53:02.940 --> 01:53:03.620
Okay.

01:53:03.620 --> 01:53:04.120
Yes, please.

01:53:07.700 --> 01:53:08.120
Okay, okay.

01:53:08.240 --> 01:53:09.020
It's in.

01:53:09.320 --> 01:53:09.620
Yeah, yeah.

01:53:09.760 --> 01:53:12.460
Just modify the version.

01:53:12.600 --> 01:53:15.380
Look into the documentation and update this one.

01:53:16.800 --> 01:53:24.320
Maybe by the time they documented it, it was version v1beta1.

01:53:24.320 --> 01:53:27.360
I think right now it will be a different version.

01:53:27.400 --> 01:53:30.840
Let me check the version from the documentation for the Role.

01:53:31.180 --> 01:53:31.380
Right.

01:53:31.420 --> 01:53:32.320
Role, role binding.

01:53:37.360 --> 01:53:40.680
It's rbac.authorization.k8s.io,

01:53:40.840 --> 01:53:44.420
slash v1, no more beta.

01:53:47.600 --> 01:53:48.200
Okay.

01:54:07.320 --> 01:54:08.720
Okay.

01:54:27.900 --> 01:54:28.540
Perfect.

01:54:28.860 --> 01:54:29.100
Perfect.

01:54:29.100 --> 01:54:31.280
You completed that.

01:54:36.040 --> 01:54:36.080
Perfect.

01:54:36.080 --> 01:54:36.700
Perfect.

01:54:36.700 --> 01:54:37.580
Good job.

01:54:39.680 --> 01:54:46.280
Would you like to quickly share your learnings with everyone, on what you found, if you are comfortable? Okay.

01:54:54.320 --> 01:54:55.560
Okay.

01:55:16.320 --> 01:55:17.060
Perfect.

01:55:18.180 --> 01:55:18.260
Perfect.

01:55:18.420 --> 01:55:24.280
All right.

01:55:24.660 --> 01:55:26.360
Last two minutes.

01:55:26.960 --> 01:55:28.120
To wrap up this activity.

01:56:01.620 --> 01:56:03.020
Okay.

01:56:31.760 --> 01:56:33.160
Okay.

01:57:09.020 --> 01:57:09.820
Okay.

01:57:09.820 --> 01:57:11.580
Okay, time's up for this activity.

01:57:22.020 --> 01:57:26.740
Next, let's cover a little more theory.

01:57:26.960 --> 01:57:28.440
And then we are all set to end the training.

01:57:28.680 --> 01:57:33.060
We already did some kind of high level review on the resources that we covered.

01:57:34.320 --> 01:57:34.520
Right.

01:57:34.560 --> 01:57:36.920
So we've seen HPA, VPA, and so on.

01:57:37.080 --> 01:57:38.360
So I won't repeat this list.

01:57:38.680 --> 01:57:41.260
We just discussed about our back also.

01:57:41.980 --> 01:57:42.440
Okay.

01:57:42.440 --> 01:57:48.440
So there is last one last thing that I would like to discuss and then we can end the training.

01:57:49.600 --> 01:57:50.120
Is that okay?

01:57:50.200 --> 01:57:51.860
Or do you guys already feel

01:57:52.960 --> 01:57:54.560
it's already a lot for day two?

01:57:54.620 --> 01:57:56.440
What's your feedback?

01:57:57.720 --> 01:58:01.060
Shall we end or can we make use of the next 15 minutes?

01:58:01.340 --> 01:58:02.440
I'm leaving it up to you.

01:58:11.100 --> 01:58:11.840
Okay.

01:58:15.100 --> 01:58:15.840
Okay.

01:58:19.300 --> 01:58:19.460
Okay.

01:58:19.460 --> 01:58:24.980
So, by any chance, are you using Helm for packaging your applications?

01:58:25.840 --> 01:58:26.580
Helm.

01:58:30.480 --> 01:58:30.500
Okay.

01:58:31.340 --> 01:58:31.480
Okay.

01:58:32.060 --> 01:58:32.380
Okay.

01:58:32.380 --> 01:58:32.420
Okay.

01:58:32.420 --> 01:58:34.960
Everybody in the group already knows how to use Helm.

01:58:35.620 --> 01:58:35.640
Right.

01:58:35.800 --> 01:58:36.380
It's my understanding.

01:58:36.880 --> 01:58:37.200
Right.

01:58:39.560 --> 01:58:39.880
Okay.

01:58:40.040 --> 01:58:40.180
Okay.

01:58:40.200 --> 01:58:40.600
No problem.

01:58:40.780 --> 01:58:44.140
I'm just asking for the purpose of examination.

01:58:44.360 --> 01:58:45.180
Whatever you need to know for the exam,

01:58:45.180 --> 01:58:46.120
I will just cover it.

01:58:46.120 --> 01:58:50.540
It may be basics for most of you, but you can be under pressure in the exam too.

01:58:50.540 --> 01:58:50.840
Right.

01:58:50.900 --> 01:58:59.820
So if I want to just deploy one single service to Kubernetes, I need to create multiple YAML files.

01:58:59.820 --> 01:59:06.900
So based on the discussions that we had on day one and day two, first you need to create an YAML file for deployment.

01:59:07.940 --> 01:59:11.760
You need to create one YAML file for a service.

01:59:12.940 --> 01:59:14.380
I'll share my screen.

01:59:14.720 --> 01:59:16.040
I hope you are seeing it.

01:59:16.160 --> 01:59:19.480
Deployment and then service and ingress.

01:59:19.740 --> 01:59:22.180
You need to create one YAML.

01:59:22.180 --> 01:59:31.180
And if it is using configuration, a ConfigMap, or Secrets, one YAML for each.

01:59:32.100 --> 01:59:32.440
Right.

01:59:32.740 --> 01:59:37.160
So at a minimum, you need to create this many resources.

01:59:39.180 --> 01:59:45.000
And this is for version 1.1; now say you're going to deploy version 1.2.

01:59:46.000 --> 01:59:49.020
If it is using PV and PVC,

01:59:49.160 --> 01:59:50.460
we need to create those too.

01:59:51.340 --> 01:59:52.380
If it is 1.2,

01:59:52.600 --> 01:59:55.460
maybe changes are not required in all the YAML files.

01:59:55.640 --> 01:59:58.100
Maybe at least it will be required here.

01:59:58.400 --> 02:00:02.780
And based on some scenarios, it may be required somewhere in the config also.

02:00:03.040 --> 02:00:03.380
Correct.

02:00:04.040 --> 02:00:08.680
So for one single microservice, you need to maintain this many artifacts across different versions.

02:00:08.920 --> 02:00:11.480
So think of maintaining some 20 microservices.

02:00:11.480 --> 02:00:16.400
So this will already become a headache for you.

02:00:16.640 --> 02:00:16.740
Right.

02:00:17.780 --> 02:00:25.460
So we use tools like, for example, a package manager such as apt or apk, which you might have used.

02:00:26.180 --> 02:00:31.740
And then you simply run apt install with some tool name.

02:00:32.300 --> 02:00:35.800
And immediately all of the required things will get installed.

02:00:35.820 --> 02:00:37.620
And you can directly start using the tool.

02:00:38.620 --> 02:00:45.720
But in my case, if I ask you to install this application in Kubernetes,

02:00:46.580 --> 02:00:55.220
then first you need to create the ConfigMap and Secrets, then the PV and PVC, then the Deployment, and create the Service and Ingress.

02:00:57.000 --> 02:01:04.620
This can be simplified so that I can do something as simple as a single install command.

02:01:04.820 --> 02:01:10.620
That should take care of deploying all of these artifacts and the application should be up and running straight away.

02:01:11.080 --> 02:01:13.560
So that's where Helm shines.

02:01:14.720 --> 02:01:22.180
So you are going to package your Kubernetes application as Helm charts.

02:01:22.500 --> 02:01:26.440
So if I package the output that I'm going to get is a chart.

02:01:27.460 --> 02:01:37.440
And this chart will be maintained in the chart repository where all the application charts will be maintained.

02:01:38.360 --> 02:01:44.880
So if I execute a command like helm install with the chart name, it's going to download the chart from there.

02:01:44.880 --> 02:01:47.040
And then it's going to install all of these components.

02:01:47.740 --> 02:01:50.400
And set up all of these components in Kubernetes.

02:01:59.880 --> 02:02:05.720
So basically Helm is a package manager for Kubernetes.

02:02:08.360 --> 02:02:15.780
In the examination, you may be asked to quickly create a chart for your application.

02:02:17.440 --> 02:02:21.180
And we are going to try the commands tomorrow anyway.

02:02:22.320 --> 02:02:25.140
So what we need to do, there is a proper structure.

02:02:25.500 --> 02:02:32.460
We will create a folder called templates and then move all of these files into it.

02:02:34.780 --> 02:02:37.580
Move all of the files, and open them file by file.

02:02:38.320 --> 02:02:43.260
And you are going to templatize each one of them, which means you are going to open this file.

02:02:43.260 --> 02:02:48.920
Right now you are seeing an attribute like nginx 1.2.

02:02:50.180 --> 02:02:51.700
You will templatize this.

02:02:52.000 --> 02:02:56.260
You need to move out whatever components will change.

02:03:04.780 --> 02:03:09.200
So similarly, you are going to open every file and templatize it all.

02:03:09.200 --> 02:03:14.140
Which means at the end, all you have is templates only.

02:03:14.520 --> 02:03:19.420
So alongside the templates directory, you will have a values.yaml.

02:03:19.720 --> 02:03:22.140
So this is the one that's going to have all the values.

02:03:22.200 --> 02:03:31.200
For example, in values.yaml you will see an image key, and this will have the nginx 1.2.

02:03:31.580 --> 02:03:35.600
So whatever the values here, it's going to get populated into these templates.

02:03:36.320 --> 02:03:42.980
And also we will have a Chart.yaml that will have some information about the version and the name of the application.

02:03:43.740 --> 02:03:47.220
In addition to that, we will have some other couple of other things as well.
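Putting those pieces together, a minimal chart layout might look like this. The chart name and value keys are illustrative; the {{ .Values.image }} templating syntax and the Chart.yaml fields are standard Helm:

```yaml
# templates/deployment.yaml (fragment) -- the image is now templatized
spec:
  containers:
  - name: web
    image: "{{ .Values.image }}"
---
# values.yaml -- supplies the concrete values
image: nginx:1.2
---
# Chart.yaml -- identifies the chart
apiVersion: v2
name: myapp        # illustrative chart name
version: 0.1.0
```

At install time, Helm renders the templates with values.yaml (or with overrides passed via --set or -f) before sending the manifests to the cluster.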

02:03:48.160 --> 02:03:58.520
So once you have this proper folder structure, you are going to execute the helm package command against it.

02:03:58.640 --> 02:04:00.740
Then this is going to give you the chart.

02:04:01.940 --> 02:04:05.000
This is the one that is maintained in the chart repository.

02:04:05.620 --> 02:04:09.900
And this is the chart that you are going to search for and then install into the cluster.

02:04:13.620 --> 02:04:16.140
So Helm altogether is a different tool.

02:04:16.300 --> 02:04:18.540
It's not part of the core Kubernetes distribution.

02:04:19.120 --> 02:04:23.740
You can refer the site later, but tomorrow we will do a simple example also to package it.

02:04:23.740 --> 02:04:30.320
In the exam, they would expect us to package a Helm chart from the set of artifacts.

02:04:31.420 --> 02:04:35.180
So we should know the basic commands. As long as we know that, then we are all good.
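Those basic commands can be sketched like this; all of these are real Helm 3 subcommands, and the chart name myapp is illustrative:

```shell
helm create myapp            # scaffolds Chart.yaml, values.yaml, templates/
helm package myapp           # produces myapp-0.1.0.tgz -- the chart
helm install myapp ./myapp-0.1.0.tgz
helm list                    # show installed releases
helm uninstall myapp
```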

02:04:39.060 --> 02:04:43.360
I think it's enough for the Helm high level overview.

02:04:46.820 --> 02:04:53.800
Okay, that's it. We successfully completed day two.

02:04:55.860 --> 02:04:58.640
And I hope you guys learned a lot of things today.

02:04:59.040 --> 02:05:04.360
Do you have any specific feedback that you want me to address for day three?

02:05:05.260 --> 02:05:08.760
How about the pace in day two? Any specific feedback?

02:05:08.840 --> 02:05:10.280
Did you guys enjoy day two?

02:05:13.360 --> 02:05:16.140
Thank you.

02:05:17.720 --> 02:05:18.800
Welcome again.

02:05:45.420 --> 02:05:47.300
Good, good. Thanks, darling.

02:05:48.220 --> 02:05:51.220
Glad to hear that feedback. And, Francis?

02:05:53.960 --> 02:05:56.360
Thank you. And Colin.

02:06:35.000 --> 02:06:35.480
Okay.

02:06:35.580 --> 02:06:38.500
So this was the question that you asked about the morning, right?

02:06:38.700 --> 02:06:40.300
About dealing with the certificates.

02:06:40.300 --> 02:06:40.880
Am I right?

02:06:42.680 --> 02:06:44.340
Okay, okay.

02:06:44.580 --> 02:06:45.720
Okay, okay. This is noted.

02:06:46.560 --> 02:06:49.580
I will say I will check if I can share something with you tomorrow.

02:06:50.320 --> 02:06:53.260
Thank you, Colin. And over to you, Shahid.

02:07:11.480 --> 02:07:13.400
Perfect. Perfect. Thank you.

02:07:13.540 --> 02:07:14.640
And then Shalangan.

02:07:21.620 --> 02:07:24.020
Yep. Over to you, Shalangan. Yep.

02:07:30.660 --> 02:07:32.720
Thank you. Thank you so much.

02:07:32.840 --> 02:07:38.000
I'm really glad that this day really helped you guys and keep that excitement.

02:07:38.000 --> 02:07:41.220
Tomorrow it's going to be full of activity days.

02:07:41.300 --> 02:07:47.300
I will explain stuff, but you will have more activities to do, more from an exam perspective.

02:07:47.300 --> 02:07:50.340
So we will go through the CKA curriculum line by line.

02:07:50.580 --> 02:07:53.340
And we are going to try hands on for that.

02:07:53.480 --> 02:07:57.640
Okay. So gear up for tomorrow and enjoy the rest of your evening.

02:07:57.720 --> 02:07:59.600
Thank you.

02:08:01.400 --> 02:08:01.680
Bye.