United Arab Emirates - Certified Kubernetes Administrator (CKA) - exam preparation
--> Hello, welcome back. --> Let's wait a few more minutes. --> Alright, I think we have almost everyone, so we can start. --> The last topic we discussed before our lunch break was ConfigMaps and Secrets: --> what it takes to inject a ConfigMap as environment variables or as files; we discussed that in detail. --> Correct. So with that, that topic comes to an end. --> Now we are going to enter a new topic. --> In this topic we are going to discuss three resources: --> PersistentVolume, PersistentVolumeClaim, and StorageClass, or SC for short. --> PV, PVC and SC. Those are the three resources we are going to discuss, and then we will do a quick hands-on. --> Okay, so if you still remember, --> to define a volume in a pod specification, --> what do we do? --> It is a two-step process. --> Step one, we define a volume: --> we give the name of the volume and then the storage provider, --> say a hostPath or NFS, --> and then the specification related to that, right? --> That takes care of creating the volume. Then, within the container specification, --> we have volumeMounts, and there we specify the name of the volume --> and the path inside the container --> where we want it mounted; that goes into the mountPath, isn't it? --> So that is step one and step two. --> Now, if you look at it, --> defining, --> or creating, the volume is done in the pod specification. --> It lives in the pod specification; --> that is where we put that information. --> If tomorrow you want to change to a different provider, --> from NFS to hostPath, or to an Azure Disk or a GCE persistent disk, --> then you need to keep changing it here. --> Which means, even though nothing in the pod itself is affected, --> you need to apply this YAML again and again for the change to take effect, right? --> So the pod specification and the volume specification --> are tightly coupled: a change in one means a redeployment of both. --> And there is also no way to directly manage these mounted volumes. --> Is there a way I can run a "get volumes" and see them? Not really. --> So, to make this loosely coupled and to better manage the volumes, --> Kubernetes came up with two new resources: PersistentVolume and PersistentVolumeClaim. --> The idea here is: --> okay, let's not have that definition here. --> Everything else stays the same; --> remove the volume specification from the --> pod specification. --> Okay, and then for the volumes, --> let's create a YAML file. --> Let the administrator create a YAML file with kind: PersistentVolume. --> It is going to carry the specification: it says it is NFS, --> these are the server details, and this is the exported folder. --> Okay, so your storage administrator --> will create a PersistentVolume YAML file like this and submit it to the cluster. --> Which means, after submitting, if he does kubectl get pv, he can actually see the volume that was created, with whatever name was given. --> Let's say --> my-pv, or anything; the name doesn't matter here. --> So if you submit it, a volume is created, and in the result you can see that the volume is unbound, which means a volume was created but nobody is using it yet. --> It is not bound to any claim yet; it is unbound, it will be in the Available state.
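A minimal sketch of the kind of PersistentVolume manifest an administrator might submit for the NFS case described above; the name, server address, export path and size are placeholders rather than values from the session:

```yaml
# pv-nfs.yaml -- illustrative only; server, path and size are hypothetical
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100   # NFS server address (placeholder)
    path: /exports/data     # exported folder (placeholder)
```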
--> Okay, so what your storage administrator will do is create multiple PVs in the same way: --> maybe one backed by Azure Disk, one by GCE, one by hostPath; he will create several --> PVs. Which means at the end what you have is a pool of persistent volumes. --> He has basically statically provisioned the volumes, and now in Kubernetes he has a pool of --> PVs; a few are allocated, a few are not allocated to anything. You have those PVs. --> Now the next step: let's say your application here needs a volume, --> and it can have some requirements. Say you want a 1 TB pen drive: what do you do? You go to a --> shop and you ask the salesperson, hey, I want a 1 TB Kingston pen drive. --> Okay, so you are asking the shop salesperson, and he is going to check his racks, and if one is found he is going to give it to you. --> So you are making a claim, --> and the salesperson --> is honouring your claim if a match is found --> in his warehouse. Similarly, your pod is going to ask for what it needs via a PersistentVolumeClaim, --> a PVC. Let's say my-pvc. As the application team, you will submit the PVC, a pvc.yaml, --> kind: PersistentVolumeClaim, and under the specification you will ask: hey, I want --> some 10 GB on a network file share. Where that network file share is located is not specified here in the PVC; --> we are just asking: I want NFS, 10 GB, that's it. --> So if you submit this PVC to the cluster, what Kubernetes will do is try to find a match from the volume pool. --> Out of the ten provisioned volumes that are there in the pool, is there anything that is NFS and has 10 GB? --> If one is found, then that PV will be bound to this PVC. --> It is strictly a one-to-one mapping: one PVC, one PV, that's it. --> If no match is found, then your PVC will be in a Pending state. --> Maybe in the future, if a matching PV is created, it will get bound to it. --> So step one is: your administrator provisions the volumes. --> Step two is: you submit the volume claim, the PVC, --> and then these two will be bound. --> And it is the PVC that you refer to in your pod specification. --> Okay, which means before applying the pod specification you must apply your PVC, --> and it should have a volume bound to it; only then will your pod start up, because --> the bound volume has to get mounted to that data directory. --> So by this way you can manage the PVCs: kubectl get pvc, --> you can see all the PVCs and their statuses; kubectl get pv, and you can manage those and their statuses. --> And here you are referring only to the PVC. --> So that is the effort they put in to make these things loosely coupled and to better manage the volumes. --> Okay. --> That's what this diagram also says: --> the storage administrator can provision using any of the storage types, --> like GCE persistent disk, AWS Elastic Block Store, Azure File, NFS, CSI; it can be anything. --> So the flow is: the administrator provisions the volumes and maintains a persistent volume pool, --> and for your application you submit a PVC file, that is the claim, --> and the claim will be honoured by the cluster. --> Honoured means your PV gets bound to the PVC, --> and then you mount the PVC into the container. --> Okay. So here in this flow, --> the very first step is provisioning the volume.
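Roughly what the claim and the pod that consumes it could look like; the 10Gi NFS-style request follows the example in the discussion, while the pod name, image and mount path are illustrative only:

```yaml
# pvc.yaml -- submitted by the application team
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
# pod referencing the claim (name, image and paths are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data          # data directory inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc           # the PVC is all the pod knows about
```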
--> Is there a way, instead of manually provisioning, to configure dynamic provisioning? --> Which means: a PVC is submitted asking for a 10 GB Azure disk, --> just the PVC is submitted, and no matching PV exists, let's say. --> Then, just in time, create a 10 GB volume on an Azure disk and bind that volume to the PVC. --> If someone is asking for 20 GB of NFS, just in time create it, provision it, and bind it to the PVC. --> That is dynamic volume provisioning. --> We can achieve dynamic volume provisioning with the help of a StorageClass. --> Okay, with the help of a StorageClass we can achieve it. --> So before we see how the StorageClass works, --> let's quickly do an exercise, --> a hands-on on PV and PVC. --> Once we are clear, then we can quickly look into the StorageClass. --> Okay. So if I go back to my machine, --> I'm going to first create the PV. --> If you still remember, we create a PV as the first step. --> That is my PV file; --> okay, here it is, 8.4, an NFS volume. --> This is the one: kind: PersistentVolume. --> The name of the PV: --> I'm going to call it my-pv. --> This is the base and then you can alter it, --> and if you want a label you can put it here. --> And it is creating 1 GB on a network file share. --> But I don't have a network file share right now, --> so it's better --> if I give some kind of hostPath, --> which will always be available --> on the host machine. --> It can be any type; --> I'm going to go with hostPath. --> I should have it somewhere here so that I can copy-paste --> this one. --> Okay. --> Basically I want to create 1 GB on a host machine --> path that I've got, --> and that is going to be called my-pv. --> Okay. There are a couple of other mandatory parameters that you need to provide --> while creating the PV. --> Number one: access mode. --> Access modes, that is important. --> There are --> three main access modes: --> ReadWriteOnce, --> ReadOnlyMany, and ReadWriteMany --> (plus ReadWriteOncePod). --> You are mounting this into pods, right, --> so do you want multiple pods to access it, --> read-only for many, or read-write for only one? --> Those things you specify with the access mode. --> So that is important: you need to provide the access mode --> in both the PV and the PVC. --> Okay. So I'm going back here --> and I'm going to add the access modes; --> it should come under the spec: --> accessModes, --> one entry, --> ReadWriteMany. --> This is an important one. --> Another important parameter: --> because we are going to use static provisioning, --> here we are creating the persistent volume by hand, right, --> we are not using dynamic volume provisioning. --> So if you are using static volume provisioning, --> and you don't want the dynamic volume --> provisioner --> to provision the volume, --> it is better to set the storage class name to an empty string. --> We will discuss that in a moment, --> but for statically provisioned volumes you can set it to empty, --> because if you don't set it, --> then there is a default storage class --> that will take effect and it will immediately provision a volume. --> Okay, in the PVC as well --> you can keep it empty. --> Let's keep it minimal for now. --> I think this is a minimalistic configuration of a PV: --> 1 GB on a hostPath, and so on, right?
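Something like the following is what the edited 8.4 manifest could end up as, assuming a hostPath PV with ReadWriteMany and an empty storageClassName; the label and the path on the host are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
  labels:
    app: my-pv                      # optional label, can be matched by a PVC selector later
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: ""              # empty for statically provisioned PVs
  hostPath:
    path: /mnt/data                 # placeholder path on the host machine
```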
--> Let's apply this file, 8.4: kubectl apply -f, and then kubectl get pv. --> As you can see, PVs are cluster-scoped; --> you can see there is no namespace information printed here, --> because it is cluster-scoped. --> A PVC is namespace-scoped; --> a PV is cluster-scoped. --> kubectl apply -f 8.4... --> storage class name. I typed it as "storage name class", my bad. --> storageClassName. --> There is a split-second delay while I'm typing, --> from here to there, --> so just bear with that. --> Going back to the terminal here: --> okay, it says the PV was created. --> kubectl get pv: --> you can see the PV created, --> the access mode, --> the reclaim policy (there are a couple of reclaim policies that we will discuss later), --> the status is Available, --> which means it's not yet bound to any PVC; it's available. --> Claim: no claims are bound to it, so it is empty. --> And the storage class name is printed. --> Okay. So we successfully created a PV; --> step 1 completed. --> Step 2: for one of the applications, --> let's say we are going to run a MySQL database, --> and the MySQL database needs a volume so that it can persist its data. --> Let's say it stores all the data in /var/lib/mysql; --> we want to persist this. --> So for that it is going to need a volume, --> so it creates a PVC. --> It's going to create a PVC; --> the name of the PVC is, --> let's say, for the database. --> Okay. So we first need to create the PVC. --> Take the file, --> open it. So this is the PVC; --> I'm going to call it my-pvc. --> And as you can see here, --> we are specifying only that --> we need one GB, nothing more than that. --> In fact, you can specify which type you want; --> any specific type you can specify. --> And we are also specifying a label selector here, --> which means in your PV pool you may have hundreds of PVs: --> look only for PVs with this label. --> Maybe out of 100, three will be filtered, --> and out of those three it will try to allocate one for you. --> So you can also specify a label selector. --> Okay. --> And then again I'm going to specify the access mode here. --> You need to specify the same access mode so that the PV will be matched to this. --> I'm going to give these two properties --> in my PVC as well, --> so that this PVC will match the PV. --> All right. --> So with this change I'm going to apply this YAML file --> and then get pvc. --> Then, over here: --> apply. --> PersistentVolumeClaim --> created, as you can see. --> Over here: --> get pvc. --> Look at this: it already says it is in the Bound state. --> We created it and the existing PV is already --> bound to it. --> You can do get pv and verify that. --> Look at it: --> my-pvc, from the namespace it is in; --> this PV is bound to this PVC, --> which means all good. --> If you go now --> and submit the MySQL YAML file, --> with the PVC name my-pvc, --> then your MySQL server will work perfectly fine, and all the data that is in here will go to the --> host. --> Okay.
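A hedged sketch of the matching claim with a label selector, mirroring the two properties discussed above; the label value is assumed to match whatever label was put on the PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany               # same access mode as the PV
  storageClassName: ""            # stay with the statically provisioned PV
  selector:
    matchLabels:
      app: my-pv                  # only consider PVs carrying this label (placeholder)
  resources:
    requests:
      storage: 1Gi
```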
--> So what is more important here is --> understanding what a PV is, what we did here with static provisioning, --> and then the access modes. --> You can also specify the reclaim policy and so on; we will discuss that afterwards. --> With all of this information, create a PV. --> And then, from your application's perspective, --> for your application's need you are going to submit a PVC, --> and this PVC is referred to in your pod specification. --> Okay. If you have submitted a PVC and it is in an unbound state, --> and you try to spin up the pod, it will fail, --> because it will try to mount the volume --> and there is no volume mapped to the PVC, --> so the pod will not start up. --> You need to first make sure it is bound; --> only then should you run your application. --> Okay. --> Alright. So now, --> what I am going to do is --> create one more claim. --> Copy this; --> it is altogether a different claim. --> I am going to rename this --> from my-pvc, --> let's say, to something like my-pvc-test.yaml, --> and here I am going to give the claim a --> new name. --> We don't need this label selector, --> so let's remove it and keep it simple. --> Access mode, --> 1 GB, ReadWriteMany, --> that's it. --> And I am going to remove the storage class name as well. --> I want 1 GB; --> that is the only requirement that I am giving to Kubernetes --> through the PVC. --> Right. --> Let's see what happens if I apply this: --> kubectl --> apply --> my-pvc-test, --> kubectl --> get pvc. --> So as you can see here, --> this one is also now showing Bound, --> and it is bound to a volume. --> The volume name looks very --> random, --> and the storage class is standard. --> If I do get pv, --> kubectl --> get pv... --> what happened to my machine... --> okay, --> here we go: --> this is the one that we created, --> and this is the one that was dynamically provisioned by Kubernetes, --> using the storage class named standard. --> Okay. --> So which means, --> if I query kubectl --> get sc: --> as part of the minikube installation, --> there is one storage class already there. --> The name of the storage class is standard, --> it is set as the default, --> and it is using the minikube hostPath provisioner. --> So I submitted a PVC --> without specifying any storage class name, --> so the default one kicked in: --> it provisioned a volume on a host path --> and then it bound that --> dynamically provisioned volume to your PVC. --> Which means all you need to do is this. --> Let's say in your team you are going to use storage from different providers: --> let's say --> you are going to use Azure Disk, --> GCE persistent disk, --> and then the host path, --> let's say. --> For each of these you are going to have some kind of provisioner, --> a provisioner component; --> basically, installing it as a plugin gives you the provisioner. --> Okay. --> The provisioner is the one that actually provisions the volume, --> I mean, creating the volume there and then; --> it is the one that is going to do every step. --> And by referring to that provisioner you create a storage class. --> StorageClass. --> Let's say the name of one storage class is azure, --> the name of another storage class is gcp, --> and the name of this storage class is --> minikube, --> let's say. --> So when users submit a PVC, in their YAML file they can specify which storage class they want to use. --> Let's say for Ivan's application he wants it on the Google platform, so he can simply give gcp --> as the storage class. --> The moment he submits it, that storage class will be used, --> that provisioner will be used, --> and dynamically a volume will be created there --> and then bound to his claim. --> If Colin submits with the storage class name azure, then that provisioner will be used to dynamically provision. --> If Darlington submits without a storage class name, then out of these three you can set one as the default, --> so that if anybody submits without a storage class name, --> they will get a volume from whichever one is set as the default. --> Okay. --> So.
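For reference, one such StorageClass and a PVC that requests it explicitly might look roughly like this; the class name, the CSI provisioner string and the PVC name are illustrative and depend on which plugin is actually installed:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure                           # hypothetical class name from the example above
provisioner: disk.csi.azure.com         # supplied by the installed CSI plugin (assumption)
reclaimPolicy: Delete
---
# a PVC that asks for that class explicitly
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ivan-claim                      # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azure
  resources:
    requests:
      storage: 10Gi
```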
--> So it is about whether you are going to statically provision or dynamically provision. --> We have already seen one sample of statically provisioning, --> and we just saw one sample of how, when a PVC was submitted, the standard provisioner dynamically provisioned a volume. --> By all means, the PVC must map to a PV, or else the application pod that refers to the PVC won't start up. --> Okay. --> So with this I'm going to pause here --> for any questions. --> If there are no questions, it's time to try this yourselves. --> A question has come in meanwhile. --> Any questions? --> Is this clear? --> Yes. --> Good. --> Basically resizing the volume, --> that was the question, right? --> Yes, you can resize the volume, not the PVC as such; --> the PVC mainly matters at the time the binding happens, --> that's when it takes effect; after that --> you can expand the volume --> and it will work; --> that won't be a problem. --> It depends on what the backend supports, --> but mostly we will end up updating the PV and PVC and then submitting them, --> and the extra size is provided. --> Let's say you created a PVC --> using dynamic provisioning and you asked for one GB, and as I showed, one GB was created. --> Now if you want more space, then you do that update here, --> and on the already-allocated PV it will apply the update. --> But your backend storage provider must support that kind of expansion; --> it's not up to Kubernetes, --> it's up to the storage provider. --> Some storage providers support it, --> some don't. --> Right. --> But most of the cloud provider solutions support it. --> OK. --> Right. --> The other question is about the reclaim policy, and we are going to discuss that, --> but since you asked about it: --> what should happen if you delete a PVC? --> OK, --> that's what the reclaim policy is all about. --> It has only three values: Retain, Recycle and Delete. --> For dynamically provisioned volumes, like the second one that you created, you have seen Delete, because that's the default. --> Which means the volume will be automatically deleted if you delete the PVC: --> if you delete the PVC, the associated PV will also get deleted, --> if the reclaim policy is set to Delete. --> OK. --> If you don't want that, if you want the PV to stay, because the PV is holding some data, --> why should deleting the PVC delete the PV? --> If you want to retain it, then you can specify the Retain policy, --> which means if you delete the PVC, the associated PV will not be deleted; --> it will be moved to the Released state so that you can do some manual recovery on the data. --> All right. --> Similar to that, Recycle... what exactly does Recycle do? --> I forget; --> let me check. --> It is the common thing that we use. --> Recycle means the volume goes back into the pool once it is released from its claim: --> OK, --> it will go back to being unbound; --> it will be set back to Available so that other PVCs can simply use it. --> The data will be scrubbed --> and then the status will be changed back to Available, --> which means if any new PVCs come in and a match is found, the PV will get bound to one of them. --> OK. --> That's it. --> Any other questions? --> OK, --> no questions. --> Please give it a try. --> Let me share the two YAMLs that I used here so that you can have them as a reference.
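The reclaim policy itself is a single field on the PV; a small sketch with Retain set, where the hostPath and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # Retain | Recycle | Delete
  hostPath:
    path: /mnt/data                       # placeholder
```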
--> You know what to do to support those two properties, --> but let me copy-paste it. --> OK. --> Last two minutes to complete this activity. --> Thank you. --> Alright, time's up for this activity; --> let's go to the next section. --> So what did we just complete? --> PV, PVC and then StorageClass. --> Okay, so the next one we are going to discuss is a pretty straightforward resource, --> a very, very simple resource, and it has a valid use case where we would go for using one such resource. --> In your cluster, let's say you have three nodes, and Ivan is the administrator. --> He wants to see how each of these nodes is performing. --> For that, he wants to run some kind of software on each machine so that it can gather metrics about how these nodes are performing. --> So he wants to run one here, install one piece of software here, one piece of software on this machine, so that it can keep observing that node, --> and he can see all of the metrics gathered from these machines in another tool, with some kind of visualizations. --> So it is more often an agent-like tool, isn't it? --> An agent or daemon-like tool; or he wants to run a log collector, one log collector on every node. --> Now, in the Kubernetes world, everything runs as a pod. --> So if Ivan wants to deploy this application, he might go with a Deployment, setting the replicas to three; the ReplicaSet will create three pods. --> But will one copy of the pod be placed on every node? --> That is not guaranteed. That is not a guarantee that the ReplicaSet gives you; --> we never discussed anything like that. If you give the number of replicas as the number of nodes, it will not necessarily place exactly one on every node. --> It will try to distribute them, but it is not a guarantee that it will run one copy of the pod on each node. --> But here, in the case of Ivan's metrics application or logging collector application, we need that guarantee: --> I want one copy of the pod on every node, because metrics collectors and logging collectors are applications of that nature. --> So for these kinds of agent-like or daemon-like application processes that you want to deploy, the Kubernetes community came up with a new resource called DaemonSet, --> which means if you create a DaemonSet, the DaemonSet will also create pods, --> but the reconciliation-loop logic of the DaemonSet is: create one pod on every node. --> That is its logic. Let's say you delete this pod; then the next moment the DaemonSet will realize that this node doesn't have a copy of the pod, and it will immediately create one. --> One per node. That's its logic, --> the DaemonSet's logic. --> That's it. --> A DaemonSet ensures that all (or some) nodes run a copy of your pod, as the documentation puts it. --> So what does that "some" mean? You have three nodes, but the application that you're going to run --> may not run on node two, because node two is using some kind of legacy hardware; --> this software may not work there. --> So you want this DaemonSet to focus only on N1 and N3 and exclude N2. --> By default it will consider all the nodes in scope, --> but this time you want to exclude N2 from the scope. --> If that is the case, then you can make use of the labels concept here: --> you can label node one as type=modern --> and node two as type=legacy.
--> And node three gets the label type=modern as well. --> Then, in the DaemonSet specification, --> you can specify the node selector and say type=modern, --> which means this DaemonSet will now focus only on the nodes that have this label and keep one copy of the pod there. --> By that, N2 is excluded. --> OK, that's the reason the documentation says all or some nodes run a copy of a pod. --> And the use cases are: if you want to run a storage daemon, a log collection daemon or a node monitoring daemon on every node, --> then you can run those components as a DaemonSet. --> kind: DaemonSet, and here they are running a fluentd log collector as a DaemonSet; --> typical of what we discussed earlier. --> Once the kind is DaemonSet, this is going to create one pod on every node. --> If you deploy this YAML file... all we have is only one node, --> so on that one minikube node, one pod of the DaemonSet will be running. --> Let's see if there are any DaemonSets running in the kube-system namespace already. --> Yes, there are: --> in the kube-system namespace, --> OK, the kube-proxy component is running as a DaemonSet. --> As all we have is only one node, one is current, it's ready, it's up to date and it's available. --> The node selector here is: all the nodes that have the label saying the operating system is linux. --> The minikube node is Linux, so it is used for it. --> Maybe in your infrastructure you can verify this, because you may have multiple nodes; --> you can easily check this. --> All right, any questions on the DaemonSet? --> Because it's straightforward, I'm skipping the hands-on part of the DaemonSet. --> Any questions? --> On the use case or how it works? --> Why we need a DaemonSet, --> why not a ReplicaSet or a Deployment? --> Right, I take the silence as all good. --> Let's switch to the right place. --> OK, with a DaemonSet it will always be one pod on every node, --> that's it. --> The scale command is not applicable for a DaemonSet. --> OK, so if you want two pods on every node, that is not possible with a DaemonSet; --> just one on every node. --> So the scale command and HPA, all those things, are not applicable for a DaemonSet. --> Because the scale command is not applicable for a DaemonSet, HPA is also not applicable. --> Wherever the scale command is applicable, in all those places you can configure HPA, --> but for a DaemonSet, scale is not applicable. --> It will always be one, --> that's it, --> one on every node, --> not two or three.
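A hedged sketch of a DaemonSet that targets only the nodes labelled type=modern, along the lines described above; the name, image and resource limits are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      nodeSelector:
        type: modern                 # only nodes carrying this label get a pod
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16-1   # placeholder image tag
          resources:
            limits:
              memory: 200Mi
```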
--> Sorry, is that a question you're asking? --> The use case for this? --> Yeah, yeah. --> So the typical use case is a log collection daemon. --> OK, as you asked about the use case, let me explain this scenario. --> In your cluster, let's say these are all the containers that are running, and all the applications are going to produce some logs, aren't they? --> There are multiple ways you can aggregate the logs that are coming from all those individual containers. --> For example, each of your applications may write directly to a logging backend; --> let's say you have Elasticsearch, --> and every application writes directly from its app code to the logging backend. --> But this is not the approach we take, because here we have a tight coupling between the logging backend and the app code: --> if tomorrow I switch from Elasticsearch to Cassandra, that would require a code change in every application and a new deployment, --> so we won't go with this approach. --> The standard approach that we follow is: if we take one single node, you have multiple containers running, --> and on this node you also have the Docker engine installed, --> the Docker engine. --> Kubernetes works with the Docker engine to manage the containers, right? --> Kubernetes is not creating the containers itself; --> it works with the underlying container engine to create the containers and manage them. --> So what we advise the application teams is: hey, from your app code, write the logs to standard output --> and standard error. --> All the containers will send their logs to standard output and standard error. --> OK, and this Docker engine --> can actually get the logs from the standard outputs of all the containers that are running on that node. --> So, on the same node, node one, I can run a daemon, a log collector daemon, --> let's say Fluentd, and this Fluentd will work with the Docker engine to gather all the logs from all of these containers --> and forward them to the centralized logging backend. --> Similarly, on another node, I will have a log collector daemon running that collects all the logs from the containers running on that node and sends them to the logging backend. --> Similarly on another node: log collector, backend. --> And on top of it, I may run Kibana, a user interface with which I can analyze the logs and find some information in them. --> So this is one use case: a log collection daemon that I want to run on every node, --> so I can deploy it as a DaemonSet. --> If not, then you would need to install it in a traditional way. --> But as we are in the Kubernetes world, even the Kubernetes components themselves run as pods, --> so here the daemon is also running as a pod; --> it runs just like every other application runs, as a pod. --> The same goes for a monitoring daemon: --> if you want to monitor every node that is participating in the cluster to see how it is performing, --> you can run a monitoring daemon that will capture metrics from the nodes and send them to a centralized location. --> And then there can be a storage daemon, if there is host machine storage provisioning that you want to run on every node. --> Right. Does this information help? --> Okay. --> Okay. So, about updates: a DaemonSet does have an update strategy, and a rolling update is in fact its default behaviour when you change its pod template; what does not apply to a DaemonSet is the scale command and HPA. --> Rollout behaviour is something you will also see with Deployments and with the StatefulSet, which we are yet to discuss. --> Later you can try that sample, do a get daemonset, and explore it. --> Now let's move to the next object, that is the StatefulSet. --> Any idea about StatefulSets? --> Are you already working with StatefulSets? --> Yes, no... I heard about it. --> Okay, fine. --> So let me quickly explain the case for the StatefulSet. --> Why? I already have Deployment, ReplicaSet, DaemonSet; --> there are many resources. Why do I need yet another resource? --> What is it? What is the requirement that this resource handles? --> Right. --> You have your Kubernetes cluster that is running all your application workloads; --> let's say node 1, node 2, node 3 are there, --> and all your backend APIs and frontends are running here. --> But think about the data layer, your database. --> In general, for the databases themselves, we will have a cluster setup.
--> If it is a Mongo cluster, a MongoDB database, then we will have a cluster: M0, M1, M2, --> and there will be some kind of cluster setup between them, --> with some kind of synchronization logic happening, --> some physical replication or logical replication, right? --> One will be the primary server and two secondary servers will be there. --> If the primary goes down, one will be elected as primary: --> one secondary will be elected as the primary. --> So already, for database-like solutions, you will have some kind of cluster setup. --> And if you look at this cluster setup, or the way these clusters are formed: --> first you start with the first node, --> you set up something there, and then slowly you join M1. --> You join it by its name, right? --> You go to M1, execute some command, provide the M0 name, --> so that it becomes part of the cluster, --> and then M2, so that it becomes part of the cluster as well. --> That's how you form the database cluster. --> And once you form the cluster, all the replication logic, --> all those things that happen between them, relies on the identity of each member. --> M1 relies on M0; --> I mean, based on that identity alone they are going to send the data to M0 or M2, --> which means this identity is fixed throughout. --> If you change the identity of a server, --> that will break the cluster setup and you will need to fix it, right? --> So this being the case, and remember, it is all handling data. --> Back in those days, organizations were completely hesitant to run databases --> inside the Kubernetes cluster. --> It was OK to run the backend APIs and frontends, --> because initially the whole microservices style itself --> was meant for stateless applications, right? --> But these database servers are going to maintain some data, --> and if you run them, of course you can back your database with volumes, --> but still, if something goes wrong, it can end up in data loss, right? --> That is going to cost your organization a lot. --> So organizations were running their database cluster outside of the Kubernetes cluster, --> and all the frontend layers, middle layers, service layers --> were running in the Kubernetes cluster. --> OK, that's how it was around 2016, 2017 or so. --> And then what happened is the Kubernetes community started to analyze this part: --> OK, what needs to be done in Kubernetes --> so that database-like solutions can also run in the Kubernetes cluster? --> What are the requirements? --> List down the requirements, --> and let's see whether we can implement one such resource --> with those capabilities. --> And that's how the StatefulSet was born. --> It was not a complex requirement; --> it turned out to be a fairly simple set of requirements. --> Number one: if you create a Deployment and you set the instances to five, --> then all the pods get somewhat random names, don't they? --> Random names, or a name with a random hash suffix. --> Correct. --> And if I delete one of them, then another new pod is created with a completely new random name. --> So in the case of the StatefulSet, for databases, you must have a fixed identity, --> and that fixed identity also comes with an ordinal index, an ordered ordinal index, --> which means if you create a StatefulSet, it is also going to create pods. --> If you set the replicas to five, and the name of your StatefulSet is mongo, --> then the first instance that gets created is mongo-0.
--> And the second instance will be mongo-1, and then mongo-2, mongo-3 and mongo-4. --> So if you look at the names, they follow an ordered index. --> And then: if I ask for five replicas in a Deployment, all five start up at the same time. --> But in a StatefulSet, first mongo-0 will start, --> and once it is ready, then it will go on to spin up mongo-1; --> once that is ready, then it will go on to mongo-2, mongo-3 and mongo-4. --> Ordered startup. --> If I scale down from five to three, only number four and number three will go away: --> first four will get deleted, --> then three will get deleted. --> Deletion will also be in order, from the higher ordinal indexes down to the lower ones. --> Okay. Fixed identity, which means if I delete mongo-1, --> then a replacement will be created and it will get the same name, mongo-1. --> It is a fixed identity. --> This makes sense, right? --> For example, if you set three replicas, first mongo-0 needs to initialize; --> it needs to set up all of the required stuff in it, --> and then your primary server is running. --> Then you are going to spin up the secondary, M1, --> which is going to execute some command and become part of the cluster, --> and then M2 sets up and becomes part of the cluster. --> So that ordering makes a lot of sense from this perspective. --> That's the reason we need to have that identity pattern, --> and when it comes to scaling down, the one with the higher index --> gets deleted first, then the next, and so on. --> For the StatefulSet, rolling updates are possible, --> and if I do a version upgrade, it will also happen one by one: --> first zero will be updated, then one, then two, then three, then four. --> Everything will be ordered. --> OK, so that is one requirement: the naming and the fixed identity. --> The second one: you created a StatefulSet --> named mongo, and you have mongo-0, mongo-1 and mongo-2, --> and this one is the primary and these two are secondary servers, let's say, --> for the cluster. --> And you have a frontend application, --> or a backend, that is trying to access the database. --> We have three instances, --> so generally, in these cases, we create a Service. --> You are going to do the same here: --> to access the StatefulSet, you are going to create a Service. --> And once someone calls the Service, in general, --> we know that the Service is going to route the request to one of the endpoints. --> But that is not a valid behaviour here. --> Load balancing is not required here, because this one is the primary server: --> the request should go straight to the primary server. --> We don't want to load balance across the secondary servers, because they are just doing --> their in-sync replica logic between themselves already. --> So we don't need the load-balancing logic of the Service in the StatefulSet world. --> So what we do is: while creating the Service, --> we explicitly set clusterIP to None. --> Because when a Service is created, it normally gets a cluster IP, correct? --> That's the address that load-balances to the endpoints. --> If you set clusterIP to None, no cluster IP gets assigned to it, --> because all these instances have a fixed identity, --> so the frontend can simply reach one of them directly, like mongo-0. --> Let's say the name of the service is mongo: --> mongo-0.mongo. --> It can simply reach it directly, because this identity is fixed. --> In the other cases, the identities were not fixed,
--> so we relied on the Service. --> So here you create a Service and you set the clusterIP to None. --> If you create a Service with clusterIP set to None, it is called a headless service. --> Headless service. --> We call that Service a headless service, which means for a StatefulSet --> we create a headless service; we do this only for StatefulSets, --> because that's where --> the load-balancing logic is not applicable. --> This is difference number two. --> And then the last difference: if you create a Deployment, --> the Deployment creates a ReplicaSet, which creates the pods, three replicas, --> and in the specification, if you have some volumes, --> PVCs, say a volume from an Azure Disk, --> then basically the same volume is shared by all of these replicas. --> It's not that every pod gets a different volume; --> it's just one GB from the Azure Disk that all three pods are using. --> But if you had the same logic for the StatefulSet, --> it would make no sense: --> the primary server and the secondary servers all working on one single backing volume? --> Then why would we need the replication logic? --> Where is the high-availability story? --> None of it would make any sense here. --> So for the StatefulSet, if you create it, every instance gets its own copy, --> which means every instance gets one GB of Azure Disk, --> one GB, one GB: different volumes. --> So while defining the StatefulSet, --> we don't give a PVC; instead we give a PVC template. --> With that template, every replica gets a new volume, created --> by dynamic provisioning and assigned to it. --> A template, a PVC template (volumeClaimTemplates). --> Okay, so to list it down: an ordered index for the instances, --> ordered scale up and scale down, fixed identity, headless service, --> and then this one, the PVC template. --> These are all the differences, --> the requirements that were implemented, --> and these are the behaviours of a StatefulSet. --> With this, you can deploy your database solutions --> into the Kubernetes cluster as well. --> Okay, it's still evolving; --> there are a lot of limitations with this, --> but we can see many production implementations of StatefulSets. --> Okay: if you want to manage stateful applications, --> then we use the StatefulSet. --> It provides guarantees about the ordering and uniqueness of the pods. --> It's not only for databases: --> databases have those requirements, so that is where we use it most, --> but for your own application too, --> if you need some guarantee about ordering and uniqueness, --> then you can deploy it as a StatefulSet. --> Okay, and then the use cases, when you would go for a StatefulSet: --> if you are looking for a stable, unique network identifier, --> stable persistent storage, ordered graceful deployment and scaling, --> or ordered automated rolling updates. --> Okay, and the example: kind: StatefulSet, --> the meaning is the same, three replicas, --> and look here, volumeClaimTemplates, --> because all three, one by one, --> are going to get their own volumes. --> And look at the Service: clusterIP is None, so it is a headless service. --> Okay, so maybe you can just try this one; --> it looks simple. --> Create a StatefulSet, do a get pods and see the way the pods are created. --> Try to delete one and see how it gets recreated. --> Try to scale up and see whether the ordering is maintained. --> Try to call one pod just by its identity.
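A rough sketch of the mongo example as a headless Service plus a StatefulSet with volumeClaimTemplates; the image, port and storage size are illustrative assumptions, not values from the session:

```yaml
# headless service: clusterIP is None, so no load balancing, only stable DNS names
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  selector:
    app: mongo
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo            # must reference the headless service above
  replicas: 3                   # pods come up in order as mongo-0, mongo-1, mongo-2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:6        # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:         # each replica gets its own PVC and volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```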
--> Okay, just give it a try, please, the StatefulSet. --> That's it; --> that's all I had to cover for the StatefulSet, --> which is more than enough for the examination. --> Any questions on the StatefulSet? --> If not, please give it a try. --> While I'm explaining, --> I would recommend you to have the Kubernetes documentation --> for the respective resources open on your screens --> and refer to it in parallel, because during the examination --> you are going to rely heavily on this documentation alone, --> the Kubernetes documentation, --> so you should know where to locate things, --> where to find stuff, and all of that. --> For the StatefulSet, --> you can try the sample that is there in the Kubernetes documentation: --> just copy that YAML file, apply it, --> and observe the behaviour, that will do. --> Thank you. --> Get sts: --> sorry if I forgot to mention that command; --> you can do kubectl get statefulset, --> or get sts, to view the StatefulSet resources. --> 0 out of 3 ready? --> Let me look at your screen. --> Which example are you using? --> What is the application you are deploying, from the site? --> Do a get pods, kubectl get pods; --> it will spin them up one by one. --> web-0 is in a Pending state, --> which means it is still trying to schedule it. --> You can describe web-0 and see the events section. --> Is that what you are trying? --> kubectl describe: --> kubectl describe pod web-0. --> Let's see. --> This is about volumes. --> Can you look into the YAML file of your StatefulSet? --> It is not able to bind the volumes. --> Look at the volumeClaimTemplates section; --> scroll down to the claim templates. --> Here you gave the storage class name as my-storage-class, --> but I don't think you have that storage class, --> so better to delete the entire storageClassName attribute, --> so that the default standard storage provisioner will be used. --> What do you think? --> And yeah, 1 GB each; --> you are going to use 3 GB, --> and you have 3 GB in minikube; --> that's the next question. --> Let's see. --> Delete and recreate it. --> OK. --> Again, the first pod itself failed scheduling: --> 0/1 nodes are available, the pod has unbound immediate PersistentVolumeClaims, --> preemption: 0/1 --> nodes are available, --> preemption is not helpful for scheduling. --> I thought it might be about the minikube --> resources available that we have, --> whether it was trying to preempt pods to accommodate this one. --> OK, --> go back, --> go back to the error message. --> Preemption: 0/1 nodes are available, preemption is not helpful for scheduling; --> 1 pod has unbound --> immediate PersistentVolumeClaims. --> Can you do a get pvc? --> OK, --> there it is. --> You don't need to delete everything; --> the storage class change is not reflected on the old claim. --> OK, --> delete it: --> www-web-0, --> delete that PVC. --> Now apply: --> delete your StatefulSet and apply it again. --> Now it is bound; --> it started. --> You can do get pods now; --> I think by this time web-1 and web-2 will already be spinning up. --> web-0 is running. --> How many replicas are there? --> Three, right? --> A StatefulSet with three. --> So do get pods again. --> If it is running... --> yeah, if web-0 is running, it should have spun up one more. --> Yeah, it's there: --> web-1. --> Good job. --> I hope the others are fine too. --> No, no, it's that the claim was not bound previously: --> it was referring to a storage class named my-storage-class, --> but we don't have any storage class with that name, --> so the PVC was unbound; it never got bound to a volume.
--> So the pod was not able to start up. --> What we did is we modified the volumeClaimTemplates --> to delete the storageClassName, so that it can use the standard storage class. --> As part of the StatefulSet creation, it had already created a PVC, --> and that PVC we deleted, then recreated everything, and that worked. --> So a volume must be bound to the PVC for your pod to start up; --> if no volume is bound to the PVC, then your pod won't start. --> That's what happened in our case. --> Is that good? --> Okay. --> The time is two thirty-two, --> so let's take a quick 15-minute coffee break and be back by two forty-seven. --> Thanks for listening so far. --> Okay, --> I'm back. --> Please raise your hands in Teams, --> just a quick check. --> Thank you. --> Perfect. --> Okay, --> we are going to discuss our next object, --> or maybe I will cover just one, no, two objects; --> I have two objects to cover. --> Let me cover the theory part of them, because in some exams they ask about it. --> It is not part of the curriculum, --> but I heard that in a couple of exams they asked about it, --> so having an idea about what those resources are and the use case for them will really be helpful for you. --> Right. --> So the next two resources we are going to talk about are Jobs and CronJobs. --> Any idea what these are? --> Are you already using these resources for your existing applications? --> Jobs. --> Okay. --> Let's look into the use case first, --> and then we can observe the behaviour of --> Jobs. --> The applications that we have seen so far, --> the nginx application, the card application that we have seen: --> those are websites or web servers of some kind. --> That's a long-running process; --> it will be running throughout, until you stop it or delete the deployment; --> it will be running throughout, --> right? --> But there are some applications which by nature are short-lived, run-to-completion tasks, which means they have some set of steps to do, --> and once they have executed to their entirety, that's it. --> To run --> applications of this nature --> in your cluster, we are going to use a resource called the Job resource. --> What is the behavioural difference? --> I could simply run this as a Deployment or a ReplicaSet; --> that would also execute the same logic, wouldn't it? --> But the difference here is: you have some set of activities to perform, --> and your application starts from here --> and executes all the way to its entirety; --> it completes successfully, --> so at the end it will exit with some success code, --> let's say zero as a success code. --> If your pod exits with an exit code of zero, --> I mean, your application exits with an exit code of zero, --> that means it has successfully done its job; --> then the pod status will be marked as Completed. --> You may never have seen that scenario, --> a pod whose status shows Completed. --> But if, while executing, something goes wrong at step number nine or ten, --> and the process exits --> with some failure code other than zero, let's say minus one, --> then this is considered --> a failure scenario. What the Job does is this: --> I created a Job, that created one pod, --> and this is the pod executing now. --> So when I created it, it maintains something like this:
--> 0/1 completions, --> which means it expects one successful completion. --> So if the happy path happens, it becomes 1/1: --> the pod is marked Completed --> and the Job is marked complete. --> But if the failure happens, then that pod will be restarted, --> and it is going to execute all the steps again, --> expecting it to complete, to exit with exit code zero. --> If it fails again: --> again restart, again restart. --> There are, by default, some six tries it is going to do, with an exponential back-off. --> Even after the six tries, if it still fails, --> then --> the pod will be marked failed and the Job will be marked failed, --> which means --> it never met its completions, so the Job has failed. --> If on one of these retries it somehow worked, --> then it will be 1/1, that is, the Job is successful. --> OK, --> so that's the idea: --> only if it successfully exits is it marked Completed; --> if it fails, then try to restart it --> until you meet the count. --> OK. --> Let's say the nature of the application that you are going to run here is some kind of --> database migration script. --> If you deploy the same thing as a ReplicaSet with replicas set to one, --> the pod will run, --> it will run to its entirety and then it will exit. --> What will the ReplicaSet do --> when it exits? --> Its current state is zero pods running, --> so it is immediately restarted, or a replacement is created. --> This will again run and exit, --> and again run and exit, --> which means you end up running the migration again and again on the same data. --> So it's not a valid choice here, --> because no matter whether it exits with a success code or a failure code, the ReplicaSet will always restart it, --> because it needs to have one copy running always; --> that's the nature of the ReplicaSet. --> But that is not the case with the Job. --> If you create a Job, it will also create a pod. --> If this pod successfully completes, --> and what marks successful completion is that from within the application it exits with a --> success code, then it will be marked Completed and the Job is complete. --> If it fails, then restart: only in the failure scenario is it going to restart, --> restart or create a replacement, until it meets the successful completion count. --> Which means you can actually set the completions count while creating the Job. --> You can specify two properties: one is how many completions you want. --> Let's say you want 10 successful completions. --> Say Ivan built a key-generator application, --> and every successful run gives you 10 keys, --> and you are looking for 100 keys, --> so you are looking for 10 completions: 10 times 10 is going to give you 100. --> So what the Job will show when you create it is something like this: --> I want 10 completions; the first pod runs and successfully completes, --> so 1/10, the second pod runs, 2/10, then 3/10, and eventually 10/10. --> It will go one by one. --> If you want to speed this process up, you can set parallelism, --> parallelism to 5, which means if you look at the Job at any time, you will have five pods running. --> So in one go, five will be marked completed, --> so 5/10; in the next iteration another five will run; at any point in time you will have up to five pods running towards meeting this successful completion count. --> So by the time you reach 10 completions, you have 10 pods in the Completed state, and the Job will be marked complete.
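A hedged sketch of the key-generator scenario as a Job with a fixed completion count and parallelism; the name, image and command are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: key-generator            # hypothetical name for the example above
spec:
  completions: 10                # ten successful pod runs are required
  parallelism: 5                 # up to five pods run at the same time
  backoffLimit: 6                # default retry budget before the Job is marked failed
  template:
    spec:
      restartPolicy: Never       # let the Job controller handle retries
      containers:
        - name: generate
          image: busybox
          command: ["sh", "-c", "echo generating 10 keys && sleep 5"]
```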
--> One use case where this Job is used heavily is work queues: fetching items from a centralized work queue. --> Let's say you have a work queue and multiple producers are producing messages to the queue, --> and let's say every day at 1 a.m. you want to run a job that is going to read the messages from the queue and process them. --> So the consumer application, right, you can run it as a Job. --> I can run a Job and here I can specify parallelism as three. --> And completions? That will vary, won't it? --> Today you may have a thousand messages, yesterday you had twenty thousand, the day before yesterday only 50 messages, --> so a completions count we don't have. --> So you set only parallelism --> and leave completions empty: --> that is the work-queue use of a Job, --> which means this Job will execute until the queue becomes empty. --> Once the queue becomes empty, everything will be marked completed and the Job will be marked complete. --> OK. So basically I described three scenarios for you under the Job. --> The first scenario is, excuse me, --> the first scenario is: you create a Job that creates one pod. --> Initially, when you create it, it will be 0/1: one completion is expected and zero --> completions have been received so far. --> After a while, if it successfully completes, this becomes 1/1 and all is good: --> the pod is marked Completed, --> the status is set to complete, the Job is complete. --> That is one. The second is the same, but we play with parallelism and completions: --> you have a fixed completion count to reach, with this many pods in parallel. --> And the third is: you create a Job giving only parallelism, --> no completions. That is the use case for working on a centralized queue. --> So three kinds, three variants of Job. --> OK. So later you can try this; --> I'm leaving it up to you to try the Job. A Job is also going to create pods, --> and as I mentioned, you can see the status becoming Completed. --> These are the three types that I mentioned: non-parallel jobs, which means only one pod is started --> and the job is completed as soon as that pod succeeds; --> parallel jobs with a fixed completion count, where you specify completions; --> and parallel jobs with a work queue, where you don't specify completions, --> only parallelism. --> OK. There are some other concepts here that you can look at later. --> Once a Job is completed, --> it stays in the cluster; there is a configurable time-to-live property for finished jobs, --> and if you set it to, let's say, five minutes, --> then once the Job successfully completes it will be around for five minutes --> and then it will be cleaned up automatically, which means it will be deleted. --> Deleting the Job will also delete all the pods that it created.
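Variant three, the work-queue style, could look roughly like this: parallelism only, no completions; the names, image and command are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queue-consumer           # hypothetical consumer for the 1 a.m. scenario
spec:
  parallelism: 3                 # three workers; no completions count is set
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          # with no completions set, the Job is treated as a work queue: it is
          # complete once at least one pod succeeds and all pods have terminated
          command: ["sh", "-c", "echo draining the work queue && sleep 5"]
```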
--> So you first need to know what a job is so that you understand CronJob. --> That's why I covered that part; please try it, and if you have any questions or challenges we can discuss them tomorrow. No problem. --> So, CronJob: it's the same as a job, but it comes with a schedule. --> If you are coming from a Linux background, you know crontab, right? You schedule something like every Monday, or every hour. --> We have a piece of logic that we want to execute every Monday morning at 8 a.m., for example. --> The same way, you can define a CronJob and tell it: I want to run this every 30 seconds. --> In the CronJob specification you provide a job specification. --> Which means the CronJob will create a job, then after 30 seconds another job, and another. --> These jobs create the pods that do the short-lived activity and get marked complete. --> So every 30 seconds you have a job created by it. --> The CronJob sits on top of the job: it creates jobs, the jobs create pods, and the pods execute the short-lived activity. --> Here you can see we provide the job template: kind CronJob, jobTemplate, and then the schedule, which in this example means every minute. --> A job is created every minute that runs a container printing "Hello from Kubernetes"; a minimal sketch of this follows at the end of this part. --> It's the same thing, just with a schedule attached; it creates the jobs. --> It also fits the scenario we discussed: if you want to spin up a job every day at 1 a.m. to drain the queue, you create a CronJob and give it the job specification, so that every day at 1 a.m. it creates the job object that reads all the messages from the queue, processes them, and gets marked completed. --> Once a job is marked completed, there is nothing like restarting the same job; you spin up a new job to process the new set of items. --> OK, Job and CronJob. --> All right, so starting from the morning, the resources we discussed: we first started with Service and the several types within it, and then Ingress. --> Then we discussed HPA, VPA, and CA, all related to autoscaling. --> Then ConfigMap and Secrets. --> After the lunch break we started with PV, PVC, and StorageClass, then DaemonSet, then StatefulSet, and we just completed Job and CronJob. Good job, guys. --> We discussed many resources today. --> With this, for the moment, we are going to stop on the resources side, because that pretty much covers all the resources we use to deploy and manage applications. --> Now we are going to focus a bit on the security gates. --> Let me repeat that: we are going to talk about security gates, because in your cluster you have three nodes, and on your master node you have many components running. --> One component, the important one, is the API server, which exposes your cluster as an API, and Evan, Colin, and everybody else access the cluster through it.
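Coming back to that CronJob example, a minimal sketch, assuming a placeholder busybox image, could look like this:

```yaml
# Hypothetical CronJob: creates a new Job every minute.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello                       # illustrative name
spec:
  schedule: "* * * * *"             # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox          # placeholder image
            command: ["sh", "-c", "echo Hello from Kubernetes"]
```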
So if I zoom in on this API server part, let's say this is the API server. --> Whatever request comes in to the API server, whether it comes from Evan or Colin or from me, all requests go through three security gates: gate one, gate two, and gate three. --> Three checks, like how we pass through the airport: security check, then customs check, and so on. --> Similarly, if your request fails at one of the gates, you are rejected and your request won't be processed. --> The processing logic sits behind them, and all three checks happen within the API server component itself. --> The first security gate is authentication: whether the caller is valid, whether he is a BMW employee. --> Or is it BMW or Benz? Sorry, I'm confused, I was looking at Benz. Right, this is BMW. --> Okay, authentication. --> The second gate is authorization. --> Let's say Evan is trying to perform some operation: kubectl get secrets; he's trying to view the secrets. --> His request comes in and he is first checked: is he an employee of your organization, are the credentials supplied valid? All good, so he passes the first gate. --> The second gate is authorization: whether he is entitled to perform the get operation on secrets, whether he can view the secrets or not. --> Maybe only managers can view them, or only the operations people and not the developers; maybe you have checks like that. --> That happens at the authorization layer. --> If he has those permissions, he moves to the third layer, the admission controllers. --> As a cluster administrator, this is the layer where we have more control: we can do mutation and validation. I'm going to give an example of what that means. --> So, three gates: authentication, authorization, admission controllers. --> After successfully passing through all three gates, the request is considered for processing by the API server. --> Now, when it comes to authentication in Kubernetes, we don't have any user management or group management. --> Those things are externalized, which means you may already have a system; you take it and integrate it with Kubernetes. --> For example, your Active Directory, or a webhook, or client certificates, bootstrap tokens, service account tokens, OpenID Connect, AWS IAM, Azure AD. --> In Kubernetes we don't have the concept of managing users or creating new users, nothing like that. --> You have an existing system, and while bootstrapping the API server you specify which authentication mechanism you are going to use and the configuration related to it. --> Whenever a request comes in, the API server uses that mechanism, and the authentication provider gives a response back to the API server: whether authentication is successful, and if so, some data about the user, like the groups he is part of and other details. --> So we don't have much to discuss with respect to authentication, because you already have something and you just integrate it with Kubernetes. --> In our case, let's say you integrated your organization's Active Directory, and Evan is a valid user.
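Just to make "bootstrapping the API server with an authentication mechanism" concrete, here is a sketch of what an excerpt of the kube-apiserver static pod manifest could look like with client-certificate and OIDC authentication configured; the version, issuer URL, and client ID are placeholders, not values from this cluster:

```yaml
# Excerpt of a kube-apiserver static pod spec (illustrative values only).
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.28.0    # version is a placeholder
    command:
    - kube-apiserver
    - --client-ca-file=/etc/kubernetes/pki/ca.crt    # x509 client certificate auth
    - --oidc-issuer-url=https://login.example.com    # OIDC provider, e.g. Azure AD
    - --oidc-client-id=kubernetes
    # ... many other flags omitted
```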
--> So he passes through the first gate successfully, and the request goes to the second gate, the authorization gate. --> Here we check whether Evan can perform the get secrets operation or not. --> In Kubernetes we use a concept called RBAC, role-based access control. --> This came from OpenShift; they contributed the concept to the Kubernetes community. --> Role-based access control is pretty straightforward, so I'm going to explain it and we'll do a quick hands-on, because you will see a couple of questions about RBAC on the exam. --> Just list down all the resources we discussed and all the verbs we discussed. --> For example, the verbs: get, create, logs, exec, describe, delete; list down all the verbs. --> And list down all the resources: pods, secrets, config maps, deployments, replica sets; list down all the resources we discussed. --> With this, we are going to do a simple thing. --> First, we create a role: role.yaml, kind Role. --> You give a name for your role, let's say pod-reader, and in the spec you list the allowed verbs, say get and describe, two verbs, on the resource pods. --> What it means is you created a role called pod-reader: whoever has that role can perform only get and describe operations on pods. --> This is just the role; it is not yet attached to Evan or Colin. --> Once you have the role defined, all you do is create one more YAML file, a role binding, where you bind the role to a specific user or to a group. --> Group information is also not maintained in Kubernetes; the external authentication provider supplies the group information, so you can also bind the role to a group, and if the user is part of that group, the role takes effect. --> So, kind RoleBinding, and under the spec you say something like: apply to user Evan (the subject can be a user or a group), role reference pod-reader. --> This is the object that actually binds the role to the user called Evan, which means Evan can perform only get and describe on pods. --> If he tries to delete a deployment or do anything else, he won't be able to; the request is simply rejected because he is not authorized to do that. --> That's how we do role-based access control. --> A Role and RoleBinding apply at the namespace level, only in the namespace where you create them. --> If you want something that applies at the entire cluster level, it is a ClusterRole and ClusterRoleBinding. --> OK, RBAC. --> One sample RBAC Role: pod-reader, allowed to perform only get, watch, and list on pods. --> And another example, a ClusterRole: the namespace field is skipped here because it's at the cluster level; secret-reader, allowed get, watch, and list on secrets, only those three operations.
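A minimal sketch of that pod-reader Role and its RoleBinding is below; the namespace and user name are placeholders. Note that the RBAC API verbs are get, list, watch, and so on; "describe" is a kubectl convenience command that roughly maps to get plus list:

```yaml
# Sketch of a namespaced Role and RoleBinding (illustrative names).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]                  # "" is the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: evan                       # placeholder user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```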
--> And if I want to bind that role to a user, then a RoleBinding: user Evan, role reference pod-reader. --> The same goes for a ClusterRoleBinding, nothing different; in this example we bind it to a group named manager instead of a user: instead of user Evan, it is group manager. --> Okay, so that's it about role-based access control. --> At our security gate, based on the RBAC rules Evan has, let's say he is allowed to pass because he has the permission. --> His request passes the second gate and goes to the third gate, admission control. --> Admission controllers are part of the configuration of the API server itself. --> By default Kubernetes ships with some thirty-plus admission controllers, and if you don't enable the right set of them, your API server is not properly configured: many features are simply disabled until you enable the corresponding admission controller. --> For example, suppose Evan submits a pod specification and in it he didn't specify resource requests and limits; he just submits it without them. --> At the admission control layer we have an admission controller which, if enabled, will simply reject the request: include the resource requests and limits, I can't admit you into my cluster because you are not giving these required details. --> So you can do validation like this: if it's fine, admit; if not, reject. --> In some cases it will also mutate. Assume he submitted a request that doesn't have any namespace information in it. --> At the admission control layer, an admission controller can manipulate the request: it just fills in the namespace as default, and then the request is considered for processing. --> So mutation also happens, which means what you submit is not necessarily what you will see, because in between an administrator can mutate it at the admission control level. --> Let me give you one valid example. Let's say multiple teams are submitting applications to your cluster and you decided to run a sidecar alongside every application, something that does helpful things for you to manage the cluster. --> When a team submits their application, they submit a specification with only their container definition, but at the admission control layer you can inject one more container into their specification and then send it on for processing. --> So as an administrator you have more control here: you can do validation or mutation. --> If you look at the documentation, there are many admission controllers; whichever is relevant for you, you can simply enable it. --> We enable or disable them by name. --> If you want, you can also write your own admission controller. --> OK, and remember the behavior you observed earlier: you submitted a PVC without a storage class name, and by default the standard default storage class got assigned.
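For reference, enabling admission controllers by name is done with flags on the API server; a sketch of the relevant lines in the kube-apiserver manifest could look like this, with an illustrative plugin list rather than a recommended one:

```yaml
# Excerpt from a kube-apiserver static pod manifest (illustrative plugin list).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,DefaultStorageClass
    - --disable-admission-plugins=AlwaysPullImages
    # ... other flags omitted
```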
--> That behavior happened because in your cluster the DefaultStorageClass admission controller is enabled. --> If you disable it, nothing will happen when you submit the PVC: no dynamic provisioning, nothing gets assigned to your PVC. --> So it is API server functionality itself; you need to enable the right set of admission controllers for your cluster to behave properly. --> OK, you can refer to the documentation; it has a definition for every admission controller. --> For example, NamespaceExists: this admission controller checks all requests on namespaced resources other than Namespace itself, and if the namespace referenced by a resource doesn't exist, the request is rejected. --> So if you try to submit something with a namespace that doesn't exist and this admission controller is enabled, it will reject your request. --> It's in the documentation; you can search for it. --> So, three gates: authentication, authorization, admission controllers. --> What is more important for you from the exam perspective is RBAC, role-based access control. --> So now we are going to do one quick hands-on for RBAC; you can expect two or three questions on RBAC. --> You can follow an example from the documentation. --> OK, let's go with this Bitnami documentation, something we can trust. --> Let me include this URL in the Word doc; take this URL. --> First of all, we are in minikube, don't forget that, so you need to stop your minikube with minikube stop and start it again with RBAC enabled. --> Enable RBAC in minikube and go directly to the use case 1 section; that page has many use cases, but try only use case 1. --> What you are going to do: create a user named employee who is part of a group called bitnami, and add the necessary RBAC policies so that this user employee can manage deployments only inside a specific namespace called office. --> If you have another namespace, say prod, he won't be able to do anything in that namespace. --> For that you are going to create a user, create a namespace, create a role, create a role binding; you'll do it all. --> It has multiple steps: creating the namespace, creating the user credentials, creating the role, creating the role binding, and finally verifying it; some ten commands or so. --> Let's take five to ten minutes and give this a try: role-based access control. --> Go ahead; the last topic of the day. Let's see who gets it working first. --> If you have already completed it, just let me know so you can share your learnings with others. --> Okay, is there an error you are seeing? Is that an error you are facing? --> Who is this, by the way, so that I can look into your screen? Shahid, okay, let me check. Thanks for that. --> Okay. Yes, please. --> Yeah, just modify the version; look into the documentation and update this one.
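For those following along, here is a rough sketch of the kind of Role and RoleBinding that use case builds; the exact names, API groups, and verb list on the Bitnami page may differ slightly:

```yaml
# Rough sketch of the RBAC objects for the employee/office exercise.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager          # illustrative name
  namespace: office
rules:
- apiGroups: ["", "apps"]           # core group for pods, apps for deployments/replicasets
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: office
subjects:
- kind: User
  name: employee
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
```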
--> Maybe by the time they wrote the documentation it was v1beta1; right now it will be a different version. --> Let me check the apiVersion for the Role in the documentation. --> Right, the Role value: it's rbac.authorization.k8s.io/v1 now, not v1beta1. --> Okay, perfect; you completed that. Perfect, good job. --> Would you like to quickly share your learnings with everyone, what you found, if you're comfortable? Okay, perfect. --> All right, last two minutes for this activity. --> Okay, that's time for this activity. --> Now let's cover one last little bit of theory, and then we are all set to end the training. --> We already did a high-level review on the resources that we covered, right? HPA, VPA, and so on, so I won't repeat that list; we just discussed RBAC as well. --> So there is one last thing I would like to discuss and then we can end the training. Is that okay? --> Or do you guys already feel it's a lot for day two? What's your feedback? Shall we end, or can we make use of the next 15 minutes? I'm leaving it up to you. --> Okay. --> So, by any chance, are you using Helm for packaging your applications? --> Helm. Okay. --> Everybody in the group already knows how to use Helm, right? That's my understanding. Okay, no problem. --> I'm just asking because of what you need to know for the examination; I will just cover it. It may be basic for most of you, but it can be a refresher as well. --> So, if I want to deploy one single service to Kubernetes, I need to create multiple YAML files. --> Based on the discussions we had on day one and day two: first you need to create a YAML file for the deployment, then one YAML file for the service; I'll share my screen, I hope you are seeing it. --> Deployment, then service and ingress, one YAML each; and if it is using configuration, a config map and secrets; and if it is using PV and PVC, we need to create those too. --> So at a minimum, you need to create this many resources. --> And if this is for version 1.1 and you are going to deploy version 1.2, maybe changes are not required in all the YAML files, but at least the deployment will change, and based on the scenario maybe the config as well, correct? --> So for one single microservice you need to maintain this many artifacts in different versions. --> Think of maintaining some 20 microservices; this already becomes a headache for you. --> So we use tools: for example, you might have used a package manager like APT, where you simply run apt-get install with some tool name, and immediately all the required things get installed and you can directly start using the tool. --> But in my case, if I ask you to install this application in Kubernetes, first you need to create the config map and secrets, then the PV and PVC, then the deployment, the service, and the ingress.
--> This can be simplified so that I can do something as simple as a single install command, which takes care of deploying all of these artifacts so the application is up and running straight away. --> That's where Helm shines. --> You are going to package your Kubernetes application as Helm charts. --> If I package it, the output I get is a chart, and this chart is maintained in a chart repository where all the application charts are kept. --> So if I execute a command like helm install with the chart name, it downloads the chart from there and installs all of these components in Kubernetes. --> So basically Helm is a package manager for Kubernetes. --> In the examination, you may be asked to quickly create a chart for your application, and we are going to try the commands tomorrow anyway. --> So what do we need to do? There is a proper structure. We create a folder called templates and move all of these files into it, then open them file by file and templatize each one. --> Which means you open this file, and where you now see an attribute like nginx 1.2, you template it; you move out whatever part will change. --> Similarly, you open every file and templatize it all, so at the end what you have are templates only. --> In the same directory as templates, you will have a values.yaml; this is the one that holds all the values. --> For example, in the values.yaml you will see an image field, and that will have the nginx 1.2 value; whatever values are here get populated into the templates. --> We will also have a Chart.yaml with some information like the version and the name of the application, plus a couple of other things. --> Once you have this proper folder structure, you execute the helm package command against it, and that gives you the chart. --> This is the one that is maintained in the chart repository, and this is the chart you search for and then install. --> Helm is altogether a different tool; it's not part of the core Kubernetes distribution. --> You can refer to the site later, and tomorrow we will do a simple example to package something. --> In the exam, they would expect us to package a Helm chart from the given artifacts, so we should know the basic commands; as long as we know those, we are all good. --> I think that's enough for the Helm high-level overview. --> Okay, that's it; we successfully completed day two, and I hope you guys learned a lot of things today. --> Do you have any specific feedback that you want me to correct for day three? How about the pace on day two? Any specific feedback? Did you guys enjoy day two? --> Thank you. Welcome again. --> Good, good, thanks. Glad to hear that feedback. --> Thank you. And Colin. --> Okay, so this was the question that you asked in the morning, right, about dealing with the certificates? Am I right? --> Okay, this is noted; I will check if I can share something with you tomorrow. --> Thank you, Colin. And over to you, Shahid.
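Coming back to the Helm chart structure described above, here is a rough sketch of the layout and the templating; the chart name, image, and values are placeholders, not the actual course artifacts:

```yaml
# Hypothetical chart layout:
#   mychart/
#     Chart.yaml
#     values.yaml
#     templates/
#       deployment.yaml
#       service.yaml
#       ingress.yaml

# Chart.yaml
apiVersion: v2
name: mychart
version: 0.1.0
---
# values.yaml
image:
  repository: nginx
  tag: "1.2"
replicaCount: 1
---
# templates/deployment.yaml (excerpt): the changing parts are templated, e.g.
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Packaging and installing would then be roughly "helm package mychart" followed by "helm install my-release ./mychart-0.1.0.tgz", or installing from a repository once the packaged chart has been pushed there.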
--> Perfect, perfect. Thank you. --> And then Shalangan. Yep, over to you, Shalangan. --> Thank you. Thank you so much. --> I'm really glad this day really helped you guys; keep up that excitement. --> Tomorrow is going to be a day full of activities. --> I will explain things, but you will have more activities to do, more from an exam perspective. --> We will go through the CKA curriculum line by line and try hands-on for each item. --> Okay, so gear up for tomorrow and enjoy the rest of your evening. --> Thank you. --> Bye.