2022-11-21
United Arab Emirates - Certified Kubernetes Administrator (CKA) - exam preparation
WEBVTT

If you are disconnected from the training room, please feel free to join back.

Okay, so it can be because the volume is referring to an NFS server, but this NFS server doesn't exist, so it's a valid error. It tries to mount the volume, but then it's unable to reach the NFS server, right? So I would recommend you replace this with hostPath. If you go to 1.5-pod-volume.yml, replace the NFS volume with a hostPath volume, because that is available in the host machine anyway.

Welcome.

Jury, what you are trying to test is a valid thing, but the host is not your Ubuntu machine. The host is the Minikube machine. So you need to SSH into Minikube. Can you type minikube? Yeah, `minikube ssh`, and you will see that. Okay, perfect. To come out, you can just type `logout`; that will do.

Last four minutes to complete this activity.

Okay, sure. So, port forwarding. The port number within this container is 8080. We used 80 for nginx, but this custom application is exposed on port number 8080. So: 8080 colon 8080. You specified it as 80, right? You see my screen? Okay, let me explain it here. You specified it as 80 colon 80. The number on the right is the port inside the container, but our custom application is exposed on 8080. So if you correct that, it will work.

No, no, it can be either way: the card web application is accessible on port 8080, whereas nginx was accessible on port 80. So we used 80 for nginx, but for the card application it is 8080. The port on the left can be any available port on the host machine: 8081, 8082, 8080, that doesn't matter. But the one on the right side is the port number within the container where your application is exposed.
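The port-forward syntax being discussed can be sketched like this (the pod name `card-pod` is an assumption for illustration; this requires a running cluster):

```shell
# Syntax: kubectl port-forward <pod> <host-port>:<container-port>
# The host port (left) can be any free port on your machine;
# the container port (right) must match where the app listens.
kubectl port-forward card-pod 8081:8080

# In another terminal, reach the app through the forwarded port:
curl http://localhost:8081
```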
Okay, `kubectl get pods`, and it is still in the Terminating state. Can you do it again? Get pods... okay, it's already deleted. Welcome. Good job, Hermo.

Okay, guys, I need to stop you here because it's already time to move to the next topic. I hope you are clear with the concepts; you can try this later. Let's proceed to the next Kubernetes resource.

Okay, so in our examples we created multiple pods: the card pod and then the nginx pod. If I go back here and do `kubectl get pods`, there are a couple of pods running. I can delete a pod: `kubectl delete pod card my-pod`, and that's going to delete the pods. So what happens? I don't have a pod with the name my-pod. `kubectl delete pod my-pod`... okay, it's already deleted, maybe. Come on, nothing happened. We deleted a pod, and then it stayed deleted. Correct. So there is no pod running right now in Luis's namespace.

So now, instead of creating a pod, I'm going to create a top-level resource, another Kubernetes resource. I'm going to introduce new Kubernetes resources, and these new resources are going to bring some new capabilities or features to the table. Okay, for example, we know we use the `kubectl run` command to create a pod. This time I'm going to create one top-level resource called a Deployment: `kubectl create deployment` with the image nginx, and you can even specify how many replicas you want to run. Let's say I want five replicas, five instances of the nginx pod, for a deployment with the name my-deploy. If I press Enter, this is going to create pods and some top-level resources. As usual, let's go with the dry-run approach.
Let's generate this as a YAML file: dry-run as client, output as YAML. Here you can see the kind now says Deployment, some metadata like name and labels, and we have a specification. Notice two things: there is a spec, and under that we have another spec under the template. The outer spec is the specification that corresponds to the Deployment resource, and the inner one corresponds to the pod specification, because at the end the deployment is going to create pods. This inner spec is the pod specification. Basically, this is the place where you put all the container configuration: resource requests and limits, liveness probes, readiness probes, volume mounts. Everything goes here. A pod specification is encompassed by a deployment specification.

Okay, so now I'm going to apply this file. My-deploy... okay, I didn't write it to a file yet. I'm going to write it to the file. Come on, my keyboard has a split-second delay. my-deployment.yaml. There you go. And now I'm going to apply this one: `kubectl apply -f my-deployment.yaml`. And it says deployment.apps/my-deploy created.

So if I do `kubectl get pods`, because I specified replicas as five, I can see five pods created by this deployment. The name of each pod starts with my-deploy, so I can easily recognize that these five pods were created by the my-deploy top-level resource.

Okay, so now what I'm going to do is pick one pod. Let's say I take the pod whose name ends with xxlbp, and I'm going to try deleting that pod: `kubectl delete pod` with this name, the usual delete that we do. And it says deleted. Which means out of five we deleted one, so we expect four pods.
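The manifest produced by the dry run looks roughly like this (a sketch of the generated output; the nesting shows the deployment spec wrapping the pod spec, as described above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-deploy
spec:                      # Deployment specification
  replicas: 5
  selector:
    matchLabels:
      app: my-deploy
  template:
    metadata:
      labels:
        app: my-deploy
    spec:                  # pod specification starts here
      containers:
      - name: nginx
        image: nginx
        # resource requests/limits, liveness/readiness probes,
        # and volumeMounts would all go in this section
```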
But we got five, and there is one created 11 seconds ago, which means the pod we deleted was actually deleted, but the next moment a replacement was created immediately. No matter how many times you delete this pod, a replacement will get created again and again and again. But that was not the case when we deleted the nginx pod or the card pod, because what we created earlier were standalone pods. Nobody is there on top to manage them; in that case, if we delete one, nothing happens.

But what we created now is a deployment. First we created a Deployment, and this deployment internally created one ReplicaSet, and this replica set is the one that created those five pods. We specified the replicas as five here; this information is cascaded down to the replica set, and the replica set is the one that created the pods. So the moment I delete a pod, the replica set is the one that creates a replacement, because the only duty of a replica set is to always ensure the current state matches the desired state. The desired state is five, which is what was submitted by the developer or administrator who created this deployment. The moment I delete one, the current state becomes four while the desired state is five. There is a mismatch, so the replica set spins up one additional pod to bring it back to the desired state. So basically this guy is the one always steering towards the desired state, because the replica set is running an infinite reconciliation loop whose logic is: always make sure the current state matches the desired state. That's its only duty.

Okay. So if I go back here: will it also delete, if you go above the desired state? Yeah, yeah.
If you have five, you add one more, and the desired state is five, then the next moment the replica set performs reconciliation: the desired state is five, the current state is six, so it will randomly pick one pod and delete it to bring it back to the desired state.

So if you look here, `kubectl get deploy`: this is the deployment we created, and this deployment created a replica set, cascading that configuration down, and the replica set is the one that created those five pods. You can see the pod names start with the replica set's name. Okay.

Which means, if I want to scale up or scale down, there are a couple of ways. One, you can imperatively use the `kubectl scale` command: scale deploy my-deploy, and then you specify the replicas. Let's say I want to scale down from five to three. If I do `kubectl get pods`, two pods will be terminated and I'll have three. If I give a higher count, it's a scale-up; a lower count is a scale-down. Right, this is imperative scaling. The other way is declarative: you update the replicas property in the YAML file and then apply it. That is also a way to increase or decrease the count.

Okay, so this scale command is also applicable to the replica set. For example, `kubectl scale rs`: the scale command works for replica sets and for deployments. But scale is not applicable to a pod, which means you can't do something like `kubectl scale pod`. That's meaningless, because these top-level resources are what get scaled, and scaling them is what actually scales the pods, isn't it? So I'm going to scale the replica set, and I'm going to specify the replicas as five. Right now that replica set holds three as its state, because we already scaled the deployment.
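The two scaling approaches just described can be sketched as follows (the `sed` line is only an illustration of editing the manifest; normally you would edit the file by hand):

```shell
# Imperative: scale the deployment directly.
kubectl scale deploy my-deploy --replicas=3
kubectl get pods                 # two pods enter Terminating

# Declarative: change spec.replicas in the manifest, then re-apply.
sed -i 's/replicas: 5/replicas: 3/' my-deployment.yaml
kubectl apply -f my-deployment.yaml

# scale also works on replica sets, but never on pods:
kubectl scale rs my-deploy-77d574f4d8 --replicas=5
```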
Oh, I misspelled "replicas". And it says... I think I pasted a pod name here; it needs the name of the replica set. Okay, it says scaled. Which means if I do `kubectl get rs`, I expect... it was three earlier, and I scaled it to five. I gave five and it says scaled, but it still shows three, and if you look at the pods, there are only three pods.

The reason behind that is: yes, you can scale a replica set, but this replica set is not a standalone replica set. This replica set is managed by a top-level resource. So even though it accepted the change, you cut into this hierarchy and tried to scale it, and the next moment the deployment overrode it: "Hey, I am the top level. I am the one who created you, so you should comply with my desired state, and that is three." So even though it accepted five, it was immediately overridden back to three by the top-level resource. Okay, so that is number one: you need to scale only at the top level.

And if you try to delete a pod, it gets created again and again. Similarly, if you try to delete the replica set, it will get created again. So the only way to delete everything is to delete the deployment. There are two ways to delete: one is to specify the name of the deployment, or, if you have the YAML file, you can specify the name of the file, just as you did for apply. It will read the YAML file, match it with the current state, find the object, and then delete that resource. Okay, so I'm going to delete this deployment. If I press Enter, this is going to delete the deployment, then the replica set it created, and because the replica set is getting deleted, it will also delete all three pods. So basically the whole hierarchy will be deleted.
Okay, but just for demonstration's sake, I'm going to delete with the cascade flag set to false. What that means is: out of deployment, replica set, and pods, I'm deleting only the top-level resource. I don't want the delete to cascade down, so I set cascade to false. Only the top-level resource will be deleted, which means the replica set and pods will still be there. So if I do `kubectl get deploy`, there is no deployment. But if I do `kubectl get rs`, it's still running with a status of three, and those three pods are also still running.

At this point, if I scale the replica set, it works perfectly fine, because this replica set is now a standalone replica set. There is nobody on top of this guy to override it. You can see it's already scaled up: if I do `kubectl get pods`, I see five pods managed by this replica set.

Okay, the same way: deleting a replica set normally deletes the replica set as well as all five pods. But just for the demo's sake, I'm going to delete the replica set with the cascade flag set to false. So, `kubectl get rs`... `kubectl delete rs my-deploy-77d574f4d8`. My bad, I missed the full name of the replica set. Okay. Now `kubectl get rs` shows no replica set running, but if I do `kubectl get pods`, because I specified cascade as false, only the top-level replica set resource was deleted, which means I have all five pods. All five pods are now orphans, or in other terms, they are all standalone pods now. There is no top-level resource managing them, which means if I delete one, it simply gets deleted. Nobody is there on top to create a replacement or whatsoever.

Okay, so it's five minutes to four, right?
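The cascade demonstration above, collected as commands (the replica set hash is illustrative; note that recent kubectl versions spell this flag `--cascade=orphan`, the older `--cascade=false` form being deprecated):

```shell
# Delete only the Deployment; orphan its ReplicaSet and pods.
kubectl delete deploy my-deploy --cascade=orphan
kubectl get rs        # ReplicaSet still there, now standalone
kubectl scale rs my-deploy-77d574f4d8 --replicas=5   # now it sticks

# Delete only the ReplicaSet; orphan the pods.
kubectl delete rs my-deploy-77d574f4d8 --cascade=orphan
kubectl get pods      # five standalone pods, nobody managing them
```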
The reason I explained this with the cascade option is just to give a better understanding of the hierarchy: the deployment creates the replica set, and the replica set creates the pods. Which means, if Victoria has an application to deploy in Kubernetes, she can deploy it as a pod, as a replica set, or as a deployment.

If she deploys it as a pod, then it is more like a pet she needs to take care of: if it goes down, she needs to bring it up manually. Right, everything about maintaining the state she needs to do manually. But she can still use liveness probes, readiness probes, and resource requests and limits, because those are at the pod level.

A replica set she will go for if she wants to maintain a specific count. Let's say she wants to make sure five instances of the application are running at all times. Then she can create a standalone replica set with kind ReplicaSet and specify the replicas as five. This will do that CS-equals-DS job: it will make sure five instances of the pod are running in the cluster at all times.

A deployment I would go for if you are looking for zero-downtime deployment, which means you want to roll out a new version of the application during peak business hours with zero downtime. At the deployment resource level we can specify something called a strategy, with which we can make use of this feature. For Victoria's application, she is performing multiple deployments in a day, and she really needs these features: if something goes wrong, she wants to immediately undo that version's deployment. If she's looking for features like this, then she should deploy the application as a deployment.
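A standalone ReplicaSet of the kind just described might look like this (a sketch; the application name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-rs
spec:
  replicas: 5            # the desired state the reconciliation loop enforces
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx     # stand-in image for the example
```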
But let's say this application changes rarely, meaning once a year or once in two years you do some code changes. In those cases I don't want the undo feature, I don't want the zero-downtime feature, I don't need any of that. All I need is: as long as I have five running, it is OK. Then she can deploy it as a replica set. But if all Victoria has is just one application, just one instance running in the cluster, and she will take care of it on her own, then she can run it as a pod.

So you got the idea, right? You are going to run the container either way, but based on what capabilities and features you want, you deploy it as one of these top-level resources. Okay.

Which means, going back to the diagram: for every microservice M1, M2, M3, you are going to create a deployment, and that creates the pods. Let's say the replicas is five; then it creates five pods. For M2 you will create a deployment that creates its own replica set, maybe with a different replica count. And for M3 you will create a deployment that creates the pods; let's say here you have three, so three pods will run. So every microservice is deployed as a deployment, and its pods will be running.

So now we are clear with the pod; we already dove into it. We are also clear with the replica set: its only duty is the CS-equals-DS job, nothing more than that. And the deployment is there, but we are yet to deep-dive into it: features like strategy and rolling update we are yet to discuss. It's there to provide some additional capabilities for zero-downtime deployment and that kind of thing. Fine.

So if I delete this pod, this replica set takes immediate action to create a replacement. If I delete this one, only this replica set takes action. Correct.
So how do these top-level resources identify their child resources? How does a deployment identify its replica set and its pods, and how does a replica set identify its pods?

Yeah. Yes, yes, you can quarantine a pod, and you can adopt a pod. I will explain that while we talk about labels and label selectors. Yes, that is very much possible.

So, how do these top-level resources identify their child resources? Is it by the name? Because if you look at the names, the name of the pod starts with the name of the replica set, and the name of the replica set starts with the name of the deployment. That's the naming convention it uses, but that is not the logic it uses to identify child resources. How it works, how this hierarchy or any hierarchy in Kubernetes works, is by one simple concept called labels. Labels are one of the simplest things in Kubernetes, but a powerful one: simple and powerful.

Okay, let me explain how it is simple and powerful, right? Let's take the same deployment scenario. You created a deployment with the name my-deploy, correct? While creating the deployment you didn't specify any labels explicitly, so by default Kubernetes will assign a label to this resource, a key-equals-value pair: app=my-deploy. app is the key, my-deploy is the value. Then this deployment creates a replica set, and after the replica set is created, it assigns this label to the replica set. In addition to this label, Kubernetes will also add a label called pod-template-hash: when the replica set is created, it gets a unique ID, a long random hash like a14bck-something, and it puts that as a label here too, with pod-template-hash as the key and that long random hash as the value.
So basically the replica set will have two labels, and this replica set created those five pods, so on all five pods you will see these two labels. Okay, so step one is assigning the labels. I already told you how they are assigned: the deployment puts the label, the replica set creates the pods and assigns those labels to them. You can assign a label to any Kubernetes resource.

If I go back here, let's say you want to view the labels. You can describe the resource with kubectl; that is one way to see the labels. Or you can simply include the flag `--show-labels`, which shows an additional column where you can quickly see the labels. For example, the previously created deployment has four pods, and they all carry those two labels: the label that came all the way down from the deployment, and the pod-template-hash label from the replica set. Both of those labels are also on the replica set itself.

Okay, so you can assign a label to any resource: pod, deployment, replica set, node, any Kubernetes resource. For example, I can assign a label to this pod: `kubectl label pod` with the name, and key equals value, let's say env=prod. And if I do `kubectl get pods --show-labels`, you can see the label is assigned here. If I want to delete a label, it's the same key with a minus: that deletes the entire label. As you can see, I no longer have the label env=prod. As simple as that.

So going back here, that's how deployments assign labels to resources. Fine, this is about assigning the label. This is just part one of the story. Part two of the story is that these labels are used in selector queries. Which means, if I go back here, I'm going to assign that label back. Where is the label command to assign it? env=dev.
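The labeling commands above, collected in one place (the pod name is a placeholder):

```shell
# View labels as an extra column.
kubectl get pods --show-labels

# Assign a label (key=value) to a pod.
kubectl label pod my-deploy-77d574f4d8-xxlbp env=prod

# Remove a label: the same key followed by a minus sign.
kubectl label pod my-deploy-77d574f4d8-xxlbp env-
```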
Okay, I assigned the label to a pod. If I do `kubectl get pods`, it shows four pods. So now I can write a selector query with the `-l` flag, and here I can say: get all the pods that have the label env=dev. As you can see, only one pod is returned, because that's the only one that has that label. Similarly you can do more: all the pods whose label doesn't have this value, or that have the key at all, or whose value is in a given set. There are many operators you can use here to filter down the result set. Okay, so you can use labels in a selector query like this.

If you remember, I mentioned the replica set's duty, the CS-equals-DS job, and I also mentioned it runs an infinite reconciliation loop. What it actually does is effectively execute this command: `kubectl get pods -l` with those two labels. That is the exact query the replica set executes. When it runs this query, it simply looks at how many results come back: three. What is the desired state? Five. Mismatch: create a new pod and assign those two labels to it. The next moment it runs, four will match, and so on until current equals desired. Okay, so this is the exact query the replica set executes. So this overall hierarchy is completely loosely coupled; it's all tied together only by the labels, nothing more than that.

Okay, so now to the scenario someone mentioned, the question about quarantining and adopting a pod. Let's say I do `kubectl get pods`. Here you can see we have four pods with these labels: app=my-deploy and the pod-template-hash, blah blah. Right. So now I'm going to apply the same file again, `kubectl apply -f`: I'm creating the deployment again, and if you do `kubectl get deploy`, it shows five.
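Selector queries of the kind described, as a sketch (label values follow the walkthrough; the last line approximates the query a replica set effectively runs for its children):

```shell
# Equality-based selectors
kubectl get pods -l env=dev          # label equals a value
kubectl get pods -l env!=dev         # label not equal to a value
kubectl get pods -l env              # key exists, any value

# Set-based selectors
kubectl get pods -l 'env in (dev,prod)'
kubectl get pods -l 'env notin (test)'

# What a ReplicaSet matches on: both of its labels together.
kubectl get pods -l 'app=my-deploy,pod-template-hash=77d574f4d8'
```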
`kubectl get pods`. If you look here, it had four, and now it's five. Basically, only this one pod was created newly; the remaining ones simply got adopted by the replica set, because the first time it performed the reconciliation loop, looking for that specific label, it already found four entries matching. So it simply adopted those pods and spun up just one new pod. Okay, so this is the adoption scenario. With `--show-labels` you can see that the replica set it created got the same ID. If it had created a different replica set with a different ID, then new pods would have been created. In general, the etcd database stores the previous replica set; since one with that ID was available, it simply reused the same replica set. Right, fine.

So now to the quarantine scenario. Let's say I'm going to delete the label from this pod ending in kb5gr, and let's see how we quarantine this pod. `kubectl label pod`, the name of the pod, then the key with a minus. Okay, I missed the "r" at the end. And if I do `kubectl get pods` again: see how many there are now. One is newly created; earlier it was five, now I have six, and from this one I removed the app=my-deploy label. Which means: after I removed that label, the replica set ran its reconciliation loop, that `kubectl get pods -l` query with those two labels, and only four matched, because this pod no longer has the label. But the desired state is five, so it spun up one pod, this one, to make it five. Which means this pod is now an orphan: if you delete it, nothing will happen. We have quarantined a pod.

If I put the label back, then it will get adopted. Let's say I'm going to put the label back on the same pod: app=my-deploy. Okay. Get pods.
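The quarantine-and-adopt sequence, sketched as commands (the pod suffix is illustrative):

```shell
# Quarantine: remove the label the ReplicaSet selects on.
kubectl label pod my-deploy-77d574f4d8-kb5gr app-
kubectl get pods   # count rises to six: a replacement appears,
                   # and the unlabeled pod is now an orphan

# Adopt: put the label back; the ReplicaSet now sees one pod too
# many and randomly deletes one to restore the desired count.
kubectl label pod my-deploy-77d574f4d8-kb5gr app=my-deploy
kubectl get pods
```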
So you can see here: now six match the label, which is one extra. So it randomly picked one and deleted it, and the count became five again.

Okay, the reason we explain this is so that you are clear on how this hierarchy is constructed and how it works purely by labels: what it takes to assign and unassign a label, how to use labels in a selector query, and, with the help of labels, how to quarantine and adopt a pod. Note that it looks only at the labels; it doesn't even check whether the pod it is about to adopt belongs to the same family or not. As long as the labels match, it just adopts or quarantines.

Okay. So with this, go ahead. Kube controller? Good question. Right. So actually, we created a pod, a replica set, a deployment; we are also going to work with DaemonSet, StatefulSet, Job, CronJob. There are dozens of core Kubernetes resources, and there exists a controller for every resource. All of these controllers are rolled up into one single binary called the kube-controller-manager. For every resource it works the way we are discussing, because the controller takes care of that logic; for the replica set, we have the replica set controller performing that logic.

Correct, correct: the controller manager works with the API server. Then it goes to the scheduler, and the scheduler simply assigns a node and sends it to the kubelet. If you scale up or scale down, the kubelet is the one performing the end action. Okay.

So with this, I'm going to give a quick pause here for you to try these commands. Yep, go ahead please. Okay, that's zero-downtime deployment; it works in a different way that we will explore later.
The reason I explained unlabeling a pod here is for better understanding. A use case you can think of: you see that something is going wrong with a specific pod, you suspect it may have been compromised, and you want to immediately unlabel it and disconnect it from the network, so that you can troubleshoot or analyze that specific container separately. Something like that. That's a rare case, right? But for zero-downtime deployment it takes a different approach, which we will discuss when we talk about the Deployment resource.

You are right, you are right. But we are on Minikube, where you have cluster-administrator access, so you are able to do it. At your cluster level you can put some access control on it so that nobody will be able to do it, or something like that. So you can have more control over the labels. But yes, you are right: if different teams are going to work in the same namespace and they share the same labels, yes, things will mess up. So it's important to have unique labels for your applications as per your organization's structure. Let's say you are under a specific tribe, a specific squad, a specific team: then have the labels reflect that organization structure so you can easily manage it all.

Okay, I'm going to paste these commands in the Etherpad. Creating a deployment... oh my God, what happened to this page? Okay, it's still there. Okay, here and there I made some typos, so please ignore those commands while you are trying it. But in a nutshell, what you are going to try now in this hands-on time: you are going to create a deployment from a file, and you are going to understand that hierarchy, how the deployment creates the replica set and the replica set creates the pods, correct? Then you are going to try the scale and delete commands, and then you are going to learn labels: what it takes to assign or unassign a label, and how to use labels in a selector query. And if you are interested, you can try quarantining and adopting pods by labeling and unlabeling.

Okay: ReplicaSet, labels, label selectors. These are the three new things we learned now. Let's take some 10 to 15 minutes to complete this hands-on.