Should come up here shortly. Okay, so namespaces. So lesson two: we're going to learn to isolate resources effectively using namespaces, taints, and tolerations.

K8s uses namespaces to isolate resources within an individual cluster. The names of resources in a cluster need to be unique within a namespace. Although the cluster may contain a default namespace, it is generally good practice to use a unique namespace for application workloads that is neither the default nor the kube-system namespace. The kube-system namespace is generally used for control plane components. For example, when deploying a logging service, it is good practice to create a new namespace for it — correct. Or if you have multiple databases that are deployed with an operator. So you might have an operator — in this case we have an operator called MinIO, which is an open-source S3 — and it spawns two tenants: one is called Loki MinIO, the other GitLab MinIO. In this case it's considered good practice to have three separate namespaces: one for each tenant database and one for the operator. That can really save you time on troubleshooting, and it enables separation between your applications.

Okay, namespace constraints. K8s allows you to define default memory and CPU requests or limits for a namespace. You can define a range of minimum and maximum memory and CPU constraints. This can be useful when using a Helm chart that lacks resource selection in the values.yaml file — and there are a lot of them out there that still lack that, because the upstream developers publish a Helm chart but it lacks a lot of what we expect to see in a values.yaml. You can also define an overall memory and CPU quota, and you can use this to assign a maximum amount of resources to a namespace. This could be useful to define a single-pod limit on the default namespace of a development cluster — that would trigger an error when DevOps personnel try to deploy a resource in the default namespace instead of creating a unique namespace.

Namespaces can be labeled, just like nodes. So just like how we labeled our nodes a little while ago, we can label namespaces, and that's useful for features such as the new Gateway API to enable shared access. Additional feature sets in K8s may require labels to work across namespaces, so you can allow one namespace to work with a service from another namespace.

And then namespaces can be deleted with a simple kubectl command. Be careful using this in production, as it will delete all resources related to that namespace. The ability to delete a namespace can be useful when testing a cluster: sometimes a faulty pod is stuck and will take several moments to spin down, or may not spin down at all. If the namespace isn't being used for other resources, it can just be deleted and recreated, saving you valuable time.

Right, we have a practical application. Let's get all the namespaces in the cluster. And we want to get out of that folder, so just cd to get back to our root — yeah, there you go — and ls to see what's in there. All right, so we have our namespaces: default, kube-node-lease, kube-public, and kube-system. Let's describe the namespace default. So this has no constraints on it.
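As a rough sketch of the namespace constraints just described — the namespace name, the numbers, and the single-pod quota on default are illustrative assumptions, not the course files — a LimitRange and a ResourceQuota might look like this:

```shell
# Hypothetical example: default requests/limits for a "development" namespace,
# plus a quota that caps the default namespace at a single pod.
kubectl create namespace development

kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: development
spec:
  limits:
  - type: Container
    defaultRequest:       # used when a container sets no request
      cpu: 100m
      memory: 128Mi
    default:              # used when a container sets no limit
      cpu: 500m
      memory: 256Mi
    min:
      cpu: 50m
      memory: 64Mi
    max:
      cpu: "1"
      memory: 512Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: discourage-default
  namespace: default
spec:
  hard:
    pods: "1"             # deploying more than one pod in default fails
EOF
```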
And so it has no labels, as far as using it with the Gateway API for shared access goes.

All right, so now we're going to create a namespace called app-a and query it, and you can see that it's there. We can describe it and note the metadata for the name: name equals app-a. Okay, let's label the namespace we just created with a development label and check that it exists — there you go. And now we're going to delete the namespace and verify that it's gone; these VMs sometimes have a slight delay in them.

Okay, so now we're going to create a pod deployment in the app-a namespace and apply it. So that's pod-deployment.yaml — let's save it and then we'll apply it. Yeah, so it's apply -f and then the same .yaml file name you just used vim to edit. There you go: namespace app-a not found. So now we're going to create namespace app-a. We're going to delete the pod before we try to apply it again — well, actually, no, we don't need to delete it, because it didn't apply. So just reapply it, and then check the pod.

Okay, now here comes the fun part: kubectl delete namespace app-a. And just a little pause — and I'm sure you can guess why the pause is taking place. It's gone. Let's check the namespace, and that is gone as well. So you can see it deletes everything within that namespace. It's all gone, non-recoverable. But it's great for development and testing when you're using complex Helm charts and you have pods that are stuck. And, you know, I've worked with complex Helm charts — it can take five, six minutes to spin down after a mistake. Just delete the namespace and it's all gone.

All right, so taints. What are taints? Yeah — no, once you delete it, you deleted the deployment and everything in that namespace. Your file still exists, so if you do ls, your file is still there and you could recreate the pod. But the actual pod deployment itself was deleted from Kubernetes.

All right, so let's go on. I added taints and tolerations to this section because it's part of isolating resources; it wasn't in the description, but it's very useful. So node selectors are a type of node affinity that may be used to schedule a pod on a specific node: they allow pods to be attracted to specific nodes. Taints are the opposite: they allow a node to repel a pod. Taints are applied directly to the node, and taints are viewable in the node resource. Tolerations allow a pod to be scheduled to a node with a matching taint. Tolerations are applied to pods. Tolerations allow scheduling — they don't guarantee scheduling if other conditions are not met — and they work with taints to ensure pods are not scheduled onto inappropriate nodes.

So we're going to deploy a pod to a tainted node. To taint the control plane node — but before we do that, let's describe the control plane and scroll up to where it says Taints. Yeah, a little bit more — right there. Okay, there we go. So now we're going to taint that node. There we go: pod=true:NoSchedule. So now we changed it from none to Taints: pod=true:NoSchedule. Okay, copy the previous pod deployment.
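A minimal sketch of the namespace portion of that walkthrough, assuming the file is called pod-deployment.yaml as in the exercise and that the label key/value are placeholders:

```shell
kubectl get namespaces
kubectl create namespace app-a
kubectl label namespace app-a env=development      # label key/value are assumptions
kubectl get namespace app-a --show-labels

# pod-deployment.yaml is assumed to define a simple pod with "namespace: app-a";
# applying it before the namespace exists gives "namespace app-a not found"
kubectl apply -f pod-deployment.yaml
kubectl get pods -n app-a

# deletes the namespace and every resource inside it - non-recoverable
kubectl delete namespace app-a
kubectl get namespaces
```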
Copy the previous pod deployment YAML — that will make it easy on you — to pod-taint.yaml, so we don't have to type the whole thing out. We're going to add a node selector to the pod-taint.yaml file and apply it. And we're going to use a node selector label that is already on every control plane node, and that is node-role.kubernetes.io/control-plane. True — yep, all control plane nodes. And we're going to apply it. Okay, so we don't need to delete it; create the namespace and then apply the pod again. Okay, now kubectl get pods in that namespace — and it says Pending. Why is that? Let's look at it, let's describe that pod. So we tainted the node, and now the node has an untolerated taint. So it has a taint, and the pod needs a toleration to attach to the tainted node. Remember, when we tainted the node it gained a taint, which allows it to repel pods, right? And so now we need to add a toleration to the spec. Again, delete the pod. If you notice, there was no toleration in there — you can see in your YAML file you have your node selector, but there's no toleration for the taint we added.

Okay, so now copy the previous pod-taint.yaml to pod-tolerated.yaml. We're going to add a toleration to the pod-tolerated.yaml file. Pull that back up and we'll take a look at the file here. You have a more recent version of the Kubernetes API — by a point release or two — than what I used here for minikube. Let's see if we can figure this out. That is pod-tolerated — yeah, you can look at something else, I'm going to try and see if I can figure out what's happening. Okay, let's try to run that again. And look at that — huh, interesting. Yeah, I don't have that error on my version of minikube, so I don't know what's going on, but you have a newer version because they just created your workspace. I tested these out and they worked, so I don't see an error in what you typed unless we just have a misspelling.

Okay, let's try something here. Let's take a look at the node and get the labels — take a look at the control plane label. Notice where it ends with an equals sign: we need a true at the end of it. So let's go ahead and add a true — kubectl label node minikube, and type that out with =true. Yep. All right, so your pod won't be running, I don't believe, but we can try to apply it again — I don't think this will fix it. Try to apply the pod again, but I don't think it's going to run. Something's missing with your version of minikube, something changed, or we have an error in that file somehow — and I'm looking at my version, which runs just fine, and yours appears to be exactly the same.

Okay, yeah, one last time, pull it up — actually, let me check one thing on my minikube. All right, that is pod-tolerated.yaml, so I'm looking at mine. Let's see: NoSchedule... not Equal... nodeSelector... no, the whole thing is lower case, tolerations, lower case. Oh, I see what's missing here. Okay — do you see what's missing? This is my bad; I think I made the mistake on this one. No? Here, let me go back. Sorry — I think I goofed, but this is a good lesson, actually. So, sorry, my slides are stuck; I'm going to show you here.
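A sketch of where the exercise stands at this point — the pod name and image are illustrative assumptions; the taint, the control-plane label fix, and the node selector follow the walkthrough:

```shell
# Taint the control plane node so it repels pods without a matching toleration
kubectl taint nodes minikube pod=true:NoSchedule

# On minikube the control-plane label has an empty value, so give it one
# that the node selector can match (this is the "=true" fix from the exercise)
kubectl label node minikube node-role.kubernetes.io/control-plane=true --overwrite

# pod-taint.yaml: node selector only, no toleration yet, so the pod stays Pending
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-taint              # name and image are placeholders
  namespace: app-a
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: "true"
  containers:
  - name: nginx
    image: nginx:1.25
EOF

kubectl get pods -n app-a                  # STATUS: Pending
kubectl describe pod pod-taint -n app-a    # events mention the untolerated taint
```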
Okay, under tolerations, where you see effect — okay, look at containers and look at where name starts. See how it has a hyphen in front of it? Well, the hyphen needs to be under the "l" of tolerations. So we need a hyphen in front of effect, underneath the "l". Instead of the "e" under the "l", we need a hyphen and a space, and then move all four of those key-value lines over. And let's go ahead and apply it.

Okay, now we'll go ahead and describe that pod, -n app-a. So now if we look at our tolerations, we can see pod equals true and NoSchedule. Okay, and so that matches — and if we pull up the node, yep, describe the node, you'll see that it matches the taint. Scroll up to the top — there we go, Taints: pod equals true, NoSchedule. So you taint the node and you tolerate the pod; otherwise the pod bounces off the node, even though we selected the node properly with the node selector. We've now deployed a pod using a namespace, node selector, taints, and tolerations onto a specific type of node. We can pin it to the control plane, or we can make sure that if something doesn't have that toleration, it is automatically rejected from the control plane. Now, can you think of why that might be used in production? Exactly. Or if you want to pin it to a storage node, for example, right? Your stateless workloads should stay within their worker or agent nodes — not the spray-and-pray effect, which is unfortunately how some developers use Kubernetes: they just deploy across all nodes, and then you end up with resource constraints on a control plane node, for example.

So, all right, and now we're going to destroy the namespace app-a and all of its resources. In development, yes, it's a quicker way to just delete everything in that namespace. And we'll verify that it's been destroyed.

All right, in lesson two we learned how to use namespaces to separate resources, how namespaces can have constraints for memory and CPU, how namespaces can be labeled, how to apply taints to a node to repel pods, how to apply tolerations to a pod to enable scheduling onto a node, and how to destroy a namespace and all of its resources. Yes, yes — I don't have it in front of me; I think that's lesson four or five.

All right, let's go ahead and start lesson three. Yeah? You feel like you're picking it up? So there are a couple of ways to learn Kubernetes. It's a steep learning curve, and there's a lot of information out there on Kubernetes. What I do — because I train with practical Kubernetes, so it's comprehensive but practical — is focus on what you'll encounter day in and day out in a cluster. And if you become good at what you're going to encounter day in and day out, then when you run into that one-off that you may see only once or twice, you can take the time to go do the deep dive down that rabbit hole and learn that concept. Oftentimes, though, those extreme concepts in Kubernetes were put in there by — remember when I mentioned that three years after Kubernetes started, it went cloud native to bring in the cloud engineers? And the cloud engineers brought a lot of tools and concepts with them that weren't Kubernetes-related, but they had the funding behind them, and they supported the Kubernetes teams.
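The indentation fix being described — the hyphen that starts the toleration entry sits at the list-item level, just like the hyphen that starts the container entry — would look roughly like this (pod name and image are again placeholders):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-tolerated
  namespace: app-a
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: "true"
  tolerations:
  - key: pod               # the hyphen starts the list item under "tolerations"
    operator: Equal
    value: "true"
    effect: NoSchedule
  containers:
  - name: nginx            # placeholder container
    image: nginx:1.25
EOF

kubectl describe pod pod-tolerated -n app-a   # Tolerations now include pod=true:NoSchedule
```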
So they have a lot of concepts that they funded to get put into Kubernetes, and that may be the only team that uses that concept. So why spend months studying all of that if you're never going to encounter it in a cluster, right? That's kind of my approach to training. And then I like to train with errors, because you'll remember the errors. If I give you perfectly formatted YAML files and all you do is apply them for six hours, you probably won't remember a whole lot.

So, okay, we're going to get into deployments. All right. Deployments enable workloads that do not typically maintain state within the workload. You set the desired state of a deployment, and the deployment controller changes the state of the deployment to match the desired state. Deployments enable a replica set to manage the pods. The deployment YAML file enables unique labels for each deployment, and the spec selector field is used by the replica set to find the pods to manage. And we'll see that the replica set assigns a unique hash at the end of the pod name, which allows the replica set to track individual pods within that specific replica set.

We are going to ensure a fresh minikube cluster. Stop it — minikube stop. I don't think we want to restart the system right now; I think that's why your terminal was delayed earlier. Looks like you've been downloading updates without even realizing it — automated updates, I love them. And then minikube delete --all. Disconnected from the network? What in the world? Well, okay, hopefully — you do need the network to pull containers in, but other than that you can do it. Okay, interesting. Yeah, I don't have that on mine, so I don't know why that is.

Yeah, so create the nginx-deployment.yaml file. Type that in; I will be right back. And note how this is different than the pod we were doing: this is a deployment with a replica set, we're using the replica set controller, and so we need to label everything. Yeah, that looks good. Where we can, we'll try to reuse some of these files once you type them out. Oh — we stopped minikube, but we didn't create a new cluster. So that is your kubectl working, and it is saying, hey, there's no API server available. Yep, that's one of the many errors you'll see: connection refused. Now, it doesn't actually tell you there's no API server — that's an area where it would be nice if they had more verbose error messages. So, minikube start. We're going to use that command quite a bit through our practices over the two days. And here in a minute we'll take a break for lunch, and when we come back after lunch we'll knock the rest of this lesson out.

All right. Let's get the deployments, check the ready state, and view the replica set. Notice the unique hash for the replica set added to the deployment name — that is for that particular replica set. Now the next step will be to view the pods. Okay, so we take that one step further. We see the name of the pod: the nginx-deployment replica set name, which is 6cf-something, and then notice how each pod has a unique five-character alphanumeric hash at the end of that — that is the pod hash for each one.
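A minimal sketch of an nginx-deployment.yaml along the lines described — the label key/value and image tag are assumptions; three replicas matches the walkthrough:

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx           # the ReplicaSet uses this selector to find its pods
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
EOF

kubectl get deployments
kubectl get replicasets    # nginx-deployment-<replicaset-hash>
kubectl get pods           # nginx-deployment-<replicaset-hash>-<pod-hash>
```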
So replica sets apply two hashes. All right, let's describe the deployment. Notice that we have events there: it says scaled up replica set nginx-deployment from zero to three. So it started with zero and then scaled up to three. Actually, hang on — yeah, so, okay, it's three pods, not two. All right, so we describe the deployment. There we go. Okay, yep, replica set. Sorry, my slides are stuck there. Okay, so we have three — we did it with three instead of two — but we're going to scale it from three to four replicas. Yes. Okay. And we're going to verify the scale-up was successful: one new pod was created, because we already had three, and it has one new pod hash. So the replica set hash has one new pod hash — 48 seconds ago we added one new pod, and it has a hash of 4f-something. Was it zqz or something? Hard to read on there. Okay, now we're going to scale it down — very similar, we're going to take it from four to three. And guess which one it deleted? It deleted the most recent one. All right, so now we're going to delete the deployment — and I do that all the time, I have a sticky key on my keyboard, typical. And we'll verify that was successful. Not found, yeah. Same with the pods. All right.

Now we're going to go on to stateful sets. Stateful sets enable our pods to stick to a particular name instead of changing when they restart. In a deployment, the name will end with a hash that changes upon restart — so if you had restarted those pods, you would have noticed they came back with a different hash name. In a stateful set, the name will end with a number that starts at zero and increments from there. A stateful set of two pods will contain the same pod name, but each will end with a different number, such as statefulset-0 or statefulset-1. A stateful set guarantees the order and uniqueness of each pod. This is useful with database applications where you need to access the same pod using an ID and a connection: when the pod restarts and the pod IP address changes, the name and endpoint stay the same for connection purposes. Unlike deleting a deployment, when you delete a stateful set there is no guarantee that all of the pods will be deleted. The best practice is to scale the stateful set down to zero and then delete the stateful set. Stateful sets typically utilize a headless service; however, when deploying databases in a production cluster, you may find that a headless service may not serve your needs and a normal service is required — there are certain connections to databases that won't work with a headless service. Stateful sets are typically provisioned with a persistent volume, although you will see an example without one later in this lesson. And sometimes upstream providers have errors in their headless service, so when you go to connect using the headless service, it actually won't work. That has been encountered with different distributions of database containers that are out there, and then you just have to wait for them to fix it. Yeah, so a headless service enables DNS to be used.
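A rough sketch of a stateful set with its headless service along those lines — the names and image are placeholders, and the persistent volume is omitted to keep the example minimal, matching the simplified example mentioned above:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None            # headless: each pod gets a stable DNS name
  selector:
    app: nginx-statefulset
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  serviceName: nginx-headless
  replicas: 2
  selector:
    matchLabels:
      app: nginx-statefulset
  template:
    metadata:
      labels:
        app: nginx-statefulset
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
EOF

kubectl get pods   # nginx-statefulset-0, nginx-statefulset-1
```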
And we'll go into that a little bit later on when we get to DNS and services. But yeah, headless enables you to use a sticky connection. So for example, if you have a stateless workload — let's say you're running an API and it's connected to a Mongo database — it will use a library to figure out which node is the primary, and it will write to that primary node but read from all of the nodes. It does that through the use of a sticky connection and an actual pod name: no matter whether that pod or that container restarts, the name remains the same, so DNS can always hit that name. So it's used primarily for databases, for stateful applications — I've seen it used exactly once for a stateful application that wasn't a database, and that's it.

All right, we are going to create a stateful set YAML file. Notice that the stateful set created two pods ending in zero and one. So how does this differ from the prior nginx deployment with replica sets? Correct. All right, we're going to scale the stateful set now — let's scale it from two pods to one pod. So we have one of one, and then when we describe the stateful set, we can see that. What is the name of the remaining pod? Well, the stateful set deleted pod one but kept pod zero — so it works backwards. All right, let's delete the stateful set. Correct. Yeah, and you can look at the pods right now — before we scaled, we would also have had an nginx-statefulset-1. Yep, but we still have a stateful set; you notice that even though it has no pods, it's still there. Now we're going to delete the stateful set. Correct. And that's because of their sticky nature: if you try to delete one without scaling it down first, it may leave pods behind. Right. The next one — it's 12:32 — will be daemon sets, so let's go ahead and take a break. Lunch is 30 minutes, how does that work? We'll come back at, let's see, 1:02 Eastern time. How's that? All right, see you then.

I have a poor network connection today. Okay, good. Let's go ahead and get started with daemon sets. So how do you feel about stateful sets and deployments? Okay. Unlike deployments, which manage stateless applications, or stateful sets, which manage stateful applications, daemon sets are designed to run a pod on all or specific nodes in a cluster. This is useful for logging or monitoring applications where you want a node-level service, such as a log collector on every node. This is used by container networking interfaces like Cilium to install a Cilium agent pod on every node. Additionally, when a new node is created, a daemon set will automatically spin up a pod on that new node. You can set a daemon set to only run on specific nodes using the node selector spec.

Okay, we're going to ensure a fresh minikube cluster. Correct. Okay, so we're actually going to — go ahead and take that down, sorry about that. We're going to create a multi-node cluster to demonstrate a daemon set. That's a little slow on the slides. And I believe you have to delete it every time, so, unfortunately. So they're technically worker nodes, but they don't have a role name — they're just empty nodes. So this is not an HA cluster; it's just a single control plane.
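The scale-down-then-delete pattern recommended above for stateful sets would look something like this (the stateful set name follows the earlier sketch and is otherwise an assumption):

```shell
# Scale to zero first, then delete; deleting a stateful set that still has
# running pods may leave pods behind.
kubectl scale statefulset nginx-statefulset --replicas=0
kubectl get pods                                   # wait for nginx-statefulset-* to disappear
kubectl delete statefulset nginx-statefulset
```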
Let's take a look at the pods. All right, what do you notice that's different? I think that's their base CNI. So, minikube — I believe that's the base CNI that it installs. Yes, likely. So now we're going to deploy a daemon set. This slide is misnamed, but this is a daemon set, and it's missing the .yaml. Let me see here — we're going to do fluentd-daemonset.yaml. I can't edit the slides that you see because it doesn't allow me to change anything, but I did change it on my other set. I was going to do an example with Loki for you, or Prometheus, but it was too involved for a minikube cluster. I try to keep my examples fairly simple so you understand the concept instead of getting caught up early on in fixing manifest files. A lot of things actually don't work on a minikube cluster, which has to do with the way it runs on minimal resources — it's set up to basically test concepts like high availability and multi-node setups — so they pared down the feature set, and they also do some Docker magic, and that keeps certain containers from running correctly. You can put the hours in and engineer around it, but I wouldn't recommend doing that for a minikube cluster.

We don't use Fluentd much in Kubernetes anymore; it's a cloud tool, not a Kubernetes-native tool. For Kubernetes-native logging we would use Loki and Promtail, along with Node Exporter and Grafana. Loki and Promtail are the Kubernetes-native log shippers now if you don't want to pay for a solution — open source. But that was too involved; I'd have to set up a MinIO, and that's not going to work here. So that was too involved for just this demonstration.

Oh, sorry, there you go. Yes, kubectl apply -f — and then it's not ready yet. What do you think that might be? So we have three nodes, right? So each node has to download its own image. Okay, let's check the pods, just to make sure the pods are downloading. All right, we might have an error in our YAML file. Let's pull up the YAML file again and look at the image name: quay.io/fluentd_elasticsearch/fluentd — oh, you forgot a "d". At the end of "fluent" there's a "d" missing. It's also in the name; the name should say fluentd as well, up at the top. All right, save it and run it again — actually, delete it first. Well, I think it's just called fluentd — minus f — oh, fluentd, there you go. There we go, now we can apply it. All right, now it's one of those checks again. All right, and you can go ahead and describe the daemon set: desired number of nodes, three; current number of nodes, three. So everything looks good. You can see where it created the three pods, and each one has a hash so that it can track each pod individually. We're going to delete the daemon set, and we're going to clean up the minikube profile.

All right, in lesson three we learned the difference between deployments, stateful sets, and daemon sets; how deployments create replica sets and how to scale replica sets; how stateful sets create unique pods with sticky names and how to scale stateful sets; and how daemon sets can be used for cluster-wide applications.
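A sketch of the corrected fluentd-daemonset.yaml along those lines — the image tag is an assumption based on the standard Fluentd DaemonSet example, and only the pieces discussed here are shown:

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd            # the missing "d" here and in the image was the bug in the exercise
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2   # tag is assumed
EOF

kubectl describe daemonset fluentd -n kube-system   # desired/current pods should match the node count
kubectl delete daemonset fluentd -n kube-system
```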
Pods may take a few seconds longer to deploy in multi-node clusters, because each node has to download its own copy of the image.