All right. So we already worked with ConfigMaps while enabling MetalLB on minikube; if you remember, we viewed one then. ConfigMaps are API objects used to store non-confidential data in key-value pairs. Pods consume ConfigMaps as environment variables. ConfigMaps do not provide secrecy or encryption; if you want to store data that is confidential, use a Secret rather than a ConfigMap. The maximum size for a ConfigMap is one megabyte. ConfigMaps have both data and binaryData fields. The data field contains UTF-8 strings, while the binaryData field is designed to contain binary data as base64-encoded strings.

Secrets are objects that contain sensitive data. Secrets keep our confidential data out of the application code. Secrets are similar to ConfigMaps, but they are intended to hold confidential data. Just like a ConfigMap, the maximum size is one megabyte. Secrets can also be used to provide environment variables. Secrets by default store binary data under the data key; again, that binary data is base64 encoded. However, Secrets can also store string data under the stringData key. Occasionally you have a workload that cannot read base64-encoded data, and in that case, if you're using a Secret, you would store the value under stringData. And that does happen on occasion. Secrets are stored unencrypted by default, so in a default deployment, or this minikube cluster for example, our secrets are not encrypted. Secrets are stored in the cluster's etcd as key-value pairs, and in production, encryption at rest should be enabled for etcd and for secrets. Kubernetes distributions such as RKE2 encrypt etcd and secrets. Secrets also enable the kubelet to pull container images from private registries.

There are multiple types of secrets. As we just mentioned, for pulling from a container registry, the old legacy method is a dockercfg secret; the replacement is a secret of type dockerconfigjson. Not all registries work with dockerconfigjson yet; some of them still need the old dockercfg, so both are still available in Kubernetes. There's basic-auth: if you are using the NGINX Ingress with basic auth, you would use a basic-auth secret. There's ssh-auth, if you need to store your SSH keys. And there's TLS: if you are embedding a TLS cert and a TLS key into a secret, you use the TLS type. There are additional types available that you won't run into as often, or maybe not at all. All right, any questions so far about secrets?

All right, container storage interface. This is part of persistent volumes. The Container Storage Interface, or CSI, is the Kubernetes standard method for accessing block storage, and it is used for enabling containerized workloads to access block and file storage resources. In a production cluster, there are options for full-featured CSIs. One of the better CSI options is a product called Longhorn, which is also free and open source. The Longhorn CSI enables important features such as high availability, snapshots, encryption at rest using LUKS encryption, trim/discard, and logging. Minikube enables volume snapshots and uses a basic implementation of the CSI spec known as the CSI hostpath driver, so it does have a CSI that persistent volumes can utilize.
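Before we move into persistent volumes, here is a minimal sketch tying the ConfigMap and Secret fields together. The object names and values are illustrative, not taken from the course lab files:

    # configmap sketch -- non-confidential key/value data
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config            # hypothetical name
    data:
      APP_MODE: "production"      # plain UTF-8 string
    binaryData:
      logo.png: iVBORw0KGgo=      # base64-encoded binary payload (truncated example)
    ---
    # secret sketch -- confidential data; values are only base64 encoded, not encrypted
    apiVersion: v1
    kind: Secret
    metadata:
      name: app-credentials       # hypothetical name
    type: Opaque
    data:
      password: UzNjcjN0IQ==      # base64 of "S3cr3t!"
    stringData:
      username: k8s-admin         # plain string, for workloads that cannot decode base64

Either object can be fed to a pod as environment variables (for example via envFrom), which is how the lab job consumes its ConfigMap.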
A persistent volume, or PV, is a volume that is provisioned in the cluster. It can be provisioned manually or dynamically. It is a resource, just like a node or a pod is a resource. PVs have a lifecycle that is independent of the resource requesting them, such as a pod. This is a similar concept to creating an external volume in Docker. When you docker compose up, for example, and you're installing a MariaDB database, it will create a volume whose state is managed by Docker Compose, and when you docker compose down, it deletes that volume. With Docker you can instead use an external volume, and then Docker Compose does not manage the state of that volume; it simply reads from it or writes to it.

A persistent volume claim, or PVC, is a request for storage. PVCs consume PV (persistent volume) resources. PVCs can request a specific size and access modes; for example, a PVC can request two gigabytes and a specific access mode. Access modes can be ReadWriteOnce, ReadOnlyMany, ReadWriteMany, or ReadWriteOncePod. All right, any questions about CSIs, PVCs, or PVs? Okay.

All right, make sure you have a fresh minikube profile, and we're just going to spin up a basic cluster. All right, I'm going to create a ConfigMap; I made this file easy for you. I think this is the simplest one yet; these can get complex, but generally they're straightforward. Sticky slide here. Verify the ConfigMap does apply, and we can describe the ConfigMap. There we go: we've got two data points shown, k8s-admin for the username, and no binary data. Now we're going to create a job to access the ConfigMap. Okay, I think it's envFrom, let's see. So under command, look at everything under command; look at the first item under command. That looks good to me. YAML formatting, man. Fortunately, kubectl gives you some great feedback; in a Helm chart, not so much.

All right, so we've got to create the namespace, right? Let's do that real quick. We're going to view the pod events. What do you think that might be? That's correct: you cannot read a ConfigMap from another namespace. Interesting. Now delete the original ConfigMap and verify that it's been deleted. I think so; yeah, I believe that is correct. And now we're going to describe the ConfigMap just to make sure it's correct, and you see something there: it says "Data: 2", so that tells you it has two fields. If you're expecting five and you see two, that's a good thing to find; it's another troubleshooting technique. Okay, there's the namespace. Now we need to delete the ConfigMap job and then reapply it, and verify the pod status. Okay, because, yep, that's what we designed. So we're going to view the logs for the desired output. All right, so what did we put in there for our env vars? We had two vars we put in there, right? We had an access level. Yep, both of them are in there. All right. Deleting the namespace was the fastest way to remove all resources for this practice, so we delete the namespace, that's right. It saves you minutes when you're testing, and minutes add up, right.

Now we're going to practice creating a base64-encoded string of a password. Okay, now we're going to decode it. So you can see, encoding doesn't encrypt it: anyone can decode what is in the string.
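For reference, here is a minimal sketch of that encode/decode practice; the actual string used in the lab may differ:

    # encode a password string to base64 (-n avoids a trailing newline)
    echo -n 'S3cr3t!' | base64
    # -> UzNjcjN0IQ==

    # decode it again -- anyone who can read the secret can do this
    echo 'UzNjcjN0IQ==' | base64 -d
    # -> S3cr3t!

This is the same transformation Kubernetes applies to values under a Secret's data key, which is why encoding is not a substitute for encryption at rest.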
All right, let's create a fresh minikube profile and create a single-node minikube cluster. So weird: it says minikube 1.36.0 on Ubuntu 24.04, but when you query it, it tells you it's using Ubuntu 22. Yep. All right, we are going to create a local, self-signed cert for an ingress secret; we're going to create a secret for ingress. All right, I'm guessing you've done it; that looks like normal output to me. So now we're going to take what we created, a tls.key and a tls.crt, and we are going to turn them into a secret. Now we're going to check that it was created. All right, ingress-tls. Now we're going to view the secret. All right, tell me what you think about that: what are we looking at here? Yep, and they are base64 encoded. What happened is that when we told it to create a secret of type TLS — if you remember back to that kubernetes.io/tls type — it automatically knows to take the cert and key, base64 encode them into the tls.crt and tls.key data fields, and so we can't read them directly. To read them, we need to base64 decode them. It's great that it shows us we have a tls.crt and a tls.key, but we need to see what is actually in our secret.

Okay, so you can see that got us one step further: you can see they're in certificate format now. We took them from base64 encoded to certificate, but that's not the OpenSSL x509 text output. So now we're going to base64 decode and pipe the output onward. The first step is to read the tls.crt. You'll find jq is used quite frequently for reading and parsing data in Kubernetes clusters. All right, what is the Not After date? Yeah. And what is the issuer? Yeah, does that sound familiar? All right, do the same thing for the key; it's slightly different. We're going to use pkey instead of x509: we're using tls.key, and then pkey — change x509 to pkey. Okay, how many bits is this, how many bits in the private key? All right. And now that we've demonstrated that both the cert and key are valid, we can use them in our cluster ingress.

All right, so we have a base minikube cluster; make sure it's up and running. There we go. We're going to configure ingress; we're going to run the configure step for the ingress add-on. Yeah, kube-system, and then we named it ingress-tls — the name of the secret. Now we need to enable the ingress add-on. Okay, now we can get the ingress, and there's no ingress bound, right? That's because we need to create an Ingress — a kind: Ingress — that will communicate with the ingress API. So we have an ingress controller in place, but we don't have an Ingress in the ingress controller, right? So now we're going to create a TLS ingress file, and you'll notice some differences here from what we've done before. When you get this up, go ahead and deploy it; I'll be right back. Let's see, let's try it again — there, the ingress, okay. All right, now let's open a browser and test: try https://nginx.example. All right, now let's describe the ingress, and see, we obtained our TLS cert. All right, so let's go ahead and describe the ingress. And then, there you go. Yeah. All right. Okay, let's take a look at our services. Yeah, we don't have a service, do we? Yeah.
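For reference, here is a minimal sketch of the cert, secret, and inspection steps from this exercise. It assumes the secret is named ingress-tls in the kube-system namespace and the host is nginx.example; adjust the names to match your lab files:

    # self-signed cert and key (illustrative subject)
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -keyout tls.key -out tls.crt -subj "/CN=nginx.example"

    # wrap them in a kubernetes.io/tls secret
    kubectl -n kube-system create secret tls ingress-tls --cert=tls.crt --key=tls.key

    # pull the cert out of the secret, base64-decode it, and read it with openssl
    kubectl -n kube-system get secret ingress-tls -o json \
      | jq -r '.data."tls.crt"' | base64 -d \
      | openssl x509 -noout -issuer -enddate

    # same idea for the key; pkey's text output reports the key size in bits
    kubectl -n kube-system get secret ingress-tls -o json \
      | jq -r '.data."tls.key"' | base64 -d \
      | openssl pkey -noout -text | head -n 1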
So we need to deploy our NodePort service, which is going to be nodeport-service.yaml. Yeah, it should still be there in your directory. Now check your services. All right, we've got the nginx service up, and it's got a cluster IP and it's got a node port. Okay, so now let's describe our TLS ingress again. It still says not found. Let's describe our service and see what it looks like now. All right, do you notice anything different? Anything else? Yeah: the ingress is for a service that's in the kube-system namespace, right? So what we need to do is change the service to be in that namespace. We're going to bring that service down — delete it — then modify the file and add the kube-system namespace. I believe that's nodeport-service.yaml; I think that's what we just deployed. All right, we'll modify it, and yeah, we'll add kube-system for the namespace. We've used other namespaces before, but this time we're sticking with kube-system. Now, in a production cluster, you would not want your developers, if they're deploying applications, to work within the kube-system namespace, but in our case we're having fun today, right?

Now let's check our ingress again — describe the ingress. There we go. All right, what do we have? We have a backend, okay? Notice that we have the backend, nginx-service on port 80, and nothing else beyond that. Okay, so now let's go describe the service. What else is missing? The only thing there without a value — yeah, there are no endpoints, right? In this case the ingress terminates the TLS at the ingress, and then we go from the ingress, unencrypted, to the service and the pods. So we would still be using 80 for our target port: the port is 80, and we're targeting 80, but we're targeting 80 on what? On an app matched by a selector. We're looking for an app named nginx, and we don't have one, so there are no endpoints, right? So now we need to deploy the nginx app. I took you through this a little bit backwards compared with a previous lesson, but in this case we're actually doing a little bit of troubleshooting: we're starting at the ingress and working backwards. That way it tweaks the muscle memory a little bit. So my slides will be out of sync here and won't match up; I mixed it up on you a little bit.

All right, so we need to add a namespace to this. What namespace do we need to add, and where do we need to add it? Yep, that's correct. And we'll deploy that. Let's make sure it's deployed. Yep, it's running. Let's see — where is it? There we go. Okay, so now let's check the service; take a look at the service. What do we see? All right, now let's go look at the Ingress. Notice how the endpoints populated over. And why is that? Because the ingress controller is the load balancer, right, so it needs to have that information. So we're hitting the nginx service on port 80, and then the service is hitting the individual pods on their IP addresses on port 80. So when you're troubleshooting ingress, start at the ingress and work your way backwards.
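Here is a minimal sketch of how the pieces in that troubleshooting chain have to line up — Ingress backend, Service namespace and selector, and pod labels. The names (nginx.example, ingress-tls, nginx-service, app: nginx, kube-system) mirror what this lab appears to use, so treat them as assumptions:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx-tls-ingress
      namespace: kube-system
    spec:
      ingressClassName: nginx            # minikube's ingress add-on installs the nginx class
      tls:
        - hosts: [nginx.example]
          secretName: ingress-tls        # TLS terminates here, at the ingress
      rules:
        - host: nginx.example
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: nginx-service  # must exist in this same namespace
                    port:
                      number: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
      namespace: kube-system             # moved here to match the Ingress
    spec:
      type: NodePort
      selector:
        app: nginx                       # endpoints stay empty until pods carry this label
      ports:
        - port: 80
          targetPort: 80                 # traffic from the ingress onward is plain HTTP

The last link is a Deployment in kube-system whose pod template carries the label app: nginx; without it, describing the ingress shows the backend but no endpoints.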
Sometimes that will help you figure out what is missing, what isn't connecting, what isn't working the way you expect. Okay, so that put all my slides out of whack; let me get them caught up here. I just wanted to switch it up on you, because you were starting to figure out my technique, so I said let's switch this up a little bit. Okay, and so now go to the browser and let's just see if we're all right — I'll skip forward here on the slides — there we go, welcome. All right, so how do you feel about encrypted ingress and the way that worked out? Yeah. And so, just remember what you now know.

All right, so now we're going to create a Docker config secret, and you will use this for pulling containers from your own registry. So if you have your own private containers, Kubernetes will pull them from your private registry. We're using the newer system here — I'm not teaching you the older system, I'm teaching you the newer system, which is what the slides show. And instead of having this with the black background, which didn't show up well, I did it a little differently this time so it's easier to read. Okay, let's check the secret was created. All right, okay, let's describe the secret. Well, that's not really helpful; all that tells us is that there's a .dockerconfigjson in there at 145 bytes. What's in there, right? We'd like to know — inquiring minds. If it won't connect and I have an image pull error, improper credentials, what's in it? All right, take a look. That's not real helpful either, is it? It's all encoded. So we can see what's in .dockerconfigjson, but it's all still encoded. So what do we do? Right, yeah: you decode the secret data, and you can hit enter to get a clean line if you want. Yeah, right? It decoded everything except the auth field. Well, that's not helpful. Now we need to base64 decode the auth, right? How do we do that? Well, remember when we went through the previous exercise and we encoded and then decoded a string called password? Let's do the same thing. Remember how to do that? All right. Well, that was anticlimactic: it's just the username and password. That's how it stores it. So now you know how to create and read a credential secret to access a private repo. If you have your own images that you're pulling into Kubernetes and you're using your own private repo, this is how you would connect with the modern system; or, if your system is still using the legacy format, instead of a dockerconfigjson secret you just use a dockercfg secret.
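Here is a minimal sketch of creating and then decoding that kind of pull secret. The registry URL, secret name, and credentials are placeholders, not the course's values:

    # create the newer dockerconfigjson-style pull secret
    kubectl create secret docker-registry my-registry-creds \
      --docker-server=registry.example.com \
      --docker-username=deploy \
      --docker-password='S3cr3t!'

    # the payload is base64-encoded JSON; decode it to see what was stored
    kubectl get secret my-registry-creds \
      -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d; echo

    # inside that JSON, the auth field is base64("username:password") -- decode it too
    kubectl get secret my-registry-creds \
      -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d \
      | jq -r '.auths."registry.example.com".auth' | base64 -d; echo

A pod or service account then references the secret under imagePullSecrets so the kubelet can authenticate to the private registry.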
All right, next one: we're going to ensure a fresh minikube environment and create a persistent volume claim. You're right — as soon as you do the delete, it disconnects, and that's because it's resetting the Docker network, I think. Docker is gaining control over the host network, which is what it does; it has root access. Docker does that because Docker is a development environment, not a production environment, so it takes root control over everything related to the network so that you don't have any issues when you're spinning something up — because they don't want to listen to developers all day saying, hey, my Docker for Mac or my Docker environment won't let me do this. So they just enable root access and it takes control over everything, and it just works, like Docker does. You definitely don't want to run that in production, though, because it overrides firewalls and everything.

Okay, now we need to enable our storage add-on for snapshots. All right, what do we have going on different here? The snapshot controller — let's take a look at that. Looks like it. So is it a ReplicaSet? We'll find out; let's scroll up to the top there. Let's see — it's a ReplicaSet, yeah. Okay, so it's got two replicas, and it's got leader election, so it's highly available. Nice: it ensures there is always one running. Okay, now we're going to enable a storage add-on for CSI, our container storage interface. So that was the snapshot controller, which is important to have, and now we're going to install the container storage interface. This is a simplified CSI; if we were doing a complex one, we'd be here through the middle of next week, so we're going to keep it simple. Yes — if we were doing Longhorn and setting it up for the first time, ooh boy, getting everything done right is about two weeks of time. Oh, what's going on? A lot going on, uh-huh. All right, is everything else up? Yep.

Let's look at the node pod and see what's going on in it. Correct, six containers in that pod. We can come back and look at that one in just a second and see what all is involved in it. No requests or limits for anything — well, that's kind of lazy of the developers. Why not, right? All right, we'll take a look at that one with six containers in it. What do you see? Yeah. Every time I do that online — I have Proxmox, and it does something really inconvenient that throws me off. Well, we had to pull six images at least, right? Mm-hmm. We've got liveness probes; let's scroll up to the top, it's going to be a long one. Here we go, see what we've got in here. It's a DaemonSet. Interesting. What does that tell us? Mm-hmm. Why might we not want that to take place? Yeah, right. If we were actually running a production cluster, we would definitely not want this on a control plane node, and ideally we would not want it on a worker node either; we would want it on its own storage node. So we would force it to nodes with a role of storage, in high availability, and that way it stays off the control plane and off the worker nodes. We would give the storage nodes a node role of storage, use a node selector to select them, taint the storage nodes, and then inside each of the storage pods we would tolerate that taint. That forces the storage workload onto the storage nodes but doesn't let the stateless workloads you're running deploy there. Yeah, a little complicated, but that keeps your storage separate from your stateless workloads. Good.

Let's get the storage class name — storage class. Mm-hmm. Yeah. Where's our storage class? Now I'm going to create a persistent volume claim. And remember, a persistent volume claim makes a reservation, which then creates the persistent volume. As soon as you get it done, I'll have you deploy it.
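Here is a minimal sketch of the claim being created here, assuming the minikube CSI hostpath driver's storage class, csi-hostpath-sc; the claim name and size are illustrative:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-pvc                      # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce                  # single-node read/write
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-hostpath-sc  # required here because it is not the default class

Once the claim binds, kubectl get pvc,pv shows the dynamically provisioned persistent volume behind it.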
You can see this is not the default class, so we have to tell it the storage class name. If you set it up as the default class, then you would not have to give it the storage class name; it would just use the default. In yours, I think you're missing an "s". Yep, we'll give it a minute here. There we go. So it's storing it on the node, and the storage class determines how it's stored and how it's controlled.

I'll use Longhorn as an example. Again, even though this installed a lot of images, we're using a very abbreviated version of a CSI, or container storage interface. With Longhorn, we would have two classes: a standard class and an encrypted class, and we would set the encrypted one as the default. Anything new deployed against that storage class controller would land on the default storage class, which is encrypted. Everything would be deployed there automatically — so if you're deploying a database, you wouldn't have to specify it; it would automatically go into the encrypted class, which would set your PVC to encrypted. So the storage class controls the settings of how you're creating and implementing the persistent volume claim. And then we would tie those to a specific namespace.

So now we're going to check — what's that? Let's see, the storage class is csi-hostpath-sc. So we did put that in our persistent volume claim; that is the storageClassName in our PVC. Yep. And if we had a default — say we were using Longhorn with the encrypted class as default — we wouldn't need the storage class name, because it would be assigned automatically. In that case, if we wanted unencrypted, we would assign the Longhorn unencrypted class; otherwise, it would provision it encrypted. So yeah, we use PVCs, and a lot of database applications will automatically provision a PVC, or you can provision it yourself and tell the application what the PVC, or the persistent volume, is. So you can provision your own persistent volume claim and feed that into the database configuration file.

All right, so now do both of them together: get pvc and get pv. Notice you don't need a namespace for the persistent volume, but you do need a namespace for the persistent volume claim. All right, let's see. Now I'm going to get the volume snapshot class; let's see what that looks like. So we have a csi-hostpath-snapclass for that. All right, now that we have that, we know we can create a snapshot. So let's create a snapshot. We have a persistent volume claim, we have a persistent volume created by that claim, and now we are going to create a snapshot of that volume. Well, that's why we go through the exercise of typing out the YAML, because it will save you a lot of time later on. All right, the volume snapshot — all right, it's the volume one there. There's our snapshot. We've snapshotted it, and now it's backed up, in a way — it's not quite the same as a backup, but...
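Here is a minimal sketch of the snapshot just taken and the restore walked through next, assuming the add-on's csi-hostpath-snapclass and the illustrative claim name csi-pvc from the earlier sketch:

    # snapshot of the existing claim
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: snapshot-volume-1
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass
      source:
        persistentVolumeClaimName: csi-pvc     # the claim created earlier
    ---
    # restore: a new PVC whose dataSource points back at the snapshot
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-pvc-restore
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                          # at least the size of the snapshotted claim
      storageClassName: csi-hostpath-sc
      dataSource:
        name: snapshot-volume-1
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io

Both objects are namespaced, so the snapshot and the restored claim have to live in the same namespace as the original PVC — which is exactly the mismatch troubleshot below.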
All right, we're going to restore a snapshot of the volume — this could be fun. We're going to create a csi-restore YAML file. What are we doing here? We're creating a new persistent volume claim, but we're using a data source of snapshot-volume-1 to turn that snapshot into a new volume. All right, so now let's apply it. Okay, let's check the PVC for the restored snapshot. Okay, what happened? There's our original PVC, but we don't have a new one in there. Why is that? Check persistent volume claims for all namespaces — get pvc with -A. Mm-hmm, give it a second here; it's getting a fictitious gigabyte. It's got to go through all of the events and reporting and then give it back to us through the Kubernetes API server; a lot goes on behind the scenes. We'll give it another second or two and see if it can sort itself out. The answer to that is no. So you're going to delete the failed restore, change the namespace to match the existing PVC, and apply the modified restore file. So we're going to edit that restore file. It's already done.

How do you feel about persistent volumes, persistent volume claims, and how the container storage interface works? Oh, right, right — yeah, so that's the original way: NFS is the original way with Kubernetes, because we didn't have a CSI. Now, with modern Kubernetes, we use CSIs to manage our claims, whether it's a simple CSI, which is what we've just demonstrated and which is far simpler than what Longhorn would involve, or it's Longhorn, which handles all of this for you with a GUI, encrypts everything, and manages backups and snapshots automatically. So we have the simple option, and Longhorn is the complex one, and that's the modern way. But you can still use NFS, and you can obviously still have a hybrid cluster where you run your stateless workloads on your cluster and then connect to a managed or external database, for sure.

All right, so in lesson eight we learned about ConfigMaps, Secrets, and persistent volumes — and we went a little beyond persistent volumes. ConfigMaps are API objects used to store non-confidential data in key-value pairs. Pods consume ConfigMaps as environment variables. The maximum ConfigMap size is one megabyte. ConfigMaps do not provide secrecy or encryption; if you have confidential data, use a Secret for that. ConfigMaps have both data and binaryData fields: data fields contain UTF-8 strings, and binaryData fields contain base64-encoded strings. We learned how to base64 encode and decode strings. We learned how to create a secret; the maximum secret size is one megabyte, just like a ConfigMap. We learned how to base64 encode both a TLS cert and a TLS key, how to read a secret that is base64 encoded, and how to read an x509 cert and key in a secret, both in certificate and x509 text format, which is very useful. We learned how to create a Docker config secret, how dockerconfigjson secrets are used to access private registries, and how to read and decode a dockerconfigjson secret. We learned how to create a persistent volume claim with its associated persistent volume, how to create a snapshot of a persistent volume claim, and how to restore a persistent volume claim from a snapshot.

All right, any questions on lesson eight? Right, we need to go ahead and take our lunch for 30 minutes; we'll be back at 1:45, approximately — maybe a minute before. All right, see you at 1:45.