Okay. Working with Jobs and CronJobs for scheduled tasks. Jobs are designed for one-off tasks: they run to completion and then stop. A Job, along with any of its pods, will stay in place with a Completed status until deleted. The event describing the completed status within the pod is typically deleted after an hour; however, the pods table will still show Completed. You can use Jobs to run a single pod or multiple pods in parallel. It is good practice to automatically clean up completed Jobs.

An example of a single-task Job might be enabling metrics and adding a metrics user to a database, along with scheduled deletion of the Job after 30 minutes. That example would be slightly complex: it involves secrets with database credentials, along with the metrics endpoint and the shipping and collection of the metrics, and it could take days or weeks to get everything set up correctly. There are a few open source database providers whose distribution is a great product, and the one thing that isn't great is that they don't enable metrics; they want you to ship the metrics to their server in a foreign country. And these are mainstream distributions. So when you learn how to use Jobs, you can go in and turn on those metrics endpoints so that you can ship them within your own stack, using Promtail, Grafana, and node exporter, for example. So that's one of the many uses for a Job. And in that case, the Job would be scheduled for deletion after 30 minutes, so it would no longer exist when you ran your kubectl get pods.

All right. Next, we have CronJobs. CronJobs are designed for repeating tasks or actions, such as database backups, log or metric shipping, and report generation. CronJobs can also be used to trim SSDs in a cluster for garbage collection purposes, or to delete old snapshots where regular snapshotting takes place in the cluster using a CSI, which is a container storage interface. CronJobs use standard cron syntax, and they can also use some of the cron macros, like @monthly. CronJobs create a new pod for each repeated task. You can see here the standard cron syntax that you would use when you create your CronJob, and some of the macros and what they look like.

We're going to create a Job, and we should have a fresh minikube profile, so we're going to just spin up minikube start. And we're going to create a job.yaml file. Yeah, just minikube start for this one. Notice the ready state of 0/1, showing that zero of one pods are running. And notice the status: yes, Completed. So now let's check the Job: it shows that the number of completions is now 1/1. We're going to check the logs to see if it completed; we're looking to see if there was output. All right, describe the Job. Okay, see, it's completed. Now, kubectl get pod. And if you were to get pods -A, do the same thing with -A: that pod is going to hang around forever. So eventually this will clutter up your pod list. All right, so now we're going to delete the Job. You can check to see if it's gone; it should be. Oh, you did. Okay, there we go. So now we're going to create a job-auto-delete.yaml using ttlSecondsAfterFinished.
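To make that concrete, here is a minimal sketch of what such a self-deleting Job manifest might look like; the name, image, and command are illustrative assumptions, not the course's exact file:

```yaml
# Hypothetical job-auto-delete.yaml: a one-off Job that cleans itself up.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  ttlSecondsAfterFinished: 30   # delete the Job (and its pods) 30 seconds after it finishes
  template:
    spec:
      containers:
      - name: hello
        image: busybox:1.36
        command: ["sh", "-c", "echo Hello from the Job"]
      restartPolicy: Never      # Job pods must use Never or OnFailure, not Always
```

Applied with kubectl apply, a Job like this runs once, shows Completed, and then disappears along with its pod once the TTL expires.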
Yeah, we could have copied it, but typing it out is good for muscle memory on YAML formatting. This will save you hours down the road; spending a few minutes now in the course will pay off. It is tedious, though, after a long day when you've been typing and you start mistyping, and that's when the mistakes happen with the YAML: you're at the end of a long day, you're trying to fix something, and you just can't find it.

Okay, so now check the Job status, and we'll wait 30 seconds or so. kubectl get job. Yeah, it's got to wait 30 seconds after it finishes, and it's still creating the container. I forgot your network is slow; I forget which data center Neil chose, but they have a slow connection. All right, so you can see: no resources found. You can check the pods. Yeah, mm-hmm.

Now, if you auto-delete a Job and you would like a record that it has run successfully, what process would be good to implement in the cluster? If you notice, there's no record now that it ran. So you can ship the events and logs, and then you can just look it up there and say: yep, there's my record that it ran. It ships them right out, so even after the Job deletes itself, you still have the record.

All right, create a CronJob. This one is a little bit different, so try to copy it over, but you might make a few mistakes; it's up to you. Now, I have a stuck B key because it's a MacBook, so I get double B, triple B, or no B. It might still be pulling that image in, if you get the pods. All right, so now let's go back and look at the pods. Okay, so we can see one there. And now let's wait. We have to wait a minute, because we set it to run every minute, and it's age 14 seconds. So we'll wait a minute, and then we're going to do pods again, but we'll note the last five alphanumeric characters of the pod hash before we do that. Mm-hmm. And then we can also describe it again, and that will show us that it created a new one. And we'll get up to three there. Yeah, I think it keeps up to three; it'll bounce between two and four.

If we describe the CronJob again, now we have SawCompletedJob, SuccessfulCreate, SawCompletedJob events. And get pods again: we should have maybe a third one in there. No, I guess we're not quite there at 103 seconds; you probably need to be at about 125 seconds. All right, we have three now. Watch the next one that gets created and see how that works. Not bad: two minutes five seconds, 65 seconds, five seconds. On the nose, every 60 seconds. Perfect timing. Oh, well, we're going to wait for three minutes; it's run for two minutes 46 seconds, so we've got another 15 seconds or so.

Yeah, so it will automatically... okay, see, we lost the pod ending in q5bp2. So it will automatically make sure that no more than three are kept; it garbage collects. And then you can go back and describe: there you go, you can see successfulJobsHistoryLimit: 3. If you look up at the top, it shows you the three most recent successful Jobs. And you could probably change that to one; I've never messed with it, but you could change it to one if you only wanted one to show in your kubectl output.
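For reference, a hedged sketch of a CronJob like the one built here: it runs every minute and keeps the three most recent successful Jobs. The name, image, and command are illustrative assumptions.

```yaml
# Hypothetical cronjob.yaml matching the behavior observed in class.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "* * * * *"           # standard cron syntax: every minute (macros like @monthly also work)
  successfulJobsHistoryLimit: 3   # keep only the three most recent successful Jobs
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36
            command: ["sh", "-c", "date; echo Hello from the CronJob"]
          restartPolicy: Never
```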
All right, any questions about that? Very, very useful. I actually use CronJobs on my CSI, my container storage interface, and that lets me garbage-collect snapshots and trim the SSDs, because I use encryption of my storage for my clients, and it's all automated; it just does everything. Otherwise your SSDs will fill up within anywhere from a few weeks to a few months, depending on what capacity you have, and then you'll run out of storage capacity even though you have all of your storage still available. So you have to run garbage collection and do an encrypted trim. All right, so now we're going to delete the CronJob.

All right, let's review. In lesson four, we learned how to automate tasks with Jobs and CronJobs. We learned when a Job is appropriate and also when a CronJob would be applicable to a repeating task. We auto-deleted a Job and learned what happens to the logs and events describing that Job having been completed, and how to mitigate that loss of logs and events. We learned how CronJobs create new Job pods and delete old Job pods automatically. All right, any questions about Jobs and CronJobs? All right, we'll go on to DNS. In Kubernetes, yes, for sure. Oh, let's see. Yeah, you might as well just go ahead and create a fresh one. You know what? Yeah, I'm not sure, let me see here. No, we'll wait, because we're going to do a three-node cluster, it looks like.

All right. Services. Services provide a method for exposing network applications that are running in one or more pods within the Kubernetes cluster. Each pod receives its own IP address within a node, and when that pod is recreated, it gets a new IP address. Constantly changing pod IP addresses present a problem for pod connectivity. Services enable a pod or a set of pods to be discovered within a Kubernetes network without the need to attach an ingress to each IP address individually. The Service API within Kubernetes enables you to expose a group of pods over a network. The Service object uses endpoints and a service policy to enable network access to the pod or group of similar pods. Service objects can be created, viewed, or deleted using the Kubernetes API.

Service objects use a selector for scanning pods within the cluster for a specific label. Remember when we used labels before with our DaemonSet? This label needs to be unique for the pod or set of pods. Pod labels usually follow a naming convention such as app.kubernetes.io/name or something similar. Using the Service object selector with app.kubernetes.io/name, the Service will then continuously scan for pods that match that unique selector.

Service objects contain port definitions under the spec.ports section, which contain the name, protocol, port, and targetPort. The normal convention for name is http or https for a service configuration involving port 80 or port 443, and that would enable you to run both within the service. For protocol, the default is TCP. Port is the service port the client will connect to once the service is exposed; that's the port on the service that you connect to. TargetPort is the container port within the pod that the service will connect to; it can be a number or a name, depending on how the container port is defined within the pod definition.
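A short sketch of those spec.ports conventions, with illustrative names and values:

```yaml
# Illustrative Service showing the port-definition conventions just described.
apiVersion: v1
kind: Service
metadata:
  name: my-service                   # assumed name
spec:
  selector:
    app.kubernetes.io/name: my-app   # the label the Service continuously scans for
  ports:
  - name: http          # conventional name for an HTTP port
    protocol: TCP       # TCP is the default
    port: 80            # the port clients connect to on the Service
    targetPort: 8080    # the container port (a number, or a named port) inside the pod
```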
There are three primary service types that you'll encounter: ClusterIP, NodePort, and LoadBalancer.

ClusterIP is the default service type, and it also serves as the foundation that the other service types build on. No type definition is required if you're utilizing the default configuration, and Kubernetes will automatically assign an IP address for the Service object. If you define the Service object with spec.clusterIP set to None, then Kubernetes will not assign an IP address. This is a headless service, which uses DNS instead of a specific cluster IP address to communicate with a specific pod. So you're telling it: don't assign a cluster IP, we're going to use DNS.

All right, NodePort. When a service type is set to NodePort, the Kubernetes control plane will allocate a port from a range specified by the service-node-port-range flag; the default node port range is 30000 to 32767. When using a multi-node cluster with a NodePort service, each node will proxy that same port number into the NodePort service. NodePort enables you to set up your own external load balancing solution outside of Kubernetes. NodePorts also enable the ability to expose a particular node's IP address directly. A particular node port may be chosen using the optional spec.ports.nodePort value; however, this requires that you also reserve the node port range to avoid port collisions. So it requires extra work if you want a specific port.

Cloud or bare metal load balancers automatically determine which pod to send traffic to through the load balancing service. On bare metal, when selecting a service type of LoadBalancer, an external load balancer must be set up first with a pool IP address range. This enables the load balancer to assign IP addresses to Service objects with a type of LoadBalancer. Popular bare metal providers are kube-vip and MetalLB. Bare metal load balancers watch for service type LoadBalancer and automatically assign an IP address to the load balancer service.

Minikube uses the bare metal provider kube-vip for its control plane load balancer when used in HA mode. However, minikube then uses a specific tunnel mechanism for assigning load balancer IP addresses for service type LoadBalancer, due to operating within the Docker network. So this is different from the kube-vip load balancer IP for the control plane: kube-vip provides a load balancer IP for the control plane in high availability, but you also need to provide a specific load balancer IP for services. To set up an actual load balancer, we can also use a minikube add-on for MetalLB; we will need to provide a pool IP address range to the MetalLB load balancer when setting it up. On a cloud provider, the cloud controller manager will interact with an external load balancer API to provision a load balancer that provides the IP address to the LoadBalancer service, which will then populate the external IP field.

A headless service is used to interface with other service discovery mechanisms, such as DNS. A cluster IP address is not allocated, and kube-proxy does not interact with the service. To create a headless service, set the spec.clusterIP value to None.
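A minimal sketch of a headless Service, with assumed names and a database-style port:

```yaml
# Illustrative headless Service: clusterIP is None, so no virtual IP is allocated
# and DNS resolves the service name to the individual pod addresses.
apiVersion: v1
kind: Service
metadata:
  name: db-headless       # assumed name
spec:
  clusterIP: None         # this is what makes the service headless
  selector:
    app.kubernetes.io/name: db
  ports:
  - name: postgres        # assumed database port for illustration
    port: 5432
    targetPort: 5432
```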
Headless services report the endpoint addresses of individual pods using internal DNS records served through the cluster's internal DNS service. Headless services are useful for database access where there is a primary and multiple secondary nodes. Headless enables specific database pod queries or writes using a service name address, regardless of whether the pod restarts and generates a new IP address. Another way of looking at headless services is that they provide a sticky connection to individual pods. Headless services are typically used with stateful workloads.

Workloads can use DNS within the cluster to discover services. Kubernetes creates DNS records for services and pods. The kubelet running on each node handles DNS configuration for each pod. DNS enables running containers to look up services by name rather than IP address. Namespaces are important when using DNS queries. Imagine if you had two StatefulSet databases running in two separate namespaces: the DNS service strings might be identical except for the namespace. Query strings contain a service name and a namespace, in the format servicename.namespace.svc.cluster.local.

All right, we're going to spin up a three-node cluster. I don't remember if these were this slow spinning up the other day when we tested it or not. All right, using the prior nginx deployment, deploy an nginx replica set. All right, a ClusterIP service. Is the type noted? No. Yeah, that's correct. And what is the port for the service? And what is the external IP? Yep, you can try it again and see. Now describe the service. All right, what is the IP address assigned? Okay, so we're going to open a browser and navigate to that IP address using HTTP, so not HTTPS, with the port shown.

There should be a browser on your desktop; I have to search for it. Yeah, there you go, Firefox, huh. That's interesting, it was on the desktop on mine, but it's not on the desktop on yours. What you can do is shorten the height of the terminal: move the terminal to the top of the screen, shorten it to maybe half height, keep the width the same, and then put your browser in the bottom part, maybe a third of the screen high. It's tricky trying to do all this on one small screen, but we actually don't need that much of the browser showing, so you can shorten it down a little more and make the terminal a little deeper. And then we need the port. You may have to open a new tab and get rid of that start page tab.

All right, if it's going to time out, it'll time out here soon; it takes about a minute. Okay, so what happened? Well, we need an external IP address to access the service from the browser. The cluster IP address is an internal IP address that is not directly accessible from outside, so we're going to delete the ClusterIP service. ClusterIP services have their usefulness when you're communicating pod to pod, but in this case we need to be able to access the cluster itself from outside. So we're going to create a NodePort service. There you go. Looks good to me.
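Roughly what that NodePort service might look like, assuming the deployment's pods carry the label app.kubernetes.io/name: nginx; the name and exact file used in class may differ:

```yaml
# Hypothetical nginx-service.yaml of type NodePort.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    # nodePort is optional: left unset, the control plane picks a free port
    # from the default 30000-32767 range
```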
Yeah, so with a NodePort, the control plane said: hey, here you go, here's your node port. It's hard for me to read, but it's 30784, I think. And if you had multiple nodes, they'd all proxy that same port. Okay, so now we're going to get the URL for the node port. Go ahead and test that in the browser, with HTTP. Yeah, the cut and paste in these VNC terminals sometimes leaves a little artifact. Why is it doing that? Oh, because it's grabbing your name: see how your name is following your cursor? So you actually copied your name along with it.

All right, there we go, that's what we were looking for: unable to connect, right? It says exiting due to service unreachable, service not available: no running pod for service nginx-service found. Well, let's check the NodePort service. What are we doing here? Let's look at this carefully. Let's look at a few things. Let's look at the labels: do we have any labels? Do we have any selectors? Okay. Do we have any endpoints? Mm-hmm. All right. Now let's check the nginx deployment. Let's scroll up a little bit. What is our label? Okay. And scroll up to the NodePort service: what is the selector? Uh-huh, that's correct. I would use the full app.kubernetes.io/name form, because that's the proper way to do things. Yep.

All right, so we're going to fix it. First, go ahead and delete the current deployment; otherwise it gets confusing. I'll go a little easier on you. Okay, so we're going to copy nginx-deployment.yaml to nginx-app.yaml. Go ahead and edit it. All right, fix the three labels to reflect the correct name. And they look good; I don't see any mistakes, so good. Okay: selector, matchLabels, app.kubernetes.io/name... oh, do we have a space in there or something? Something's not right. The selector matchLabels and the template labels should both read app.kubernetes.io/name: nginx. Okay... template, metadata, labels... let me look at this; there must be a mistake in there somewhere. Oh, did you find it? Okay, there we go. And see, that's why being able to type and edit your own YAML is important: those mistakes will cost you time, and this makes you efficient.

All right, we're going to check the service: describe the NodePort service. Notice anything different? Well, the selector we haven't changed in our service, and that's correct, because we didn't relabel our service; we relabeled our deployment. Our service is using its selector to find the deployment's pods and attach. Yeah, and we have how many? Yeah, three endpoints on port 80, right? So it has now found the pods and grabbed their IPs. Now we're going to get the URL for the NodePort service. All right, now we're going to test it in the browser. What do you know: we just fixed the deployment, and now we can connect. Kind of cool, huh? And you figure out how that works the first time, right? All right.
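To recap the fix: the Deployment's pod template labels have to line up with both the Deployment's own selector and the Service's selector. A hedged sketch of the corrected nginx-app.yaml, with the image tag and replica count assumed:

```yaml
# Hypothetical nginx-app.yaml after the label fix.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx    # must match the template labels below
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx  # and this is what the Service selector scans for
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80
```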
Yeah. No, so when you do minikube service, the service URL is using the IP address of the node. The node IP is 192.168.49.2, and that's what minikube automatically assigns to node 1; it's that way on mine as well, and I'm on a separate bare metal machine. So what you're doing is using the node's IP address. And it does still assign a cluster IP, so you can still hit it from another deployment: the deployment could hit the cluster IP internally, you just can't access that cluster IP externally. So you could run nginx deployment number two and say, okay, I just want that to query nginx deployment number one, and you'd query it through the service with the cluster IP. You just can't do it externally. Kind of cool, though, when you see that finally take place and you get your first service into the cluster. So we'll delete that, and then we're going to have some more fun.

All right, we're going to create a LoadBalancer service. Your deployment is unaffected; it's still running just fine, right? It's doing what it's supposed to do in the cluster, we just don't have a service attached to it. As soon as you get that typed out, go ahead and deploy it. I'll be right back. So we're not using nginx-deployment.yaml, we're using nginx-app.yaml, I think. Yeah, good idea, though; you're onto my ruse on how I teach you certain things, so I'll have to get more creative.

And the other thing... what's that? Yeah, pending. So when we check the service, we can see that it's pending. We still have a node port, along with port 8080, right? And we're going to describe the LoadBalancer service now. All right, so you can see we have endpoints; that's a good sign, right? What do endpoints mean? It means that your service has found your backend pods and their IP addresses, so we know that connection is good. When you troubleshoot, if you have endpoints and they match the number of pods you're expecting, then you know that connectivity is working.

Okay, but we see pending, right? Check get service again; let's see if it gave us an IP. Nope. Yep, all right, open a second tab in the terminal. There's got to be a way to do that on that terminal. There you go: Open Tab. Interesting, they took the little tab button away on that one. Okay. So now we're going to create a tunnel. Remember we talked about before how minikube uses a tunnel to assign an external IP address for LoadBalancer services? So we're going to run minikube tunnel. Of course, this differs from a real production cluster; it simulates a load balancer. And there we go, it says no errors, and it's telling us the route it added, from the 10.96 service network to 192.168.49.2, which is our node IP address. Awesome.
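For reference, a sketch of the LoadBalancer service and the tunnel workflow; the service name is an assumption, and the ports follow what was shown in class (8080 as the service port):

```yaml
# Hypothetical nginx-lb.yaml: stays Pending until something hands it an external IP
# (minikube tunnel here; MetalLB or a cloud provider elsewhere).
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: nginx
  ports:
  - name: http
    port: 8080        # the service port used in class
    targetPort: 80
# In a second terminal tab:
#   minikube tunnel             # creates the route and assigns the external IP
#   minikube tunnel --cleanup   # removes stale routes when you're done
```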
Switch back to the other tab and get the load balancer service. Yeah, we have two tabs; click the first one, over on the left at the top, right below File. There you go. All right, now let's get the LoadBalancer service again. What changed? So the minikube tunnel created a network route, and the LoadBalancer service was issued an IP address. Now, it's Docker magic, magic that you'll never run into in a real cluster. I hate even using it, but it's minikube's way. All right, so now the external IP is populated; in the browser, navigate to the external IP over HTTP on port 8080. All right, is that the correct one? I think it was, so the 10.107 one is the correct one, right? Yep, all right. So there we go: we assigned an external IP, minikube did its little fake load balancer thing where it simulated a cloud provider that would provide an IP, and our load balancer worked. So we have two ways we've learned: NodePort and LoadBalancer.

So we're going to clean up the minikube tunnel in the second tab with Ctrl-C and then delete just that tab, the second tab. Yeah, there you go. And now get your service again. Okay, and now do minikube tunnel cleanup; I forgot to do that before. Ctrl-C and do it again: minikube tunnel cleanup. Okay, Ctrl-C. That's weird. Okay, so we created a load balancer with the tunnel.

So, using the prior nginx app and LoadBalancer service, both are still installed. Let's check our load balancer and our nginx again: yeah, get deployment and get service, and make sure those are both running. And get service shows pending, right? Yeah. All right, so now we're going to do this with MetalLB. Let's see if this works on yours; it worked online yesterday. We're going to install MetalLB. There are several ways to do it, but this seems to work: minikube addons enable metallb. All right, kubectl get pods in all namespaces, so -A. Yep, you've got a speaker and we've got a controller.

And we're going to get the minikube IP address and subnet, and of course it's 192.168.49.2. So now what we have to do is feed it a pool, a load balancer IP range in the same segment. What we're going to do is take the first three octets, 192.168.49, and then instead of .2 we're going to do .50 for the first address and .60 for the second. So we're going to give it a pool from .50 to .60, and we're just doing that because we're taking a wild guess that there won't be any collisions in those ten addresses. You wouldn't guess like that in production, but for this it will be good. All right, so it says everything looks good. And now we're going to get the service and see what we have. Yeah, kind of cool, huh? All right. This is our ConfigMap, and you can see the load balancer address pool IP range that we created; a sketch of it follows below.
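A hedged sketch of what that addon-generated MetalLB ConfigMap looks like with our guessed pool; the minikube addon uses the older ConfigMap-based configuration, while newer standalone MetalLB releases configure pools with IPAddressPool resources instead:

```yaml
# Approximate ConfigMap written by the minikube metallb addon
# after entering the 192.168.49.50-192.168.49.60 range.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.50-192.168.49.60
```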
Okay. Now we're going to test the load balancer in the browser: navigate to the external IP with the service port, 8080, in the browser. Yeah, so we did it with MetalLB instead of the tunnel; we skipped the tunnel. Now we're going to check the nginx deployment by our label: we want to look for everything that has the label app.kubernetes.io/name=nginx. There you go: we can see our pods, our deployment, and our replica set. Now, the service doesn't have a label, but if you had added a label to the service, because remember its labels show none, then it would show up as well. So it makes it nice for querying: if you have a specific workload, you could give everything in it the same specific label, and it will pull everything up, even if it's in multiple namespaces, because I believe you would just add -A. So I believe you can pull it up across multiple namespaces; you could try it and see, maybe as a bonus. Yeah, all right.

So now, the second part of this: we have services down, so let's go to DNS. We're going to create a DNS pod for querying. Yep, continue creating. All right, we're running. What do we have? Everything looks good in our events: it tells us it pulled the image, created the container, started the container. That looks good.

Now we're going to practice a query. We're going to exec, using the interactive terminal, into the dnsutils pod; a sketch of this pod and the query commands follows below. And that might be... hang on, let me look. Okay, I think that's a single hyphen between nslookup and dnsutils. Unfortunately, occasionally my single hyphens and double hyphens get changed; when you save the document, it has a tendency to convert them. So let's try it again. Yeah, we need two in there, and it might be in front of nslookup. Let's see. Okay, yep, that's the way it should be right there. Okay, that's correct.

And what we're looking for is kubernetes.default.svc.cluster.local. That's similar to when we would be using a headless service and want to query through it: it's going to be a similar string, just with the name of the pod and the namespace. That's the default namespace. So we looked up kubernetes in the default namespace, and we got the kubernetes service in the default namespace, svc.cluster.local, because it is a service we're querying. So that tells us that it works. It also tells us that we used 10.96.0.10 for our query; I don't know if that's the pod IP address. You can scroll up and see 10.96.0.10; I'm not sure where that came from. Interesting. Yeah, you could do kubectl get pods -o wide. It doesn't look like it, no: dnsutils is 10.244.2.4. So anyway. Okay, it also tells us what node it's on: node three. Okay, so now we've queried internally; now let's do an external one, and see if this thing can communicate with the outside. All right. Yeah, so we have internal DNS, and we have external DNS.
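As referenced above, a sketch of a dnsutils pod and the queries used here; the image is the one suggested in the Kubernetes DNS-debugging documentation and may differ from the course's exact file:

```yaml
# Hypothetical dnsutils.yaml for DNS troubleshooting.
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]   # keep the pod running so we can exec into it
# Then query internal and external names from inside the pod:
#   kubectl exec -it dnsutils -- nslookup kubernetes.default.svc.cluster.local
#   kubectl exec -it dnsutils -- nslookup kubernetes.io
```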
And I will tell you something about DNS and Kubernetes: it is DNS hungry. If you have bare metal and you're using a DNS resolver between your bare metal and the outside world, and your DNS resolver is weak, it may work perfectly for browsing or doing anything else in your internal data center. But if it's weak, Kubernetes will find this weakness and bring it down. That's something I tell all of my clients who say, oh, I don't have any problem with my DNS. I'm like, well, with browsing you won't notice it, because the browser will automatically send the query again and return your page; it'll just be a delay, and it doesn't give you a pop-up that your DNS failed. But Kubernetes will find that weakness, and it'll break your DNS if your resolver is weak. If you have strong DNS, there's nothing to worry about. So that's one of the cautions of running Kubernetes heavily in an internal data center.

All right, we're going to delete the dnsutils pod. All right, let's do our review. In lesson five, we learned how Service objects work; the three primary types of Service objects: ClusterIP, NodePort, and LoadBalancer; how to create a load balancer using the minikube tunnel; how to create a load balancer using MetalLB; how DNS works within the cluster; and how to verify that both the internal DNS and the external DNS are working. All right, any questions on DNS and services? It's kind of neat to see how that's all done. In a few more lessons, we'll get into Ingress and the fun of working with it.

So I think we're ready for another break, because we're going until four. Let's take 15 minutes; this is our last break for the day. It's 2:55, so we'll come back at 3:10, and then we'll go from 3:10 to 4. All right. See you in a few.