Visit the Kubernetes Comprehensive 2-Day course recordings page
All right. Ready to get started? All right. Lesson six: exposing applications using Ingress. An Ingress lets you map your traffic to different backends based on rules you define. Ingress management is handled through the Kubernetes API, and an Ingress uses an ingress controller. This is something that can be confusing in Kubernetes: Kubernetes has relied on an Ingress API since its beginning, but that is now being replaced by the Gateway API. So Ingress-NGINX, which is the ingress we'll be using, is a type of ingress that uses the Ingress API, and its replacement is a type of ingress that uses the Gateway API. So we have two different APIs: the Ingress API, which we don't usually call "Ingress API," we just call it Ingress or Ingress-NGINX; and then the Gateway API, which is an actual named API.

You can see here kind of how an ingress works. We have our client, an Ingress-managed load balancer, our Ingress with a routing rule, which goes to a Service. And we just spent the last section working on Services, which connect to the pods. Any questions on ingress?

All right. The Gateway API is the successor to the Ingress API. However, it does not include the Ingress kind; you must convert all your Ingress resources to Gateway API resources when migrating, so it's not a seamless migration. The Gateway API requires custom resource definitions, or CRDs, to be installed before enabling it. The Gateway API relies on a Gateway and an HTTPRoute for HTTP/HTTPS traffic. (Are you still there? Okay.) So a Gateway can serve many HTTPRoutes. Gateway API solutions are typically implemented by the CNI provider. The Gateway API provides flexibility, standardization, and scalability over Ingress. The Gateway API has an anticipated upstream implementation date of fall 2025.
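As a minimal sketch of the kind of Ingress resource described above (the host, service name, and ingress class here are illustrative placeholders, not from the course files):

```yaml
# Minimal Ingress: route one hostname to a backend Service.
# Host, names, and port are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx        # the Ingress-NGINX controller class
  rules:
  - host: app.example            # routing rule: match this host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # the Service the rule forwards to
            port:
              number: 80
```

The routing rule here is exactly the client → load balancer → Ingress rule → Service → pods chain shown in the diagram.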
I have been using the Gateway API for probably two years now, and it works great. They started with HTTPRoute, and the final piece they're completing is TLSRoute, which is for when you want to connect through the gateway directly to a database. Let's say you were running a database cluster, just a database, and you wanted to connect from another cluster: you could connect through with TLS to that. However, in modern Kubernetes we typically don't do that as much anymore, because we have pod-to-pod and node-to-node encryption; we would encrypt into the gateway and then use pod-to-pod, node-to-node encryption from there. So TLSRoute was kind of put on the back burner, and they're finishing it up this fall.

There's one more piece that I requested two years ago from the upstream Kubernetes team working on this. If they're getting rid of the NGINX ingress, which we use to protect management pages because it has basic auth capability, then there needs to be an additional layer of auth provided for management before they deprecate it and remove it from the Kubernetes SIGs. I believe that is being finished up now. It's taken them two years since I requested it, and it will be out, I believe, this fall. And this date keeps getting pushed; it looks like sometime this fall, and it could slip to next spring. Then the Gateway API should be standard within the Kubernetes cluster, after being alpha/beta for the last two years.

All right, let's compare the two. You can see you have a Gateway and an HTTPRoute with a routing rule, on to the Service and the pods; you can kind of see them overlaid. There's also a load balancer in front of the Gateway; it just isn't shown on this slide.

CNIs. Kubernetes lets you use a Container Network Interface, or CNI, plugin for cluster networking.
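The Gateway-plus-HTTPRoute pairing compared above can be sketched like this (names and the gatewayClassName are assumptions for illustration; the class would match whatever controller implements the Gateway API in your cluster):

```yaml
# A Gateway terminates traffic; one Gateway can serve many HTTPRoutes.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: cilium       # illustrative; set to your implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# An HTTPRoute attaches to the Gateway and carries the routing rule,
# playing the role the rule played inside the old Ingress resource.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "app.example"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: example-service
      port: 80
```

Note how the migration is not seamless: the host, path, and backend move out of a single Ingress object into two separate resources.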
CNI plugins are used to manage network and security capabilities in the cluster, and CNIs can be used to deploy the Gateway API. So here's an example of a CNI: Cilium is a networking, observability, and security solution. Cilium provides a simple, flat, layer 3 network with the ability to span multiple clusters with a cluster mesh. Cilium can enforce network policies from layer 3 to layer 7. Cilium can act as a replacement for kube-proxy and includes advanced IP address management, or IPAM, support. Cilium can also encrypt node-to-node and pod-to-pod traffic, and it includes both the Ingress API and the Gateway API. Cilium works well on bare metal with kube-vip and the kube-vip cloud provider.

MetalLB had a bit of a falling out with several of the CNI teams a couple of years ago, so you'll see that several of the larger CNI teams support kube-vip and were not as friendly to MetalLB. That just gets to the nuances between different engineering teams working upstream and how they work with each other sometimes. And one thing about kube-proxy replacement: kube-proxy fails, and when it fails, it's very inconvenient in the cluster. So a lot of the modern CNIs include a replacement for kube-proxy, and that will improve the reliability of your cluster once you switch over.

Okay, we're going to go to the practical application: we're going to create an ingress. We want to make sure we have a fresh minikube profile. Yeah, I think the data center that our VMs are on is having some issues too. So, we're going to create an ingress. We need to enable the minikube ingress addon, which uses the NGINX ingress by default. So we had the admission-create pod, which completed, so that was probably a Job, right? And then an ingress admission-patch pod, which is probably a Job too. In fact, let's look at those. And then the NGINX controller.
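To make the "layer 3 to layer 7" enforcement concrete, here is a hedged sketch of a Cilium network policy; the labels, port, and path pattern are illustrative assumptions, not from the course materials:

```yaml
# CiliumNetworkPolicy enforcing both L3/L4 (who, which port) and
# L7 (which HTTP method and path) rules on the selected pods.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-get-only
spec:
  endpointSelector:
    matchLabels:
      app: backend          # policy applies to these pods (L3)
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend       # only traffic from frontend pods
    toPorts:
    - ports:
      - port: "80"          # only on this port (L4)
        protocol: TCP
      rules:
        http:
        - method: GET       # only HTTP GETs (L7)
          path: "/api/.*"
```

A plain Kubernetes NetworkPolicy stops at L3/L4; the `rules.http` section is what the L7 capability adds.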
Let's look at all three of those; describe all three of them. They're going to be in the ingress-nginx namespace. You'll probably want to copy and paste each one of those names in. Then `describe pod -n ingress-nginx` and paste the pod name in; it'll be easier to control. Do the describe, and before you paste the name in, add the `-n ingress-nginx`. We're going to cheat here and then paste it. Yeah, and we can just delete the last piece. Look at this and see what it is. Look at the top. All right, what is this? Admission-create, the webhook. Let's see here: Succeeded. All right, let's scroll down a little. What else do we have here? Interesting. It uses the ingress-nginx-controller-admission Service, in the ingress-nginx namespace. And okay, good catch; that's the next thing I was going to look at. Do we have a delete in here, a TTL? Right, that's what we'd be looking for: a TTL. Do you see a TTL anywhere in there? All right, it's not a CronJob, it's a regular Job, so without a TTL it's going to hang around. I don't see one in there, unless I'm missing it, which means it'll be in the cluster forever.

All right, so you can look at the next one. You could get pods again and then copy that second name. I'm pretty sure that's also a Job; it has a hash at the end like a Job would have, and it's completed, but we'll see whether it's a Job or a CronJob. It's injected a volume with something. Yep, there we go. All right, then let's look at the controller. Yeah, so before ingress-nginx we need the `-n ingress-nginx`; forgetting the `-n` happens a lot, I do it too, it's easy to do. All right, see what we have here: it's a Deployment.
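The TTL being hunted for above is `ttlSecondsAfterFinished` on the Job spec; a minimal sketch (name and image are illustrative), showing what the addon's Jobs would need in order not to linger in the cluster forever:

```yaml
# A Job that garbage-collects itself after completion.
# Without ttlSecondsAfterFinished, a finished (non-Cron) Job
# stays in the cluster until someone deletes it.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-cleanup-job     # illustrative name
spec:
  ttlSecondsAfterFinished: 300  # delete 5 minutes after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: work
        image: busybox          # illustrative image
        command: ["sh", "-c", "echo done"]
```

A CronJob handles its own history via `successfulJobsHistoryLimit`; a one-shot Job like these admission Jobs only gets cleaned up via the TTL field.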
Looks like you can try a `get all` in that namespace. So there we go: we've got two Jobs, completed, didn't take long; we have a ReplicaSet and a Deployment; and we've got two Services, one for the admission webhook, the other for the controller.

All right, we're going to check that the ingress is enabled, because we installed the ingress addon. Just do `kubectl get ingress -A` and we'll see what comes out. We enabled the NGINX ingress controller and service, but no ingress resource exists yet, because that comes from the ingress YAML. Next, deploy an application using the prior nginx app YAML file. Deploy the nginx app. Oh sorry, we'll do apply, my bad. I have an error; use it with a service. All right, check the pods. There we go, and they're running. Let's deploy a NodePort service using the prior NodePort service YAML file; deploy the NodePort service for the nginx app. Oh yeah, let me correct that, one second. And let's check the service. We have nginx-service, type NodePort, no external IP. Okay, let's get all pods and services within the default namespace; a convenient way to pull up just the pods and the service. All right, we're going to test the NodePort service with the nginx app: `minikube service` with the service name and `--url`. I think it worked, didn't it? All right, so we know that part works. Great.

Now we need to create an ingress. We're going to create a service ingress YAML file. Notice how we've been using different APIs with these, all on v1. Have you noticed? We've been using all kinds of different APIs, and they have alpha and beta versions as well, if you want to test those out. Yeah, it looks good. We're going to apply it. All right, now wait a second: an ingress with a host. What is the host FQDN?
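The exact course YAML files aren't shown in the recording; a plausible sketch of the two resources being applied here, assuming the service is named nginx-service and the host is nginx.example as discussed:

```yaml
# NodePort Service fronting the nginx Deployment's pods.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx            # assumed pod label from the nginx app YAML
  ports:
  - port: 80
    targetPort: 80
---
# Ingress with a host FQDN, routing nginx.example to that Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-service
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example   # the host FQDN being asked about
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
```

Note the Service is `apiVersion: v1` while the Ingress is `networking.k8s.io/v1`: different API groups, both at v1, which is the point being made about the APIs.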
Mm-hmm. All right. And now we are going to test the FQDN in the browser. Okay, we need to create a DNS record. Our DNS resolver inside the cluster said, hey, nginx.example, I don't have anything for that. So when your browser typed it, it went to the external resolver in the data center where this VM is hosted, and we need to catch it before that, because obviously there is no real nginx.example. So we're going to point nginx.example at the minikube IP: get the minikube IP address, and then we'll use the local Ubuntu DNS resolver with the browser. Now, this should work on your host, but yeah, they added a few things to these VMs; I don't know how much crazy stuff you could do while you're in a class, but let's test it with a DNS record. `nslookup nginx.example`. Did you see it there when you catted it? Oh, it may take just a minute. Let's try again. Oh, it's going external, so it's trying to go out; yeah, see, they pointed this elsewhere. All right, on mine it actually returned it, so that's weird; they've changed something in nslookup between your version on the VM and my version on my machine. All right, so let it happen.

Okay, what would a 503 imply? That it can't do something, can't find something: a server error, right? So let's check on the ingress. I think you have to do nginx-service with `-n`; let's do `get ingress -A` and see if we can find it first. Oh, we're in kube-system. Oh yeah, but what's the namespace? Mm-hmm. Uh-huh. And we have an address, so everything's populated.
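One common way to do the "point nginx.example at the minikube IP" step, assuming a standard local minikube setup (the course VMs may be configured differently), is an `/etc/hosts` entry, where the address comes from running `minikube ip`:

```
# /etc/hosts — the IP shown is illustrative; substitute the
# actual output of `minikube ip` on your machine.
192.168.49.2   nginx.example
```

The local resolver consults `/etc/hosts` before forwarding to the data center's external resolver, which is exactly the "grab it before it goes external" behavior described above.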
So now, to see any events, we're going to describe the ingress, in kube-system, right? And what do we see? We're looking at our rules: nginx.example, and you see it up there. All right, let's go take a look at nginx-service now. The ingress is called nginx-service, but we also have an nginx-service Service for the NodePort, so take a look at that and let's describe it. The fact that you were able to pull that up without a namespace tells you what namespace it's in.

Okay, so what we're going to do is fix the ingress by changing the namespace. So we're going to delete the ingress. Yes, the ingress itself works: you're hitting the NGINX ingress, but it can't find the service, because the service is in the default namespace and the ingress is in the kube-system namespace. You do one ingress per namespace, unless you label your namespaces to have a shared gateway with the Gateway API. All right, so we delete it. There's a delay on these slides. There we go. We're going to modify the namespace to match the nginx-service namespace, which is default. No, well, it's nice to have it explicit, because you can easily look at it and go, oh yeah, that's the default namespace. And we're going to deploy it.

All right, let's check it. The service is running and it's connected; we verified that the service and the deployment are connected together; we have endpoints; it works. Now let's go back and check. Let's describe the ingress again. What do we have? Endpoints. All right, let's test our browser. Fingers crossed. Yeah, all right.

I'm going to try and get through this before four. What time is it? We've got 10 minutes. How many more slides do we have? We might not make it through all of this. We'll stop at five till and do a review of everything, Q&A.
There's a lot here on the interactive side, you know, so we're trying to make it through, and we want to take our time going through it. All right, so let's ensure a fresh minikube environment, and we are going to have some fun with Cilium. No, it's been around for years, but it has everything out of the box and it's free. Some of the other ones that compete with Cilium charge for it. And it's not as complex as using something like Istio with sidecars; it's very complex using Istio. And it's an easy upgrade from Traefik: if you're using something like Traefik and you move to Cilium, it's a much easier upgrade. We get a lot of requests for Istio, and we can't find anyone to work in a Kubernetes cluster as an engineer who can work on Istio. I'm like, yeah, I used to work on Istio five years ago; I don't work on it anymore.

So yeah, do a stop. Yeah, do a `minikube stop` and a `minikube delete`, and then we'll start, because you probably have a container still running in there. `minikube stop`, I assume (I haven't seen the code), is just a `docker stop`, and it uses Docker magic to grab the container name. There we go. All right, this will take just a little while.

All right. So you can see we have three nodes, and a Cilium agent on every node. In fact, you can do `kubectl get pods -A -o wide`. There we go; now we can see what node they're on. You can see one cilium pod is on minikube node three, one on the first minikube node, and one on minikube node two. So each node has a Cilium agent, and we have a single Cilium operator running on the first minikube node, which is the control plane, right? And then we have three Envoy pods. All right, ensure a fresh minikube environment.