Kubernetes Comprehensive 2-Day course recordings

- 2:14:39 (2025-06-26 09:07:32)
- 1:12:32 (2025-06-26 09:11:34)
- 6:42 (2025-06-26 11:08:41)
- 35:51 (2025-06-26 11:24:37)
- 38:41 (2025-06-26 13:21:35)
- 20:37 (2025-06-26 15:06:35)
- 51:46 (2025-06-27 09:06:19)
- 58:45 (2025-06-27 09:06:25)
- 36:01 (2025-06-27 11:26:09)
- 1:12:38 (2025-06-27 13:45:09)
That's it. Should be good to go. Yeah, I think this was re-created. I saw it being rebuilt, so I don't think there's any minikube running, but... okay, yeah, I think it was just booting up after being rebuilt. Okay, up and running. Good.

I mean, I've gone through Kubernetes self-paced training classes several times in the past, to be in a better position to support the Orca clusters at work. So I've been over a lot of that material before, and I have a pretty solid grasp of it. There were a number of things yesterday that I wasn't aware of, though, which are good to add to the toolbox.

Oh, yeah, right. Yeah, the courses I went through were aimed at getting the CKA certification. As my own personal training method, I like to do all sorts of different things to train myself. I don't know if you've ever heard of Linux Academy, I'm sure you probably have; it turned into something else, and A Cloud Guru merged with whoever. Anyway, I had an A Cloud Guru subscription, and really the only reason I had it was the ability to spin up six-hour AWS, Google, or Azure environments and do limited things in them. I built out and automated a method of installing a Kubernetes cluster in AWS, not using EKS, just plain EC2 instances, and worked through all the CKA course material that way, including load balancers. I did that for quite a bit, several months' worth of it. And I did not then proceed to take the certification test.
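The bring-up being described can be sketched with a couple of commands; this is a minimal sketch, assuming a local minikube install, and the driver choice is illustrative rather than what was used in the session:

```shell
# Start a fresh single-node cluster; the driver is an assumption, adjust to your host.
minikube start --driver=docker

# Confirm the node registered and the control-plane pods came up.
kubectl get nodes
kubectl get pods -n kube-system
```

On a plain EC2 build like the one mentioned, the equivalent would be kubeadm rather than minikube, but the verification commands are the same.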
...because, yeah, my understanding is that it's extremely cumbersome and difficult to actually get set up for it, and it's not very user-friendly. This may have changed, but I had heard there was a good percentage of people that had to continuously reschedule, because you had to have your desk a certain way, and you had to have a certain kind of lighting in your workspace, stuff like that. Right, at the time, and this was like a year ago, they did not have any type of on-site testing; it was all remote. But after this, I will probably just bite the bullet and go ahead and get that done. Yeah, like I said, I didn't even use EKS. I just built a huge Bash script that did it all.

Anyway, it looks like we've got some kind of error here. It's scrolled out of the buffer. Well, let me try again. Do we want to do just a minikube start to see if it'll do a simple... oh, okay, so this was intended. I gotcha. Gotcha.
...out before then? Okay, well, I'm sure I can change the... go ahead, sorry. Well, if you want to continue to go down that path with the error and look at that: is there a way to increase the scrollback buffer size on this? I guess under profiles, maybe. Okay, it's locked down; I don't have the ability to do more than that. All right, so go ahead and just proceed with this, then, and try to Ctrl-C out if it appears. Okay. "Unable to create control group inotify watcher: too many open files." Ah, okay. Now, is this a minikube-specific issue? Okay, okay, so delete everything and try again. Looking better. Right now it looks like maybe things are just still starting up. Hubble is starting; not sure what Hubble is. Some pods, uh-huh. Okay, got you. Ctrl-V: I stopped myself from doing that. Yeah, it does something down here, with the clipboard down there.

These labels here, "Controlled By: DaemonSet", "Reason: Completed", what is that? Any errors? Well, FailedScheduling. [inaudible] All right. Where are you seeing one replica? I'm not seeing that up here somewhere; it's rough with this. So it's the cilium-operator, and the cilium units. Yeah, okay, controlled by a DaemonSet, in terms of what's controlling it, right. So the Envoy would be a DaemonSet, right? So it's going to spin up a pod on each node. Right, now remind me: what is liveness looking for versus readiness? What's the primary difference? 105 is the good number. If that startup probe fails, then it's... I mean, the pod, that makes sense. Oh, yeah. All right. So I want to look at the Envoy, the number of nodes scheduled, like the cilium-envoy config. I would think kube-system. Looking good.
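The liveness-versus-readiness question that comes up here can be summarized: a failing liveness probe restarts the container, a failing readiness probe only removes the pod from Service endpoints, and a startup probe holds both off until the app has booted once. A minimal sketch, with the pod name, image, and thresholds all illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo             # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.27          # illustrative image
    startupProbe:              # liveness/readiness wait until this succeeds once
      httpGet: {path: /, port: 80}
      failureThreshold: 30
      periodSeconds: 2
    livenessProbe:             # failure here restarts the container
      httpGet: {path: /, port: 80}
      periodSeconds: 10
    readinessProbe:            # failure here removes the pod from Service endpoints
      httpGet: {path: /, port: 80}
      periodSeconds: 5
```

As a side note, the "too many open files" inotify error seen earlier is typically the host's `fs.inotify.max_user_instances` / `fs.inotify.max_user_watches` limits being exhausted; raising them with `sysctl` on the host is the usual fix.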
A while ago it looked like they were restarting during the install process, or the spin-up process. Hmm, right. Yeah, it's not coming to me. Okay. Yeah. And that would not be there, obviously. Right. Or tested it in a non-production environment. I was just seeing if it appears to be back; it's just doing the same thing it was doing yesterday. No. It might be. Might be that. I haven't noticed timing on it. It seems it's been more frequent at times than that, but I could be wrong. I mean, it doesn't seem to be really causing any issues. So, yeah, it's like the same.

16 gig, or that's allocatable... allocated, okay. Here we go, four percent. Well, four percent requested. API server: DiskPressure false, also false, false; sufficient... okay, yeah, that makes sense. So we're setting the CPU limit to a quarter of a CPU and memory to 12 MiB, I guess, is what it reads like. I'm guessing that there's not enough of one or the other resource. It's a limit on CPU and memory, in all cases here, it looks like. We'll see what we have here in events. Yeah, OOMKilled. It looks like the events don't really show the reason here, but I guess we can tell what's happening, right? So delete the cluster. Oh, okay, yeah, okay. So I'm not sure, just going back to this real quick, I'm not sure what we saw here in the logs that would indicate what went on. Okay, yeah, I see. Okay. Also, there doesn't seem to be any indication of the CPU limit either, but we'd like to change it to one. That's in line. I imagine, not zero point three. Yep. Limit too low. Yeah, one, one megabyte. Okay. Right, but it also went to CrashLoopBackOff. And this one is staying in ContainerCreating, so, yeah, I'm not sure why it's doing that with a lower memory limit, but I mean, I guess in either case...
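The OOMKilled behavior being diagnosed here comes from a container exceeding its memory limit. A sketch of the kind of deployment being tuned, using the quarter-CPU and 12 MiB figures mentioned; the deployment name and image are illustrative, not from the session:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: limits-demo            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: {app: limits-demo}
  template:
    metadata:
      labels: {app: limits-demo}
    spec:
      containers:
      - name: app
        image: nginx:1.27      # illustrative image
        resources:
          requests:
            cpu: 250m          # a quarter of a CPU
            memory: 12Mi
          limits:
            cpu: 250m          # exceeding the CPU limit only throttles
            memory: 12Mi       # exceeding the memory limit gets the container OOMKilled
```

With a limit this tight, `kubectl describe pod` would show the container's last state as `Terminated` with reason `OOMKilled`, which matches what the events were hinting at.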
Now, what if I set mine to six, six megabytes? I mean, yeah, all these would be gone, you're saying. I mean, what you're showing on screen is that you set yours to six, and it's in a running state now. I'm curious as to what would happen if I set the limit to six. Looks good. Or are you wanting to look for anything in particular besides just the Created and Started events? Yeah, I'm just kind of curious to see if that would result in the same thing you saw. Yeah. Okay. Why might it be important to sift events? Obviously they disappear, right? 16, 12 gig. I'm not sure. I mean, well, three. Okay. It must be less than or equal to the limit. Right, right, makes sense.

So just so I'm clear: requests is, for lack of a better term, the limit on what a pod can initially request, and limits is what it can ultimately consume. Is that accurate? And that did not... yeah. I would look at this pending one; describe the one that's pending, perhaps. Is that it? Insufficient CPU, insufficient memory. Seven was reserved, and it went over that. Check the node resources. We've got the two deployments that are consuming the limit, at about 100%. Oh, okay. Where are you seeing that? There we go. I need to... oh, okay, I was looking here; this is not what I should be looking at for that. This is just showing the limits for CPU and memory that we specified in the deployment, right? It's the allocated resources that we want to look at instead. Go ahead and delete the deployment. We could do the calculations, but I would say... [inaudible] Okay, go ahead and deploy it. Yeah, so now we're at seven, right. Oh, okay, yeah, I'm seeing that, I'm following now. So maybe down to five, yeah.
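To pin down the requests-versus-limits distinction discussed above: the scheduler places pods by comparing their requests against each node's allocatable capacity (which is what produces the `FailedScheduling` / "Insufficient cpu" and "Insufficient memory" events), while limits cap what a running container may actually consume. The on-screen checks can be sketched with a couple of commands; the resource names are placeholders:

```shell
# Why is the pod Pending? Events will show something like
# "0/1 nodes are available: 1 Insufficient cpu."
kubectl describe pod <pending-pod-name>

# Per-node allocatable capacity versus the sum of scheduled requests and
# limits: the "Allocated resources" section, the percentages discussed above.
kubectl describe node <node-name>
```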
So it looks like maybe a hair... so, insufficient... oh, it's sufficient memory now. Right, just a tad shy, it sounds like. Mm-hmm, 76 memory. Not so much. Very nice, that's some good information right there. But you know, it always does that when I delete, when I do a minikube delete --all, I think I've noticed that. No? Yes, yes. But, if I remember correctly, requests cannot be greater than limits, right? Yeah, sounds good.
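On that closing point: the API server validates that each resource request is less than or equal to the corresponding limit, so a spec like the following fragment is rejected at creation time (the exact error wording varies by version, roughly "must be less than or equal to memory limit"):

```yaml
resources:
  requests:
    memory: 64Mi   # greater than the limit below: rejected by validation
  limits:
    memory: 32Mi
```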