10 videos 📅 2025-06-26 09:00:00 America/New_York

Visit the Kubernetes Comprehensive 2-Day course recordings page

                WEBVTT

00:00:01.140 --> 00:00:31.920
Okay. Working with jobs and cron jobs for scheduled tasks. Jobs are designed for one-off tasks; after completion, they stop. Jobs, along with any pods, will stay in place with a Completed status until deleted. The event describing the completed status within the pod is typically deleted after an hour.
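A minimal Job manifest matching this description might look like the following sketch (the name and image here are illustrative, not the course's exact file):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job            # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox:1.36
        command: ["echo", "Hello from the job"]
      restartPolicy: Never   # Jobs require Never or OnFailure
  backoffLimit: 4            # retries before the Job is marked Failed
```

After `kubectl apply -f job.yaml`, the pod runs to completion and then stays visible with a Completed status until the Job is deleted.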

00:00:32.480 --> 00:00:36.420
However, the pods table will still show a Completed status.

00:00:36.420 --> 00:00:40.560
You can use jobs to run a single pod or multiple pods in parallel.

00:00:42.400 --> 00:00:47.240
It is good practice to automatically clean up completed jobs.

00:00:48.300 --> 00:00:55.740
An example of running single-task jobs might be to enable metrics and add a metrics user to a database,

00:00:56.540 --> 00:00:59.700
along with the scheduled deletion of the job after 30 minutes.
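The "scheduled deletion after 30 minutes" is done with the Job's ttlSecondsAfterFinished field; a sketch, with placeholder names and a placeholder command standing in for the database/metrics work described:

```yaml
# The TTL controller deletes the Job (and its pods) this long after it finishes.
apiVersion: batch/v1
kind: Job
metadata:
  name: enable-metrics            # hypothetical name
spec:
  ttlSecondsAfterFinished: 1800   # 30 minutes after completion
  template:
    spec:
      containers:
      - name: enable-metrics
        image: busybox:1.36
        command: ["sh", "-c", "echo adding metrics user"]   # placeholder task
      restartPolicy: Never
```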

00:00:59.700 --> 00:01:10.400
This example would be slightly complex and involves secrets with database credentials, along with the metrics endpoint shipping and collection of the metrics.

00:01:10.400 --> 00:01:19.660
It could take days or weeks to get everything set up correctly. There are a few open-source database providers

00:01:19.660 --> 00:01:27.480
with their distribution where they have a great product, and the one thing that isn't great is that they don't enable metrics,

00:01:27.480 --> 00:01:31.820
and so they want you to ship the metrics to their server in a foreign country.

00:01:32.860 --> 00:01:35.280
And these are mainstream distributions.

00:01:35.680 --> 00:01:39.460
And so when you learn how to use jobs,

00:01:39.620 --> 00:01:43.140
you can go in and turn on those metrics endpoints

00:01:43.140 --> 00:01:48.460
so that you can ship them within your own stack, using Promtail, for example,

00:01:48.600 --> 00:01:50.420
into Grafana, or node exporter.

00:01:51.020 --> 00:01:53.840
So one of the many uses for a job.

00:01:55.520 --> 00:01:57.460
And then in that case,

00:01:57.480 --> 00:02:02.360
the job would be scheduled for deletion after 30 minutes, so it would no longer exist when you did

00:02:02.360 --> 00:02:08.960
your kubectl get jobs. All right. Next, we have cron jobs. So cron jobs are designed for repeating

00:02:08.960 --> 00:02:17.800
tasks or actions, such as database backups, log or metric shipping, report generation, or data syncs.

00:02:17.800 --> 00:02:23.200
Cron jobs can also be used to trim SSDs in a cluster for garbage-collection purposes. Cron jobs

00:02:23.200 --> 00:02:27.380
can be used to delete old snapshots where regular snapshotting takes place

00:02:27.380 --> 00:02:32.560
in the cluster using a CSI, which is a container storage interface.

00:02:34.400 --> 00:02:37.360
Cron jobs use standard cron syntax.

00:02:38.440 --> 00:02:42.060
They can also use some of the cron macros, like @monthly.

00:02:43.160 --> 00:02:46.200
Cron jobs create a new pod for each repeated task.

00:02:46.200 --> 00:02:51.200
You can see here the standard cron syntax that you would use when you create your

00:02:51.200 --> 00:02:56.820
cron job, and some of the macros and what they would look like.
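For reference, the five cron fields and a couple of the macros mentioned here look like this in a CronJob spec (the schedule values are illustrative):

```yaml
# schedule: "minute hour day-of-month month day-of-week"
schedule: "*/5 * * * *"   # every 5 minutes
# schedule: "0 2 * * 0"   # 02:00 every Sunday
# schedule: "@monthly"    # macro: midnight on the first of each month
# schedule: "@hourly"     # macro: top of every hour
```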

00:02:57.380 --> 00:03:01.720
We're going to create a job.

00:03:03.060 --> 00:03:05.600
And we should have a fresh minikube profile.

00:03:06.100 --> 00:03:08.240
So we're going to just run minikube start.

00:03:09.540 --> 00:03:13.780
And we're going to create a job.yaml file.

00:03:14.200 --> 00:03:16.620
Yeah, yeah, just minikube start for this one.

00:03:18.840 --> 00:03:21.260
Notice the READY state of 0/1,

00:03:21.900 --> 00:03:24.060
showing that zero of one pods are running.

00:03:24.060 --> 00:03:27.360
And notice that the status says Completed.

00:03:27.380 --> 00:03:29.840
So now let's check the job.

00:03:29.840 --> 00:03:32.880
It shows that the number of completions now is one of one.

00:03:32.880 --> 00:03:37.880
We're going to check the logs to see if it completed.

00:03:37.880 --> 00:03:39.880
We're looking to see if there was any output.

00:03:39.880 --> 00:03:41.880
All right, describe the job.

00:03:41.880 --> 00:03:42.880
Okay, see it's completed.

00:03:42.880 --> 00:03:44.880
Now, kubectl get pods.

00:03:44.880 --> 00:03:46.880
All right, so, and if you were to get pods -A,

00:03:46.880 --> 00:03:48.880
do the same thing with -A.

00:03:48.880 --> 00:03:51.880
And that is going to hang around forever.

00:03:51.880 --> 00:03:56.880
Right, so eventually this will clutter up your pod list.

00:03:56.880 --> 00:04:00.880
All right, so now we're going to delete the job.

00:04:00.880 --> 00:04:02.880
All right.

00:04:02.880 --> 00:04:04.880
You can check to see if it's gone.

00:04:04.880 --> 00:04:05.880
It should be.

00:04:05.880 --> 00:04:06.880
Oh, you did.

00:04:06.880 --> 00:04:07.880
Okay, there we go.

00:04:07.880 --> 00:04:09.880
So now we're going to look at,

00:04:09.880 --> 00:04:13.880
we're going to create a job-auto-delete YAML

00:04:13.880 --> 00:04:24.030
using ttlSecondsAfterFinished.

00:04:24.030 --> 00:04:25.030
Yeah, we could have it.

00:04:25.030 --> 00:04:30.030
This is good for muscle memory on YAML formatting.

00:04:30.030 --> 00:04:31.030
So this will help you out.

00:04:31.030 --> 00:04:33.970
This will save you hours down the road.

00:04:33.970 --> 00:04:36.750
Spending a few minutes now in the course will save you later.

00:04:36.750 --> 00:04:38.330
It is tedious, though, after a long day

00:04:38.330 --> 00:04:41.030
and you've been typing and you start mistyping,

00:04:41.030 --> 00:04:43.030
and that's when the mistakes happen with the YAML.

00:04:43.030 --> 00:04:44.590
You're at the end of a long day,

00:04:44.590 --> 00:04:46.050
and you're trying to fix something,

00:04:46.050 --> 00:04:47.650
and you just can't find it.

00:04:47.650 --> 00:04:51.710
Okay, so now, check the job status,

00:04:51.710 --> 00:04:53.050
and we'll wait 34 seconds.

00:04:53.050 --> 00:04:54.050
kubectl get job.

00:04:54.050 --> 00:04:56.310
Yeah, it's gotta wait 30 seconds after it finishes

00:04:56.310 --> 00:04:57.970
because it's still creating a container.

00:04:57.970 --> 00:05:00.010
I forgot your network is slow.

00:05:00.010 --> 00:05:04.510
So I think they have a slow network in their data center.

00:05:04.510 --> 00:05:08.510
So I forget which data center that Neil chose,

00:05:08.510 --> 00:05:10.050
but they have a slow connection.

00:05:10.050 --> 00:05:12.650
All right, so you can see no resources found.

00:05:12.650 --> 00:05:14.510
You can check the pods.

00:05:14.510 --> 00:05:16.450
Yeah, mm-hmm.

00:05:16.450 --> 00:05:20.250
Now, if you auto-delete a job and you would like a record

00:05:20.250 --> 00:05:22.530
that has been run successfully,

00:05:22.530 --> 00:05:24.910
what process would be good to implement in the cluster?

00:05:24.910 --> 00:05:27.070
If you see that, if you notice,

00:05:27.070 --> 00:05:29.610
there's no record that that has run now.

00:05:29.610 --> 00:05:32.290
So you can ship the events and logs,

00:05:32.290 --> 00:05:34.490
and then you can just look it up in there and say,

00:05:34.490 --> 00:05:37.330
yep, there's my record that it ran.

00:05:37.330 --> 00:05:39.170
So it ships it right out.

00:05:39.170 --> 00:05:41.290
So even after it deletes itself, you still have it.

00:05:41.290 --> 00:05:43.190
All right, create a cron job.
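A sketch of a CronJob along the lines used here, running every minute (the name and image are assumptions, not the course's exact file):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron                 # hypothetical name
spec:
  schedule: "*/1 * * * *"          # standard cron syntax: every minute
  successfulJobsHistoryLimit: 3    # default: keep the last 3 completed jobs
  failedJobsHistoryLimit: 1        # default
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36
            command: ["date"]      # placeholder repeating task
          restartPolicy: OnFailure
```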

00:05:43.190 --> 00:05:44.510
This was a little bit different.

00:05:44.510 --> 00:05:49.950
So try to copy it over, but you might make a few mistakes.

00:05:49.950 --> 00:05:50.890
So it's up to you.

00:05:50.890 --> 00:05:53.630
Now, I have a stuck B key because it's a MacBook,

00:05:53.630 --> 00:05:59.180
so I get double B, triple B or no B.

00:05:59.180 --> 00:06:02.620
It might still be pulling, if you get the pods,

00:06:02.620 --> 00:06:04.620
it might still be pulling that image in.

00:06:05.820 --> 00:06:08.140
All right, so now let's go back and look at the pods.

00:06:08.140 --> 00:06:10.300
Okay, so we can see one there.

00:06:10.300 --> 00:06:11.500
And now let's wait.

00:06:11.500 --> 00:06:14.380
Notice the, I think we have to make wait a minute

00:06:14.380 --> 00:06:16.540
because we have to set to run every minute.

00:06:16.540 --> 00:06:18.540
So it's age 14 seconds.

00:06:18.540 --> 00:06:19.900
So we'll make wait a minute,

00:06:19.900 --> 00:06:21.820
and then we're gonna do pods again,

00:06:21.820 --> 00:06:24.860
but we'll note the last five

00:06:24.860 --> 00:06:27.820
alpha-numeric characters of the hash before we do that.

00:06:29.180 --> 00:06:34.300
Mm-hmm. And then we can also describe it again and that will show us that it created a new one.

00:06:34.300 --> 00:06:44.140
And we'll get up to three there. I think so. Yeah, I think it keeps up to three. It'll bounce between two and four.

00:06:44.140 --> 00:06:52.540
If we describe the cron job again, now we have SawCompletedJob, CreatedJob, SawCompletedJob.

00:06:53.980 --> 00:06:59.100
And get pods again. Should have maybe a third one in there. First and second are

00:06:59.180 --> 00:07:05.580
go in there. No, I guess we're not quite at, um, at 103 seconds. You need to be at a hundred and

00:07:05.580 --> 00:07:12.220
twenty-five seconds, probably. All right, now we have three now. The next one that gets created, watch and

00:07:12.220 --> 00:07:17.820
see how that works. Not bad. Two minutes, five seconds, 65 seconds, five seconds. On the nose

00:07:17.820 --> 00:07:24.460
every 60 seconds. Perfect time. Oh, well, we're going to wait for three minutes. So let's

00:07:24.460 --> 00:07:29.020
run for two minutes, 46 seconds. We've got another 15 seconds or so. Yeah.

00:07:29.180 --> 00:07:59.160
Yeah, yeah. So it will automatically, okay, see, we lost that first pod. So it will automatically make sure that no more than three are shown. So it garbage collects. And then you can go back and describe. There you go. And you can see successfulJobsHistoryLimit: 3. If you look up at the top, it shows you the three most recent successful jobs. And you could probably change that to one. I've never

00:07:59.180 --> 00:08:03.500
messed with it, but you could change it to one if you only wanted one to show in your

00:08:03.500 --> 00:08:08.700
kubectl output. All right, any questions about that? Very, very useful. I actually use cron

00:08:08.700 --> 00:08:16.780
jobs on my CSI, my Container Storage Interface, and it allows me to garbage collect snapshots

00:08:17.900 --> 00:08:25.100
and trim the SSDs, because I use encryption of my storage for my clients, and so it's all

00:08:25.100 --> 00:08:29.160
automated. It just does everything. Otherwise, your SSDs will

00:08:29.180 --> 00:08:36.780
fill up within, you know, depending on what capacity you have, anywhere

00:08:36.780 --> 00:08:40.780
from a few weeks to a few months and then you'll run out of storage capacity even though you have

00:08:41.580 --> 00:08:46.220
all of your storage still available, so you have to run garbage collection and delete your

00:08:46.220 --> 00:08:51.660
encrypted snapshots, you do an encrypted trim. All right, so now we're going to delete the cron job.

00:08:51.660 --> 00:09:03.360
All right, so let's review. In lesson four, we learned how to automate tasks with jobs and cron jobs.

00:09:03.360 --> 00:09:10.160
We learned when a job is appropriate and also when a cron job would be applicable to a repeating

00:09:10.160 --> 00:09:18.460
task. We auto-deleted a job and learned what happens to the logs and events describing that job

00:09:18.460 --> 00:09:23.700
having been completed and how to mitigate that loss of logs and events.

00:09:23.780 --> 00:09:31.600
We learned how cron jobs create new job pods and delete old job pods automatically.

00:09:32.860 --> 00:09:36.300
All right, any questions about jobs and cron jobs?

00:09:36.300 --> 00:09:39.980
All right, go on to DNS.

00:09:43.780 --> 00:09:46.540
In Kubernetes, yes, for sure.

00:09:46.720 --> 00:09:48.920
Oh, let's see.

00:09:49.140 --> 00:09:51.480
Yeah, you might as well just go ahead and create a fresh job.

00:10:06.480 --> 00:10:07.180
You know what?

00:10:07.240 --> 00:10:10.080
Yeah, I'm not sure. Let me see here.

00:10:14.180 --> 00:10:17.880
No, we'll wait because we're going to do a three-node cluster looks like.

00:10:18.300 --> 00:10:21.100
All right. Services.

00:10:22.940 --> 00:10:32.360
Services provide a method for exposing network applications that are running in one or more pods within the K8s cluster.

00:10:32.360 --> 00:10:37.860
Each pod receives its own IP address within a node.

00:10:38.400 --> 00:10:43.080
When that pod is recreated, it then generates a new IP address.

00:10:44.140 --> 00:10:49.680
Constantly changing pod IP addresses present a problem with pod connectivity.

00:10:51.260 --> 00:10:57.780
Services enable a pod or a set of pods to be discovered within a Kubernetes network

00:10:57.780 --> 00:11:01.920
without the need to attach an ingress to each IP address individually.

00:11:02.360 --> 00:11:08.800
The service API within Kubernetes enables you to expose a group of pods over a network.

00:11:08.800 --> 00:11:15.840
This service object uses endpoints in a service policy to enable network access to the

00:11:15.840 --> 00:11:19.880
pod or group of similar pods.

00:11:19.880 --> 00:11:31.780
Service objects can be created, viewed, or deleted using the Kubernetes API.

00:11:31.780 --> 00:11:39.480
objects use a selector to scan pods within the cluster for a specific label. So remember

00:11:39.480 --> 00:11:46.860
when we used labels before with our DaemonSet. This label needs to be unique for the pod or

00:11:46.860 --> 00:11:54.540
set of pods. Pod labels usually follow a naming convention such as app.kubernetes.io/name or

00:11:54.540 --> 00:12:01.760
something similar. Using the service object selector with app.kubernetes.io/name, the service

00:12:01.780 --> 00:12:05.620
will then continuously scan for pods that match that unique selector.
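A sketch of how that label and selector pairing looks on the Deployment side (the names and image are illustrative):

```yaml
# The Service selector must match the labels in the pod template below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: web   # label the Service will scan for
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
```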

00:12:05.620 --> 00:12:10.900
Service objects contain port definitions under the spec port section,

00:12:10.900 --> 00:12:16.740
which contain the name, protocol, port, and target port.

00:12:16.740 --> 00:12:28.500
So the normal convention for name is http or https for a service configuration

00:12:28.500 --> 00:12:36.580
involving port 80 or port 443 and that would enable you to run both within the service.
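A sketch of such a spec.ports section (the service name, labels, and port numbers are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app.kubernetes.io/name: web   # matches the pod label
  ports:
  - name: http
    protocol: TCP       # default
    port: 80            # port clients connect to on the Service
    targetPort: 8080    # container port (number or name) in the pod
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443
```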

00:12:38.100 --> 00:12:46.500
For protocol, the default is TCP. Port is a service port the client will connect to once

00:12:46.500 --> 00:12:52.740
the service is exposed. So that's the port on the service that you connect to. Target port is a

00:12:52.740 --> 00:12:58.180
container port within the pod that the service will connect to. The target

00:12:58.180 --> 00:13:03.620
port can be a number or a name, depending on how the container port is defined within the

00:13:03.620 --> 00:13:12.020
pod definition. There are three primary service types that are encountered: ClusterIP,

00:13:12.020 --> 00:13:18.500
NodePort, and LoadBalancer. ClusterIP is the default service type. ClusterIP also serves as a

00:13:18.500 --> 00:13:24.980
foundation that other service types build on. No type definition is required if

00:13:24.980 --> 00:13:31.540
utilizing the default configuration; Kubernetes will automatically assign an IP address for the service

00:13:31.540 --> 00:13:38.020
object. If you define the service object with spec.clusterIP set to None, then Kubernetes

00:13:38.020 --> 00:13:46.340
will not assign an IP address. This is a headless service, which uses DNS instead of a specific

00:13:46.340 --> 00:13:53.620
cluster IP address to communicate with a specific pod. So you're telling it don't assign a cluster

00:13:53.620 --> 00:14:03.700
IP. We're going to use DNS. All right, NodePorts. When a service type is set to NodePort,

00:14:04.740 --> 00:14:10.580
the Kubernetes control plane will allocate a port from a range specified by service node port

00:14:10.580 --> 00:14:22.250
range flag. The default node port range is 30000 to 32767. When using a multi-node cluster with a

00:14:22.250 --> 00:14:25.450
node port service, each node will proxy that same port.

00:14:25.610 --> 00:14:30.610
number on each node into the NodePort service.

00:14:30.610 --> 00:14:35.770
NodePort enables you to set up your own external load-balancing solution outside of

00:14:35.770 --> 00:14:36.610
Kubernetes.

00:14:36.610 --> 00:14:41.050
NodePorts also enable the ability to expose a particular node's IP address directly.

00:14:41.050 --> 00:14:49.730
A particular node port may be chosen using the optional spec.ports.nodePort value.

00:14:49.730 --> 00:14:55.590
However, this requires that you also reserve that port within the node

00:14:55.610 --> 00:14:58.370
port range to avoid port collisions.
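A NodePort sketch with an explicitly pinned port (the service name, labels, and the 30080 value are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # optional; must be in range (default 30000-32767);
                      # omit to let the control plane pick one
```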

00:14:58.370 --> 00:15:02.730
So it requires extra work if you want a specific port. Cloud or bare-metal load balancers

00:15:02.730 --> 00:15:08.090
automatically determine which pod to send traffic to through the load balancing

00:15:08.090 --> 00:15:15.050
service. On bare metal, when selecting a service type of LoadBalancer, an external load

00:15:15.050 --> 00:15:19.970
balancer must be set up first with a pool IP address range.

00:15:19.970 --> 00:15:25.590
This enables a load balancer to assign IP addresses to the service object with a type of load

00:15:25.610 --> 00:15:27.050
Balancer.

00:15:27.050 --> 00:15:31.170
Popular bare-metal providers are kube-vip and MetalLB.

00:15:31.170 --> 00:15:38.930
Bare-metal load balancers watch for service type LoadBalancer and automatically assign an IP address from the pool.

00:15:38.930 --> 00:15:46.650
Minikube uses the bare-metal provider kube-vip for its control plane load balancer when used in HA mode.

00:15:46.650 --> 00:15:55.570
However, minikube then uses a specific tunnel protocol for assigning load balancer IP addresses

00:15:55.610 --> 00:16:01.450
for service type LoadBalancer, due to operating within the Docker network.

00:16:02.350 --> 00:16:06.730
So this is different than the kube-vip load balancer IP for the control plane.

00:16:07.730 --> 00:16:14.230
So kube-vip provides a load balancer IP for the control plane and high availability,

00:16:15.270 --> 00:16:19.810
but then you also need to provide a separate load balancer for services.

00:16:19.810 --> 00:16:24.710
So to set up an actual load balancer, we can also use a minikube add-on for MetalLB.

00:16:24.710 --> 00:16:30.070
We will need to provide a pool IP address range to the MetalLB load balancer when setting that up.

00:16:30.950 --> 00:16:37.670
On a cloud provider, the cloud provider manager will interact with an external load balancer API

00:16:37.670 --> 00:16:42.630
to provision a load balancer that provides the IP address to the LoadBalancer service.

00:16:44.310 --> 00:16:47.990
It will then populate the external IP field.

00:16:48.630 --> 00:16:53.830
A headless service is used to interface with other service discovery mechanisms such as DNS.

00:16:54.710 --> 00:17:02.950
A cluster IP address is not allocated, and kube-proxy does not interact with the service.

00:17:02.950 --> 00:17:08.470
To create a headless service, set the spec.clusterIP value to None.
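A headless-service sketch (the database service name and port are assumptions):

```yaml
# clusterIP: None means DNS records point directly at the individual pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None
  selector:
    app.kubernetes.io/name: db
  ports:
  - port: 5432
    targetPort: 5432
```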

00:17:08.470 --> 00:17:15.110
Headless services report the endpoint address of individual pods using internal DNS records

00:17:15.110 --> 00:17:18.710
served through the cluster's internal DNS service.

00:17:18.710 --> 00:17:23.670
Headless services are useful for database access where there is a primary and multiple secondary

00:17:23.670 --> 00:17:32.550
nodes. Headless enables specific database pod queries or writes using a service name address,

00:17:32.550 --> 00:17:39.270
even when a pod restarts and generates a new IP address. Another way of looking at

00:17:39.270 --> 00:17:45.350
headless services is that it provides a sticky connection to individual pods, regardless

00:17:45.350 --> 00:17:51.990
of pod restarts. Headless services are typically used with stateful workloads. Workloads can use DNS within the cluster

00:17:51.990 --> 00:17:53.030
to discover services.

00:17:53.030 --> 00:17:57.670
Kubernetes creates DNS records for services and pods.

00:17:57.670 --> 00:18:03.670
The kubelet running on each node handles DNS configuration for each pod.

00:18:03.670 --> 00:18:10.950
DNS enables running containers to look up services by name rather than IP address.

00:18:11.910 --> 00:18:16.470
Namespaces are important when using DNS queries.

00:18:16.470 --> 00:18:21.830
Imagine if you had two StatefulSet databases running in two separate namespaces.

00:18:21.990 --> 00:18:25.990
The DNS service string might be similar except for the namespace.

00:18:25.990 --> 00:18:31.990
Query strings contain a service name and namespace.

00:18:31.990 --> 00:18:34.990
It's under the format service-name.namespace.svc.
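Concretely, the in-cluster DNS names follow this pattern (the service names and namespaces here are hypothetical):

```
db-primary.production.svc.cluster.local   # service "db-primary" in namespace "production"
db-primary.staging.svc.cluster.local      # same service name, different namespace
```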

00:18:38.990 --> 00:18:42.990
All right, we're going to spin up a three-node cluster.

00:18:42.990 --> 00:18:46.990
I don't remember if these are this slow spinning up the other day when we tested it or not.

00:18:46.990 --> 00:18:49.990
All right, using the prior nginx deployment.

00:18:49.990 --> 00:18:50.990
Deploy an nginx replica.

00:18:50.990 --> 00:18:51.990
Next replica set.

00:18:51.990 --> 00:18:52.990
All right.

00:18:52.990 --> 00:18:53.990
Cluster IP service.

00:18:53.990 --> 00:18:55.990
One is the type noted.

00:18:55.990 --> 00:18:56.990
No.

00:18:56.990 --> 00:18:59.990
And what is the, yeah, that's correct.

00:18:59.990 --> 00:19:02.990
And what is the port for the service?

00:19:02.990 --> 00:19:04.990
And what is the external IP?

00:19:04.990 --> 00:19:07.990
Yep, you can try it again and see if it's not.

00:19:07.990 --> 00:19:09.990
I'm going to describe the service.

00:19:09.990 --> 00:19:11.990
All right, what is the IP address assigned?

00:19:11.990 --> 00:19:19.990
Okay, so we're going to open a browser and navigate to the IP address using HTTP.

00:19:19.990 --> 00:19:24.990
So not HTTPS or HTTP with the port shown.

00:19:24.990 --> 00:19:28.990
So there should be a browser on your desktop.

00:19:28.990 --> 00:19:31.990
I have to search for it.

00:19:31.990 --> 00:19:34.990
Yeah, there you go Firefox, huh.

00:19:34.990 --> 00:19:35.990
That's interesting.

00:19:35.990 --> 00:19:38.990
It was on the desktop on mine, but it's not on the desktop on yours.

00:19:38.990 --> 00:19:42.990
And then what you can do is you can shorten the height of the terminal,

00:19:42.990 --> 00:19:44.990
so move the terminal to the top of the screen,

00:19:44.990 --> 00:19:48.990
and then just shorten the height to maybe half the height of the terminal.

00:19:48.990 --> 00:19:54.590
Keep the width the same, and then put your browser in the bottom part maybe,

00:19:54.590 --> 00:20:00.270
and then make the browser just like a third of the screen high it's tricky trying to do all

00:20:00.270 --> 00:20:06.510
this in one small screen but we actually don't need that much for the browser showing you can

00:20:06.510 --> 00:20:10.830
actually shorten it down a little bit more and make the terminal a little deep and then we need

00:20:10.830 --> 00:20:17.630
the port you may have to open a new tab and get rid of that tab that's a start page tab let's open a new

00:20:17.630 --> 00:20:23.910
tab and then delete that tab and try it. All right, if it's going to time out, it'll time out here soon.

00:20:24.070 --> 00:20:32.330
It takes about a minute. Okay, so what happened? Well, we need an external IP address to access

00:20:32.330 --> 00:20:38.370
the service from the browser. So the cluster IP address is an internal IP address that is not

00:20:38.370 --> 00:20:47.070
directly accessible. So we're going to delete the cluster IP service. Okay, so cluster IP services have, you know,

00:20:47.070 --> 00:20:52.070
They have their usefulness when you're communicating pod to pod.

00:20:52.070 --> 00:20:58.070
And in this case, we need to be able to access the cluster itself from outside.

00:20:58.070 --> 00:21:01.070
So I'm going to create a node port service.

00:21:01.070 --> 00:21:02.070
There you go.

00:21:02.070 --> 00:21:03.070
Looks good to me.

00:21:03.070 --> 00:21:07.070
Yeah, so with a node port, the control plane said, hey, here you go.

00:21:07.070 --> 00:21:08.070
Here's 30,000.

00:21:08.070 --> 00:21:12.070
It's hard for me to read, but 30784.

00:21:12.070 --> 00:21:13.070
This is your node port.

00:21:13.070 --> 00:21:16.070
And if you had multiple node ports, they'd all be on the same thing.

00:21:16.070 --> 00:21:44.710
Okay, so now we're going to get the URL for the node port. What happened? Go ahead and test that in the browser. Yeah, go ahead and test that string with HTTP in the browser. Yeah, the cut and paste in these VNC terminals sometimes leaves a little debris behind. Why is it doing that? Oh, because it's grabbing your name. See how your name is following your cursor?

00:21:46.070 --> 00:21:49.510
So you actually copied your name from what's following your cursor.

00:21:49.510 --> 00:21:53.670
All right, there we go. That's what we were looking for. So unable to connect, right?

00:21:53.670 --> 00:21:59.270
Yeah, why is it? It says exiting due to service unreachable, service not available.

00:22:00.230 --> 00:22:05.910
No running pod for service nginx-service found. Well, let's

00:22:07.270 --> 00:22:13.510
All right, so let's check the node port service. What are we doing here? So let's look at this carefully.

00:22:13.510 --> 00:22:17.710
Let's look at a few things.

00:22:18.410 --> 00:22:19.330
Let's look at the labels.

00:22:19.490 --> 00:22:20.170
Do we have any labels?

00:22:20.210 --> 00:22:21.130
Do we have any selectors?

00:22:21.250 --> 00:22:21.470
Okay.

00:22:21.930 --> 00:22:23.050
Do we have any endpoints?

00:22:23.290 --> 00:22:23.770
Mm-hmm.

00:22:24.190 --> 00:22:24.930
All right.

00:22:25.050 --> 00:22:26.610
Now let's check the nginx deployment.

00:22:26.770 --> 00:22:27.130
All right.

00:22:27.250 --> 00:22:28.510
Let's scroll up a little bit.

00:22:28.610 --> 00:22:28.870
All right.

00:22:28.970 --> 00:22:30.050
What is our label?

00:22:30.050 --> 00:22:30.350
Okay.

00:22:31.170 --> 00:22:37.790
And scroll up to the node port service, and what is the selector?

00:22:37.890 --> 00:22:38.150
Uh-huh.

00:22:40.070 --> 00:22:41.010
That's correct.

00:22:41.110 --> 00:22:42.970
I would use the full because the app.

00:22:42.970 --> 00:22:46.970
app.kubernetes.io is the proper way to do things.

00:22:46.970 --> 00:22:47.970
Yep.

00:22:47.970 --> 00:22:48.970
All right.

00:22:48.970 --> 00:22:50.970
So we're going to fix it.

00:22:50.970 --> 00:22:52.970
Copy the nginx deployment

00:22:52.970 --> 00:22:58.970
YAML. Well, first, you can go ahead and delete the current deployment before it gets confusing.

00:22:58.970 --> 00:22:59.970
I'll go a little easier on you.

00:22:59.970 --> 00:23:01.970
Okay, so we're going to copy the nginx deployment

00:23:01.970 --> 00:23:03.970
YAML to nginx-app.

00:23:03.970 --> 00:23:04.970
Go ahead and X.

00:23:04.970 --> 00:23:05.970
Oh, okay.

00:23:05.970 --> 00:23:06.970
All right.

00:23:06.970 --> 00:23:08.970
Fix the labels to reflect the correct name.

00:23:08.970 --> 00:23:09.970
And they look good.

00:23:09.970 --> 00:23:11.970
I don't see any mistakes, and so.

00:23:11.970 --> 00:23:12.970
So good.

00:23:12.970 --> 00:23:13.970
Okay.

00:23:13.970 --> 00:23:14.970
Match string string.

00:23:14.970 --> 00:23:15.970
App.

00:23:15.970 --> 00:23:16.970
kubernetes.io/name.

00:23:16.970 --> 00:23:18.970
Oh, do we have a space in there or something?

00:23:18.970 --> 00:23:19.970
Something's not right.

00:23:19.970 --> 00:23:23.970
Selector is matchLabels; template labels, app

00:23:23.970 --> 00:23:24.970
.

00:23:24.970 --> 00:23:31.970
app.kubernetes.io/name: nginx.

00:23:31.970 --> 00:23:34.970
Okay, something.

00:23:34.970 --> 00:23:36.970
Template metadata label.

00:23:36.970 --> 00:23:37.970
Let me look at this.

00:23:37.970 --> 00:23:39.970
Must be a mistake in there somewhere.

00:23:39.970 --> 00:23:40.970
Oh, did you find it?

00:23:40.970 --> 00:23:46.830
Would you find it? Okay, there we go. And see, that's where, you know, being able to type in your

00:23:46.830 --> 00:23:53.470
own YAML and edit it is important. Because on those mistakes, you will spend time

00:23:53.470 --> 00:23:59.250
inefficiently. All right, we're going to check the service, describe the node port service.

00:23:59.390 --> 00:24:05.760
Notice anything different? Well, the selector, we haven't changed in our service. So that's correct,

00:24:05.760 --> 00:24:12.500
because we didn't label our service. We labeled our deployment, but we didn't label our

00:24:12.540 --> 00:24:17.740
service. Our service is using its selector to find the deployment and attach.

00:24:19.180 --> 00:24:27.860
Yeah, yeah, we have how many? Yeah, three to port 80, right? So it has now found the pods and

00:24:27.860 --> 00:24:32.380
grabbed their IPs. Now we're going to get the URL for the node port service. All right. Now we're

00:24:32.380 --> 00:24:40.940
going to test it in the browser. That's wild. What do you know? We just created a service and now we can

00:24:40.940 --> 00:24:44.120
connect. All right. Kind of cool.

00:24:44.140 --> 00:24:45.140
Cool, huh?

00:24:45.140 --> 00:24:47.140
And you figure out how that works the first time, right?

00:24:47.140 --> 00:24:48.140
All right.

00:24:48.140 --> 00:24:50.140
Now we can all that work and that's what we're going to do.

00:24:50.140 --> 00:24:51.140
We need it?

00:24:51.140 --> 00:24:52.140
Yeah.

00:24:52.140 --> 00:24:56.140
No, so when you do, yeah, so when you do service,

00:24:56.140 --> 00:25:02.140
minikube service URL, it's using the IP address of the node.

00:25:02.140 --> 00:25:07.140
So the node IP is 192.168.49.2.

00:25:07.140 --> 00:25:10.140
And that's what minikube just assigns to node 1.

00:25:10.140 --> 00:25:12.140
That automatically assigns that.

00:25:12.140 --> 00:25:17.260
Because it's that way on mine as well, I'm on a separate bare metal machine.

00:25:18.160 --> 00:25:22.240
So what you're doing is you're using the node IP address.

00:25:22.520 --> 00:25:27.600
And it does assign a cluster IP, so you can still hit it from another deployment.

00:25:28.300 --> 00:25:31.440
So the deployment could hit internally the cluster IP.

00:25:31.720 --> 00:25:33.800
You just can't access that cluster IP externally.

00:25:33.840 --> 00:25:40.880
So you could run nginx deployment number two and say, okay, I just want that to query

00:25:40.880 --> 00:25:47.020
nginx deployment number one, and you're going to query it through the service with the cluster

00:25:47.020 --> 00:25:50.960
IP, and you can do that. You just can't do it externally. Kind of cool, though, when you see that

00:25:50.960 --> 00:25:58.800
finally take place and you get your first service into the cluster. So we'll delete that

00:25:58.800 --> 00:26:05.460
and then we're going to have some more fun. All right we're going to create a load balancer service

00:26:05.460 --> 00:26:10.820
and so your deployment is unaffected. It's still running just fine, right? So it's doing

00:26:10.880 --> 00:26:14.680
what it's supposed to do in the cluster, we just don't have a service to attach to it.

00:26:14.680 --> 00:26:20.280
And then as soon as you get that typed out, then go ahead and deploy it.

00:26:20.280 --> 00:26:21.880
I'll be right back.

00:26:21.880 --> 00:26:23.880
So we're not using the nginx-deployment

00:26:23.880 --> 00:26:24.880
YAMLs.

00:26:24.880 --> 00:26:29.670
We're using the nginx-app YAMLs, I think.

00:26:29.670 --> 00:26:32.670
Yeah, yeah, good idea, though.

00:26:32.670 --> 00:26:38.670
You're onto my ruse on how I teach you certain things.

00:26:38.670 --> 00:26:42.010
So I'll have to get more creative.

00:26:42.010 --> 00:26:44.770
And the other thing, what's that?

00:26:44.770 --> 00:26:46.770
Yeah, pending.

00:26:46.770 --> 00:26:50.790
Yeah, yeah, another set.

00:26:50.790 --> 00:26:53.310
So we check the service, we can see that it's pending.

00:26:53.310 --> 00:26:57.890
We still have a node port on 8080, right?

00:26:57.890 --> 00:27:00.790
And we're going to describe the load balancer service now.

00:27:00.790 --> 00:27:05.310
All right, so you can see we have endpoints, that's a good sign, right?

00:27:05.310 --> 00:27:06.310
What does endpoints mean?

00:27:06.310 --> 00:27:12.310
It means that your service has found your backend pods, right?

00:27:12.310 --> 00:27:15.190
It found the IP addresses, so we know that connection's good.

00:27:15.190 --> 00:27:18.230
So when you troubleshoot, if you have endpoints

00:27:18.230 --> 00:27:20.950
and they match the number of pods you're expecting,

00:27:20.950 --> 00:27:23.230
then you know that that connection,

00:27:23.230 --> 00:27:25.390
that connectivity, is working.
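The troubleshooting step just described can be sketched like this (the service name is hypothetical):

```shell
kubectl get endpoints nginx-service     # expect one IP:port per ready pod
kubectl describe service nginx-service  # look at the Endpoints: line

# Empty endpoints usually mean the service's selector doesn't match
# the pod labels; compare the two sides:
kubectl get service nginx-service -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels
```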

00:27:25.390 --> 00:27:30.010
Okay, now, but we see pending, right?

00:27:30.010 --> 00:27:32.390
Check the get service again.

00:27:32.390 --> 00:27:33.710
Let's see if it gave us an IP.

00:27:33.710 --> 00:27:34.630
Nope.

00:27:34.630 --> 00:27:38.710
Yep, all right, open a second tab in the terminal.

00:27:39.830 --> 00:27:41.790
There's gotta be a way to do that on that terminal.

00:27:41.790 --> 00:27:42.150
There you go.

00:27:42.150 --> 00:28:12.130
Open tab. Oh, create. So open tab, the next one, and then they took the little tab button away on that one. Interesting. Okay. Yep. Okay. So now we're going to create a tunnel. Remember we talked about before how minikube uses a tunnel to assign an external IP address. So we're going to run minikube tunnel. Of course, this differs from a real production cluster; it simulates a load balancer. And there we go. It says no errors. And it's telling us

00:28:12.150 --> 00:28:20.310
the route, and it's taking 10.96.0.0 to 192.168.49.2, which is our node IP address. Awesome.
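The two-terminal workflow here, roughly sketched (service name assumed):

```shell
# Terminal 2: create the route. This command blocks and keeps running,
# and may prompt for sudo depending on your OS.
minikube tunnel

# Terminal 1: the LoadBalancer service should now show an EXTERNAL-IP
# instead of <pending>.
kubectl get service nginx-service
```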

00:28:20.310 --> 00:28:25.950
Switch back to the other tab and get the load balancers. So yeah we have two tabs. Click

00:28:25.950 --> 00:28:30.510
the first one. Click the first one over here on the left at the top right below file.

00:28:30.510 --> 00:28:36.550
There you go. There you go. All right. Now let's get the load balancer service again.

00:28:36.550 --> 00:28:41.990
What changed? So the minikube tunnel created a network route, and the load

00:28:41.990 --> 00:28:48.070
balancer service was issued an IP address. Now, it's Docker magic, it's, it's

00:28:48.070 --> 00:28:54.870
Docker magic that you'll never run into in a real cluster. It's, uh, I hate even using it, but you know,

00:28:57.030 --> 00:29:07.010
it's minikube's way. All right, so now, yeah, the external IP is populated. In the browser,

00:29:07.010 --> 00:29:14.690
navigate to http://<external-ip>:8080, right? All right, is that the, uh, correct one?

00:29:14.770 --> 00:29:18.550
I think it was, so 10.107... is the correct one, right?

00:29:18.650 --> 00:29:19.630
Yep, yep, all right.

00:29:19.730 --> 00:29:20.350
So there we go.

00:29:21.670 --> 00:29:30.690
So we, yeah, so we assigned an external IP; minikube gives us this little fake load balancer thing.

00:29:31.450 --> 00:29:38.250
It simulated a cloud provider, where a cloud provider will provide an IP.

00:29:40.390 --> 00:29:42.950
And our load balancer worked.

00:29:43.590 --> 00:29:43.790
All right.

00:29:43.790 --> 00:29:47.070
So we have two ways we've learned, node port and load balancer.

00:29:47.710 --> 00:29:53.630
So we're going to clean up the minikube tunnel in the second tab with Ctrl-C and then delete just that tab.

00:29:54.590 --> 00:30:03.070
So the second tab. Yeah, there you go. And then now get your service again. Okay, and now do minikube tunnel --cleanup.

00:30:03.790 --> 00:30:10.830
I forgot to do that before. Ctrl-C and do it again: minikube tunnel --cleanup. Okay, Ctrl-C.

00:30:10.830 --> 00:30:12.950
that's weird

00:30:12.950 --> 00:30:14.450
okay so we created

00:30:14.450 --> 00:30:16.330
a load balancer with

00:30:16.330 --> 00:30:18.490
tunnel

00:30:18.490 --> 00:30:20.010
tunnel

00:30:20.010 --> 00:30:21.690
so using the prior

00:30:21.690 --> 00:30:24.090
nginx app

00:30:24.090 --> 00:30:26.070
and load balancer service

00:30:26.070 --> 00:30:27.950
so both are still installed let's check

00:30:27.950 --> 00:30:29.270
again our load balancer

00:30:29.270 --> 00:30:31.130
and our nginx

00:30:31.130 --> 00:30:31.710
that we can just

00:30:31.710 --> 00:30:33.890
yeah get

00:30:33.890 --> 00:30:34.770
deployment

00:30:34.770 --> 00:30:36.330
service

00:30:36.330 --> 00:30:39.250
and make sure those are both

00:30:39.250 --> 00:30:39.650
running

00:30:40.830 --> 00:30:51.710
and the get service shows pending, right? Yeah, all right. So now we're going to do this with

00:30:51.710 --> 00:30:57.230
MetalLB. Let's see if this works on yours; this worked online yesterday. So we're going to install Metal

00:30:57.230 --> 00:31:04.350
LB. There are several ways to do it; this seems to work: minikube addons enable metallb.

00:31:04.350 --> 00:31:10.430
All right, kubectl get pods in all namespaces, so -A. Yep, you've got a speaker

00:31:10.430 --> 00:31:12.270
and we've got a controller.

00:31:13.070 --> 00:31:13.590
All right.

00:31:13.670 --> 00:31:15.850
And we're going to get the minikube IP address subnet.

00:31:15.970 --> 00:31:18.770
And, of course, it's 192.168.49.2.

00:31:18.770 --> 00:31:21.470
So now what we have to do is we have to feed it.

00:31:21.590 --> 00:31:26.750
Remember, we have to give it a pool, a load balancer IP range in the same segment.

00:31:28.130 --> 00:31:32.870
And so what we're going to do is we're going to take that, the first three octets, 192.168.49.

00:31:33.790 --> 00:31:38.450
And then instead of dot 2, we're going to do dot 50 for the first one and dot 60 for the second one.

00:31:38.450 --> 00:31:41.810
So we're going to give it a pool from 50 to 60.

00:31:42.610 --> 00:31:48.010
And we're just doing that because we're taking a wild guess that there won't be any collisions for those ten addresses.

00:31:48.310 --> 00:31:53.810
You probably wouldn't do this in production, because there you'd be given a proper range.
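The addon flow being demonstrated looks roughly like this; the exact prompts and the ConfigMap location may vary by minikube version:

```shell
minikube addons enable metallb

# Prompts interactively for the pool boundaries, e.g.:
#   -- Enter Load Balancer Start IP: 192.168.49.50
#   -- Enter Load Balancer End IP:   192.168.49.60
minikube addons configure metallb

# The addon stores the range in a ConfigMap; inspect it to confirm:
kubectl get configmap config -n metallb-system -o yaml
```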

00:31:54.550 --> 00:31:54.990
All right.

00:31:55.090 --> 00:31:57.170
So it says everything looks good.

00:31:58.650 --> 00:32:00.630
And now, well, we're going to get this.

00:32:00.970 --> 00:32:03.490
We're going to get the service and see what do we have now.

00:32:04.410 --> 00:32:06.030
Yeah, kind of cool, huh?

00:32:06.550 --> 00:32:07.030
All right.

00:32:07.230 --> 00:32:07.630
All right.

00:32:07.690 --> 00:32:08.910
This is our config map.

00:32:08.910 --> 00:32:10.930
And you can see the load balancer

00:32:10.950 --> 00:32:14.950
address pool IP range that we created just now.

00:32:14.950 --> 00:32:15.950
Okay.

00:32:15.950 --> 00:32:18.950
Now we're going to test the load balancer in the browser.

00:32:18.950 --> 00:32:23.950
Navigate to the external IP with service port in the browser.

00:32:23.950 --> 00:32:24.950
8080.

00:32:24.950 --> 00:32:27.950
Yeah, so we did it with MetalLB instead of tunnel.

00:32:27.950 --> 00:32:29.950
We skipped the tunnel.

00:32:29.950 --> 00:32:32.950
Now we're going to check the nginx deployment by our label.

00:32:32.950 --> 00:32:34.950
We want to look for everything.

00:32:34.950 --> 00:32:35.950
that has the label

00:32:35.950 --> 00:32:37.950
app.kubernetes.io/

00:32:37.950 --> 00:32:38.950
name equals nginx.

00:32:38.950 --> 00:32:39.950
There you go.

00:32:39.950 --> 00:32:40.950
We can see our pods.

00:32:40.950 --> 00:32:42.950
You can see a deployment.

00:32:42.950 --> 00:32:44.950
You can see a replica set.

00:32:44.950 --> 00:32:46.950
Now, the service doesn't have a label,

00:32:46.950 --> 00:32:49.950
but if you would have added a label to the service,

00:32:49.950 --> 00:32:51.950
because remember the labels are none,

00:32:51.950 --> 00:32:53.950
then it would show up as well.

00:32:53.950 --> 00:32:55.950
So it makes it nice for querying.

00:32:55.950 --> 00:32:58.950
So if you have a specific workload,

00:32:58.950 --> 00:33:00.950
you could give it all the same, you know,

00:33:00.950 --> 00:33:04.950
specific label, and it will pull everything up,

00:33:04.950 --> 00:33:09.950
even if it's in multiple namespaces,

00:33:09.950 --> 00:33:13.350
because you would just do, I believe, -A.

00:33:13.510 --> 00:33:16.230
So I believe you can pull it up across multiple namespaces.

00:33:16.770 --> 00:33:20.830
I think that you could try it and see if it will look at a different namespace, or maybe we'll keep that as a

00:33:20.830 --> 00:33:21.310
bonus.
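The label queries discussed above, sketched (the label value nginx is assumed):

```shell
# Everything in the current namespace carrying the label:
kubectl get all -l app.kubernetes.io/name=nginx

# The same query across all namespaces:
kubectl get all -l app.kubernetes.io/name=nginx -A
```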

00:33:21.550 --> 00:33:22.430
Yeah, all right.

00:33:22.750 --> 00:33:26.630
So now, so the second part of this, now we have services down.

00:33:26.710 --> 00:33:27.470
Let's go to DNS.

00:33:27.470 --> 00:33:30.630
I'm going to create a DNS pod for querying.

00:33:30.730 --> 00:33:30.890
Yep.

00:33:31.030 --> 00:33:31.910
Continue creating.

00:33:32.130 --> 00:33:32.610
All right.

00:33:32.690 --> 00:33:33.190
We're running.

00:33:34.330 --> 00:33:35.110
What do we have?

00:33:35.270 --> 00:33:37.150
Everything looks good in our events.

00:33:37.630 --> 00:33:38.750
It tells us it pulled the image,

00:33:39.950 --> 00:33:42.710
created the container, started the container.

00:33:42.710 --> 00:33:43.950
That looks good.

00:33:43.950 --> 00:33:45.510
Now we're going to practice a query.

00:33:45.510 --> 00:33:48.650
We're going to exec using the interactive terminal

00:33:48.650 --> 00:33:51.070
into the dnsutils pod.

00:33:51.070 --> 00:33:54.530
And that might be, hang on, let me look at some.

00:33:54.530 --> 00:33:59.730
Okay, I think that's a single hyphen between nslookup

00:33:59.730 --> 00:34:01.990
and dnsutils.

00:34:01.990 --> 00:34:06.130
I think, yeah, that's unfortunately,

00:34:06.130 --> 00:34:08.470
occasionally my single hyphens and double hyphens

00:34:08.470 --> 00:34:09.190
turn into that.

00:34:09.190 --> 00:34:11.750
So it could be a double hyphen, I think it's a double hyphen.

00:34:11.750 --> 00:34:17.190
When you save it, it has a tendency to go and change that.

00:34:17.190 --> 00:34:18.190
So let's take it again.

00:34:18.190 --> 00:34:21.190
Yeah, we need two in there.

00:34:21.190 --> 00:34:28.190
Yeah, I believe so, but it might be in front of nslookup.

00:34:28.190 --> 00:34:29.190
Let's see.

00:34:29.190 --> 00:34:32.190
Okay, yep, so that's the way it should be right there.

00:34:32.190 --> 00:34:33.190
Okay, that's correct.

00:34:33.190 --> 00:34:36.190
And so what we're looking for is kubernetes.

00:34:36.190 --> 00:34:38.570
default.svc.cluster.local.

00:34:38.570 --> 00:34:40.210
And that's similar to when we would be using

00:34:40.210 --> 00:34:42.950
the headless service and we want to query

00:34:42.950 --> 00:34:43.950
through the headless service.

00:34:43.950 --> 00:34:45.510
It's gonna be a similar string

00:34:45.510 --> 00:34:47.350
just with the name of the pod and the namespace.

00:34:47.350 --> 00:34:49.730
That's a default namespace.

00:34:49.730 --> 00:34:53.270
So we looked at Kubernetes in the default namespace.

00:34:53.270 --> 00:34:54.810
And we got a Kubernetes service

00:34:54.810 --> 00:34:57.570
in the default namespace,

00:34:57.570 --> 00:35:00.030
because it is a service, we're querying svc.cluster.local.

00:35:01.870 --> 00:35:03.090
So that tells us that it works.
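The lookup just performed, as a command sketch; 10.96.0.10 is the usual kube-dns/CoreDNS service IP in a default minikube cluster, but yours may differ:

```shell
kubectl exec -it dnsutils -- nslookup kubernetes.default.svc.cluster.local
# Typical output:
#   Server:   10.96.0.10   <- the kube-dns (CoreDNS) service ClusterIP
#   Name:     kubernetes.default.svc.cluster.local
#   Address:  10.96.0.1    <- the kubernetes API service ClusterIP
```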

00:35:03.090 --> 00:35:06.170
It tells us that we used 10.96.0.

00:35:06.190 --> 00:35:35.950
10. And for our query, I don't know if that's the pod IP address. You can scroll up and see 10.96.0.10. I'm not sure where that came from. 10.96.0.10. So give us a pod IP address. Interesting. Yeah. Oh, yeah. So you could do kubectl get pods -o wide. It doesn't look like it. No. So dnsutils is 10.244.2.4. So anyway.

00:35:36.190 --> 00:35:41.190
Okay, so it tells us what node it's on, node three.

00:35:41.190 --> 00:35:45.190
Okay, so now we've queried internally.

00:35:45.190 --> 00:35:49.400
Now, let's do an external.

00:35:49.400 --> 00:35:51.400
So let's see if this thing can communicate with the outside.

00:35:51.400 --> 00:35:52.400
All right.

00:35:52.400 --> 00:35:56.400
Yeah, so we have internal DNS, we have external DNS.
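And the external check, assuming the same dnsutils pod:

```shell
# Resolving a public name proves the cluster's upstream DNS path works:
kubectl exec -it dnsutils -- nslookup example.com
```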

00:35:56.400 --> 00:35:59.400
And I will tell you something about DNS and Kubernetes.

00:35:59.400 --> 00:36:01.400
It is DNS hungry.

00:36:01.400 --> 00:36:06.400
If you have bare metal and you're using a DNS resolver

00:36:06.400 --> 00:36:09.400
between your bare metal and the outside world

00:36:09.400 --> 00:36:12.400
and your DNS resolver is weak,

00:36:12.400 --> 00:36:15.400
it may work perfectly on browsing or doing anything else

00:36:15.400 --> 00:36:17.400
in your internal data center.

00:36:17.400 --> 00:36:19.400
If it's weak, Kubernetes will find this weakness

00:36:19.400 --> 00:36:21.400
and bring it down.

00:36:21.400 --> 00:36:25.400
So that's something that I tell all of my clients,

00:36:25.400 --> 00:36:27.400
oh, I don't have any problem with my DNS.

00:36:27.400 --> 00:36:29.400
I'm like, well, with browsing, you won't notice it

00:36:29.400 --> 00:36:33.400
because it'll automatically send that query again

00:36:33.400 --> 00:36:34.400
and return your page.

00:36:34.400 --> 00:36:35.400
It'll just be a delay.

00:36:35.400 --> 00:36:36.400
It doesn't give you

00:36:36.400 --> 00:36:41.760
a pop-up that your DNS failed. But Kubernetes will find that weakness, and it'll break your

00:36:41.760 --> 00:36:46.320
DNS if you have a DNS resolver that's weak. You have a strong DNS, nothing to worry about.

00:36:46.320 --> 00:36:50.800
So those are the cautions of running Kubernetes heavily in an internal data center.

00:36:50.800 --> 00:36:56.960
All right, we're going to delete the dnsutils pod. All right, let's do our review.

00:36:56.960 --> 00:37:05.680
In lesson five, we learned how service objects work, the three primary types of service objects,

00:37:06.400 --> 00:37:10.940
cluster IP, node port, and load balancer,

00:37:10.940 --> 00:37:15.640
how to create a load balancer using the minikube tunnel,

00:37:15.640 --> 00:37:19.660
how to create a load balancer using MetalLB,

00:37:19.660 --> 00:37:23.580
how DNS works within the cluster,

00:37:23.580 --> 00:37:26.720
and how to verify that the internal DNS is working,

00:37:26.720 --> 00:37:29.080
and how to verify that the external DNS is working.

00:37:29.080 --> 00:37:32.120
All right, any questions on DNS and services?

00:37:32.120 --> 00:37:35.200
So it's kind of neat to see how that's all done.

00:37:35.200 --> 00:37:42.960
And a few more lessons, we'll get into Ingress in the fun of Ingress working with it.

00:37:43.040 --> 00:37:47.540
So I think we're ready for another break because we're going until four.

00:37:47.660 --> 00:37:48.900
So let's take a 15 minute.

00:37:49.000 --> 00:37:50.460
This is our last break for the day.

00:37:51.460 --> 00:37:56.100
And it's 2:55, so we'll come back at 3:10, and then we'll go from 3:10 to 4.

00:37:56.300 --> 00:37:56.680
All right.

00:37:57.180 --> 00:37:57.660
See in a few.