WEBVTT
Okay, so let's see what we were discussing.
So if you do a cat ~/.kube/config,
you will see this file, and it has,
this is our key, this is the certificate,
this is the user, well, the equivalent of a user.
So it has a lot of information.
Actually, here are all the clusters that we have installed.
And the tool that we use, kubectx,
what it does is change context.
I mean, in this file,
what you will find is each one of the contexts,
or each one of the clusters.
And you can see that it is a general file.
It has an apiVersion.
And with kubectl config view,
you can see roughly the same,
but in a more organized view.
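The inspection commands above, roughly as typed in the session (the kubectx context name below is a guess; check `kubectl config get-contexts` for the real one):

```shell
# Raw kubeconfig: clusters, users (client cert/key), and contexts
cat ~/.kube/config

# Same data, reorganized and with secrets redacted
kubectl config view

# List contexts; kubectx is a third-party convenience wrapper for switching
kubectl config get-contexts
kubectx kind-kubernetes-goat-cluster   # context name is an assumption
```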
Okay.
So let's go to our labs.
So is there any question about what we have seen so far?
Okay.
Okay, so let's see the following.
So it turns out that for securing a Kubernetes cluster,
there are, not different approaches,
but different best practices.
And the first one is related to images, to image handling.
I mean, in this situation, what you need to do is
be sure to use base images that are trusted,
to scan your images for vulnerabilities,
and to avoid putting a lot of software in your images.
So that's part of this step.
What we're going to do in these labs,
there are some labs here, is to see what happens,
or what kind of stuff we can find,
when you don't do a proper job securing your image
or a proper configuration of your image.
So this first lab shows us what happens
when you leave behind, by accident or on purpose,
for example, the password for your database
or your AWS access key, or whatever,
which is something super common, super, super common.
So this is the idea of this first lab.
Okay, let's do that.
So let's be at this directory.
I'm going to split this in two.
So we have this folder called labs.
And let's go there, labs.
And please go to the goat folder.
So we are going to move now to the goat cluster.
So I need you to execute kubectx with the kind goat context.
What this command does is
move us to the different cluster,
to the goat cluster, which is,
this is a cluster in which we have several scenarios
for hacking purposes.
This cluster is super beautiful.
So we can play a lot there if you want.
So before executing anything,
we are going to check where we are with kubectl
get all --all-namespaces.
In the end you start learning
or memorizing all of these commands
because they're kind of similar.
Yeah.
What does Goat stand for?
Goat.
Yeah, it's called that because it's a project
from this guy who created this environment.
But I'm not sure, let me look here.
Actually, this is Kubernetes Goat.
So I'm not really sure if it stands
for anything in particular, but probably.
Okay, so if you run this, get all namespaces,
you will see a lot of stuff.
I'm going to maximize this terminal just for a while.
So these are the different pods that are running here.
Please check that all of your pods are running,
that there is no CrashLoopBackOff or something like that,
because that will affect the lab.
These are different scenarios to work with.
We're going to try to work with most of them,
but we probably won't be able to do all of them.
There are the services, the services published here.
The DaemonSets, the Deployments.
So just for the sake of curiosity,
if you want to check one of those deployments,
kubectl get deployment/, let's see, for example, this one.
You can take a look at whatever you want.
So I used the health-check deployment.
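A sketch of the inspection commands from this part (the deployment name is taken from the transcript and may differ in your cluster):

```shell
# Everything running in the cluster, across namespaces
kubectl get all --all-namespaces

# Drill into one deployment; watch for pods stuck in CrashLoopBackOff
kubectl describe deployment/health-check-deployment
kubectl get pods
```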
Yeah.
So it shows.
So what is this unknown state?
I don't know what that means.
Oh, this unknown, I haven't seen that state.
This is not the first time.
Let's stop.
Is that a pod?
Oh yeah, it's a pod.
Yeah.
Let's take a look.
Oh yeah.
Actually, there are three pods here.
Two are in unknown state and one is running.
Is this the same for you or not?
No, you have only two.
One unknown state.
Okay.
Unless everything is good.
Okay, let's see.
I think this is one scenario in which the image,
in one of its layers, has information or a vulnerability.
So let's see when we reach that to see what it is.
Okay, so checking the deployment now,
you can see this is like a standard one, it's very basic.
The container exposes a port, it's super simple.
And that's it.
So this is the scenario, the one that we are going to work on.
It shows a popular mistake, a super common mistake,
which developers and DevOps make
when they package their artifacts or their applications.
So the idea is to test different misconfigurations,
identify an entry point within the web application,
and then see tools that help us identify
these kinds of errors so we can solve them.
So what we are going to do is to run this script.
Ah, before that, let me show you something.
Just for navigation purposes, with docker ps,
you can see the containers that we are running.
All these clusters are running on top of Docker,
the clusters that we have.
And this one in particular is the goat cluster,
which only has one container, one node,
but the others have one worker and one control plane.
And there is this script, access-kubernetes-goat.
What it does is export,
on some of the ports of this system,
ports 1230 to 1236,
the different scenarios that we're going to work on.
So let's run this script with ./access-kubernetes-goat.sh.
So now, did everyone get what this script does?
What this script does is execute a command
which is called port-forward.
And with port-forward, what we do is,
so actually, you know what?
No, don't execute the script.
Let's do it by hand first.
That gives us a better sense of what we want to do.
So just execute cat access-kubernetes-goat.sh.
And we are going to expose the first scenario,
which is sensitive keys.
In order to do that, we need to identify the pod
that is running that scenario, which is here.
So you can copy these lines and execute them.
So if you take a look, this command,
what this command does is get pods,
but with more information.
So let me show you a simpler way to do that,
because if you take a look at the command,
it's kind of complex.
kubectl get pods --namespace default.
Okay, that part is simple.
It says use the default namespace,
but then there is -l, which is a label selector.
So what it does is look for pods
labeled with app=build-code,
and give the output in JSON,
but what I want from the JSON is this key,
this specific key, via a JSONPath.
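Put together, the label-plus-JSONPath lookup and the port-forward look roughly like this (the label, ports, and address are assumptions based on the scenario):

```shell
# Find the scenario pod by its label; print only its name via JSONPath
POD=$(kubectl get pods --namespace default \
      -l app=build-code \
      -o jsonpath='{.items[0].metadata.name}')

# Expose the pod's application port (assumed 3000) on localhost:1230
kubectl port-forward --address 0.0.0.0 "$POD" 1230:3000
```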
Got it, okay.
Yeah, but I mean, the lazy way to do the same is,
well, actually we don't need to specify the default namespace,
because it is the default namespace.
We can do a kubectl get pods and grep.
Ah, actually, -o wide shows,
I think it shows us the,
so, kubectl get pods | grep code.
That will show us which one has "code" in its name.
But the thing is, I mean,
this is easy because there are not too many pods,
but when you have hundreds of pods,
you usually want to be sure.
So that is why the labels are important,
and these JSONPaths,
because this can be exported not only in YAML
but in JSON, and with JSON you can filter that way.
So what this is giving us is the name of the pod,
the specific pod that is running this lab.
So we are going to copy that name.
And then we are going to expose that pod,
and the specific port of that pod, on our server.
So that will be --address 0.0.0.0.
So it's like this command,
but the pod name you need to change it, right?
Yeah.
So if it runs,
it locks the terminal.
And that's okay, let's leave it like that,
just to test it right now
and to understand what we are doing.
And then go to localhost.
This is the service that we exposed.
Where do I open the browser?
It's over on top, at the top.
And localhost, localhost colon 1230.
So the port is going to be?
1230.
So the objective in this scenario is to identify
the sensitive keys that are exposed,
that are available in the code base of this deployment.
This may include the application code,
the container, the infrastructure.
So here we are using our hacker hat.
The idea is to, okay, how can we do that?
So if you were a hacker and you are facing this website
and you want to explore if this website,
in this service, in this application,
there is something valuable for you.
I mean, like in this case, some keys, some secrets.
What will you do?
What do you think that will be the first approach
to execute?
And take into consideration
that we are outside. We are, okay, I'm here.
This is what I'm seeing.
What should I do?
What do I do?
I would go and right-click and inspect.
Right-click what?
Sorry.
I would go look at the page source.
For example, that's a great idea, yeah.
To see what's in there.
So there are, for example, different tools
that we may use.
Gobuster is one tool.
DirBuster is another tool.
But there is a simpler one,
which is, actually it's unbelievable,
but it's super common.
And it's that if you,
I mean, you're familiar with a Git repo's structure, right?
Yes, yes, yes.
So where is the information in a Git repo's structure?
I mean, like the basic configuration of the repo.
Exactly.
So it's unbelievable, but there are hundreds,
if not thousands, of websites
which have the .git/config exposed.
So in this case, this is one.
So if you take a look.
So there are tools that allow us to find this via brute force,
but we are not going to do that right now,
because, I mean, let's go straight to the point,
but that's the idea.
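Checking whether a site exposes its Git metadata can be as simple as one request (the URL assumes the lab is forwarded to localhost:1230 as in this session):

```shell
# If this returns an INI file with [core] / [remote ...] sections,
# the repository metadata is exposed
curl -s http://localhost:1230/.git/config
```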
So it turns out that, okay, you may think, okay,
but there is not much information here.
So what can I do with this information?
So it's, this is, I mean, this is amazing.
I will say that, let's, you have your two terminals, right?
Yeah.
Let's move to our workspace, the k8s directory,
so the other directory.
Move to this directory.
So there is this tool called git-dumper.
Execute it here, and let me show you git-dumper.
Ah, well, before that, let me show you one of the others,
Gobuster.
Gobuster, this is a tool for brute forcing
that allows you to explore URIs.
It's in Go, so you can download it here and install it.
Perhaps we can do that later if we have time,
but it will allow us to find stuff like that.
And this one that we're going to use, git-dumper,
is a tool that, if you have a .git/config around
or a .git directory exposed, you can use it to rebuild
the repo.
So it's super, super cool.
So what we're going to execute here is git-dumper,
and we know the URL, let me,
let me copy, yeah.
That's the path to the exposed .git.
And we are going to create a folder for this repo.
I'm going to call it like this, k8s.git.
Let me copy this for you.
And let's run it.
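A sketch of the git-dumper invocation (exact flags depend on the installed version; the URL and target folder follow the session's setup):

```shell
# One common way to get the tool
# pip install git-dumper

# Rebuild the repository from the exposed .git directory
git-dumper http://localhost:1230/.git k8s.git

cd k8s.git && ls -la   # the working tree is reconstructed locally
```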
And this is like, wow.
So, I mean, the first thing that you will need to do is,
okay, let's explore, because I mean,
it's super common to have your website in a Git repo.
That's something, or a service, or something in particular.
So what you want to do is, okay,
let's scan all my websites which are running
this kind of platform, or actually any platform,
and see if that .git config file is available.
So this is a typical scan on the internet.
And if that's the case, like this one,
you will have a repo created here.
This is all your code base.
If you do an LS, okay, I can see some stuff.
Let's understand this repo.
So the first thing that you probably will do
is git log to see, okay, what is in there.
And here, you need to, I mean,
we are trying to grab the most information
that we can in order to access
or to exploit this information.
Because, I mean, we have a copy of the website,
but that doesn't have much value on its own.
We are looking for customer data, or keys,
or passwords, stuff like that.
So in git log, probably we may see stuff interesting.
There is actually, I'm going to show you.
There is a commit with environmental variables.
So I'm seeing the git log.
I can see the git status.
It looks like something.
I can do a find to see stuff like this.
I can look for something which is typical
in this kind of application, which is the .env file.
In this case, it doesn't show anything.
However, going back to the git log,
and if you take a look at the git log,
there is this commit whose
message talks about environment variables.
So take a look.
Take a look, and, as we have the whole repo,
we can check out that part of the repo
and see if it makes sense.
So we'll check out.
So if we execute that, we will have the information here.
And in this case, there is that .env file.
Execute an ls -la.
And in this case, you will see that .env file.
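The history walk just described, as commands (the commit hash is whatever `git log` shows for the environment-variables commit; it is elided here):

```shell
git log --oneline            # scan commit messages for anything interesting
git checkout <commit-hash>   # the commit that mentions environment variables
ls -la                       # the .env file appears in this revision
cat .env                     # the leaked keys live here
```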
There is a tool that allows us to do this as well.
I mean, in this approach, we are outside, right?
We are outside the system, and it was manual.
But we can be inside the cluster.
If you're doing the same for your own systems,
you may want to do it from the inside as well.
So first, analyze this.
This .env file has all the AWS access keys.
So it's a big deal, because you can deploy
a cryptocurrency miner
on this customer's AWS account, for example.
So let's see the other side of this,
I mean, being the administrator of the Kubernetes cluster.
So in this case, let's explore the pods that we have.
We are talking about this application,
this deployment.
So we can get a shell inside that pod.
So in order to access that pod, there is this command,
exec, which works the same as in Docker.
kubectl exec -it, then the pod, then dash dash, then the shell.
So let me copy that for you.
Now my system is faster, so I can.
So if I execute that, it will take me inside the pod.
So I'm inside the pod.
I can run some commands here, ps.
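Getting the shell mirrors `docker exec`; a sketch (the pod name placeholder stands for whatever the earlier lookup returned):

```shell
# Interactive shell inside the running pod
kubectl exec -it <pod-name> -- sh

# Once inside:
ps aux   # only a handful of processes, versus hundreds on the host
```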
Let's take a moment here to explore what is a container.
So the first thing that you will notice is,
if I execute ps or ps aux,
it will show me like two or three processes running.
If I go to a different terminal,
let me send this to the background.
If I go here and execute the same,
I mean, you're talking 520 processes.
So that's the first difference that you will notice.
let me show you this.
Let me show you this.
The first thing that you'll notice here
is that process number one,
PID number one, in a Linux system
is usually init, because it's the one
that starts the services within the system.
But in this case, it is the application process.
So yeah, when you are inside a container,
you don't have too many tools,
depending on the container or the image.
For example, this one has a lot of tools.
I mean, a ping, you are not supposed to have ping
inside a container.
That's a bad practice.
You need to have the least possible installed.
Let's see what else we notice.
Okay, this has a lot of stuff,
this container, this image.
If I take a look at /usr/bin,
there is a ton of stuff.
So there is another lab in which we are going to exploit this
to access the host.
So, okay, in the meantime,
what I want to tell you is this, which is super important.
As we are so limited in commands in a typical container,
and I want to know as much as possible about an application,
we are going to use
the Linux knowledge that we have.
And it turns out that there is this system directory
called /proc, from processes,
that has the command line of everything that is running,
for each one of the processes.
So I can see what was executed to run this application.
But the command line of PID 1, in this case, is super simple.
So it doesn't show too much.
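The /proc trick works in any Linux environment, container or not; a minimal sketch:

```shell
# /proc/<pid>/cmdline holds the exact command line, NUL-separated.
# In a container, PID 1 is the application itself, not init,
# so this shows how the app was launched.
tr '\0' ' ' < /proc/1/cmdline; echo
```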
Okay, this is, okay.
So I was telling you that there is this utility,
which is called TruffleHog.
It's installed here
just for demo purposes.
But this tool, and let me show you the report,
is a tool that allows you to scan for credentials
everywhere.
I mean, you can look for credentials in GitHub,
in an image, in a directory, in a website.
So if we use it here,
hopefully it will be able to show us something.
So it does do a crawl down there.
So the solution for this kind of, I mean, this situation,
okay, it detected the credential.
So the solution for this, I mean,
as part of your pipeline,
in DevOps, the CI/CD pipeline,
the idea is that you execute these kinds of utilities
to scan your images,
because, I mean, this is just an accident.
The developer forgot to include that file in .gitignore.
So that may cost a lot in the end.
So this kind of stuff is what is supposed to be avoided.
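A sketch of running TruffleHog here (this uses the v3 subcommand syntax; adjust the target paths to whatever is installed in the lab):

```shell
# Scan a local directory for high-entropy strings and known key formats
trufflehog filesystem /app

# The same tool can point at a Git repo, a container image, or GitHub
trufflehog git file://./k8s.git
```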
So can this kind of scan be part of our cluster definition
itself, so that when...
Actually, that's a great question,
because yes, there are some policies.
I mean, there have been a lot of changes here.
There was something called Pod Security Policies,
in which you could define some policies
to handle stuff like that.
But that was actually deprecated
in a recent version.
And now we're talking about
Pod Security Admission.
Yeah, Pod Security Admission.
And in this case, it's like, let's call it a filter.
And this filter, when you send something
or try to deploy something,
will check that it fulfills the policies
that you have defined to be fulfilled.
So it doesn't check at this level,
but it checks if, for example,
the image was scanned or not.
So if the image wasn't scanned,
it won't allow you to deploy it.
So in this particular case,
what you need to do, in your pipeline,
when you are creating the image
that is going to be deployed inside Kubernetes,
is to do this scan, to check this stuff.
Yeah?
Yeah, yeah.
Okay, okay.
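Pod Security Admission is configured through namespace labels; image-scanning gates, as discussed, live in the CI/CD pipeline or in an admission webhook instead. A minimal sketch of the label side:

```shell
# Enforce the "restricted" Pod Security Standard for this namespace;
# pods that run as root or request extra capabilities are rejected
kubectl label namespace default \
    pod-security.kubernetes.io/enforce=restricted
```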
So let's do that as well.
Okay, so now are we going to...
It's okay, right?
Or is there any question?
So let me, let's bring that process back from the background,
and let's stop it.
And now that we are clear on what we are doing
with this script, we are going to run it completely,
because that will be easier for navigating.
So just stop it.
You bring the process back from the background with fg,
okay, and Ctrl-C, perfect.
And now we are going to run this script.
Okay, perfect.
So go to this URL,
and this is Kubernetes Goat.
I mean, there are a lot of labs here.
We are going to do some of those.
But since you can clone the repo,
you can do them by yourself later and explore more.
So we just did this one, the sensitive keys,
which is kind of beautiful.
Let's do the next one.
Let me check if this is the next one.
Let's do this one, this one.
The container escape to the host system.
So this is on 1233.
So let's go to localhost.
It's slow, it is slow again.
Perfect.
So this is one of the,
I mean, most dangerous attacks,
because a misconfiguration,
a common misconfiguration in Kubernetes
or in a container environment,
may give an attacker access to your host system.
I mean, the host system would be the actual server
which is running the Kubernetes nodes.
So if you have that access,
I mean, you can do a lot of bad stuff inside your company.
So this is a big case.
And well, this involves a lot of stuff.
So the idea here with this scenario
is that we're going to exploit a container
and escape out of it,
I mean, go up to our host system.
And this will show us why it's important
to run all the containers with non-root users.
Unfortunately, I would say the most common setup
is to run containers with privileges, as the root user.
It's super common, and it's super bad.
Okay, so in this scenario,
we are directly in the container.
We have access to the container.
And you can imagine that,
I mean, your application,
the web application or the service,
has a vulnerability that was exploited
and was able to provide a shell to an attacker,
which is actually fairly common.
And this would be like the second level
of what we are seeing here.
I mean, here we are assuming
that there was a vulnerability in your application
that allowed an attacker to get a shell
in that container.
So if that container is running as root,
as is the case here,
you can try to access the host system
from the container.
So there is a technology in Linux,
which is the capabilities.
Are you familiar with SELinux?
Have you ever used that or heard about it?
I mean, we have our sysadmins
and they do implement SELinux.
So I know that it is very restrictive.
Yes.
Sometimes it makes our lives really difficult,
but I think that's important as well.
Absolutely, absolutely.
Because there was this time
when everyone preferred to disable
or not enforce SELinux,
but that's like losing a lot of power and control.
So the Linux system has something called the capabilities.
And these capabilities are, for example,
the permission at a kernel level
to execute a ping,
to become a super user,
to write on the file system.
So it's really important.
So what we are going to do is use a tool called capsh.
And I'm going to show you here, capsh.
It comes with most distros.
It's a shell that allows you to access,
or to interact with, the capabilities of the Linux kernel.
So it shows you what capabilities are available.
So with capsh --print,
you will see all the capabilities
that this container has.
So the idea here is that,
I mean, this container has two things here.
One is that it's running as a root user.
And two, it has basically all the capabilities available.
If you take a look, for example,
there's the capability to change the owner of a file, CAP_CHOWN.
So you can change a file to a different owner.
The capability to change root, CAP_SYS_CHROOT.
So we're going to use that one.
The capability to kill a process, for example.
So an attacker would be able to kill my application.
And if you read everything here,
these are all the capabilities to install software,
to change the network.
I mean, it's super, I mean, it's the power of Linux at the end.
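The capability check, as run in the container (capsh ships with the libcap tools on most distros):

```shell
# Print the capability sets of the current process/shell.
# A dangerously configured container lists nearly everything:
# cap_chown, cap_kill, cap_sys_chroot, cap_net_admin, ...
capsh --print
```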
So one of these capabilities,
let me see, let me see, where is it?
The idea here is that, okay,
let's see which of these capabilities I can use
to gain further access to this victim
that I'm attacking.
So I can execute a mount command,
and that is because I have
CAP_SYS_CHROOT, the change root capability, available.
You know, in Linux, there is this concept of changing root.
So I have the root of one system,
but if, from this system, I can access another file system
locally,
I can move from my current system to that other system.
Yeah.
So this is what is used when you rescue
a Linux server that has failed.
So you start your server with a USB drive or with a CD,
and the USB drive starts a Linux system,
but in the hard drive or your server,
there is another Linux server, another Linux system.
So from the USB, you access the file system
in the hard drive, and you change root,
move from the USB to the hard drive,
and work in the hard drive and do stuff.
So what we're doing here is the same.
I mean, the container has a root,
and the container has access to the host file system.
For example, if the container is a web application,
and the files of the web application
are located on the local file system of my node
and are shared with the container,
I can change root to that, and from there, do stuff,
or see what I can do, depending on where I am.
So one, okay, here.
If one take a look here,
you will find the slash host system.
Oh, wait, it's low again.
So this host system is the,
well, my host system, I would say,
is where the container is running.
So if I take a look, oh, that's slash,
okay, it's there, host system.
If I host,
but wrong host system,
to see what is there.
Okay, it's not.
But if I host, but I'll ask, slash, that system.
So this is huge because,
like this, I can see what is inside that file system.
So I'm in the Docker, I'm in the container, sorry,
and the container has a folder
that reflects the file system underneath.
So what I'm going to do is jump from here to here.
This is what we're going to do.
So this is, change root is the command,
and host, dash, system.
And after that, I'm going to tell
which command I want to run.
And the command that I want to run is dash,
we see which is the standard shot.
Actually, before executing that,
we can explore a little bit that file system.
Let's see, okay, let's see if find exists
in /host-system.
Yeah, it exists.
So it shouldn't be there.
I mean, the less your image has, the better.
It should have nothing,
nothing beyond what your application needs to have.
That's super important.
So with find, I mean,
I can explore and understand everything.
And actually, what we are seeing here is this system.
Yeah.
It is this system.
So let me stop here and...
Okay, I can see all the commands that are there.
So anyway, I'm going to change root now.
chroot /host-system bash.
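The escape itself, condensed, assuming the host filesystem is mounted at /host-system as in this scenario:

```shell
ls -la /host-system   # the node's root filesystem, visible inside the pod

# Pivot the root: every command after this runs against the host's files
chroot /host-system bash
su -                  # pick up root's PATH and environment on the "host"
```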
Okay, I mean, it doesn't seem like anything,
like nothing happened there.
But if I execute an ls, I will see,
okay, something is different here.
Yeah, actually, I can even ls /home,
for example, to see what is inside.
Okay, it didn't show anything.
ls /root, let me see.
ls /var.
ls /etc.
Okay, so there's all that information.
So let's think again.
I mean, we had access to a shell
which is running in a container,
and we are exploring what we can do from that container.
As of right now, we were able to jump to the host system.
So what I can try here is,
as I know this is Kubernetes,
and usually Kubernetes uses Docker,
I can say, okay, let's try docker, docker ps.
So it didn't show anything.
Perhaps I have to use the full path.
So let's use our,
actually, let me see.
Okay, I mean, as I did a change root,
I usually don't have the PATH set up
or the environment variables set up.
So if I execute su -,
it makes me root with the proper environment variables.
So that way I will have better access
and don't need to know the path to all the binaries.
So let's try again, docker ps.
Oh, it's not there.
But I can see here, I mean,
with ps, I can see a lot of stuff here.
So probably I don't have
the proper path or something.
Let me,
okay, the Docker environment.
Let's see that file.
It didn't show anything.
Let's try,
okay, it's not there, but let's try this.
Let's see if it's in another folder.
Oh, it's not there.
I just realized something.
Okay, for example, if I just find the kubeadm conf,
perhaps I will have the kubeconfig here.
Let's see if we have it.
Yes.
Ah, but there is no conf there.
Let's see if kubectl is there.
Okay, I do have kubectl,
but it's not necessarily connected.
By default, kubectl
barely has any configuration.
So we can take that.
Okay, so let's see.
I mean, this must have, actually, I'm wrong.
We are not on the host machine,
but in the Docker,
in the Docker container that is running the node.
Yeah, that is why we don't have Docker,
because we are here inside this node container.
So there is no Docker inside this Docker,
but we do have other information.
So,
so let's go to /var/lib.
Okay, so here is kubelet.
So if we execute ps aux | grep kubelet,
we can see all the information on the kubelet,
and we can see, for example, the kubeconfig files.
So what we are going to try here is,
let's see if we can find the kubelet
config. There is a config.
Okay, and this is the kubelet file.
So we do kubectl --kubeconfig,
the config is, yeah, config,
then get all --all-namespaces.
Let's see if it works.
No, let's see, that one is not it.
It's the same error, I think you're seeing.
It's running.
I'm going to run it, let's try this.
It should be,
no worries here, because what we can do here,
once we are able to find the kubeconfig,
is that we can deploy another application here.
We can deploy another application on this node,
which is kind of huge.
I mean, this is the API server.
Let's try this one.
So kubectl
--kubeconfig, with this other kubeconfig.
Let's see if it works.
So it worked.
So what we're going to do is just try to deploy
Hello World application.
Just to prove the point.
The bad thing with this stuff is that you lose track of time.
So the idea is that we went for lunch at 12.30, yours,
but I guess that we can go at 1 p.m. for you.
It's okay.
So, kubectl with that kubeconfig.
I just sent you the command.
So let's try to deploy something.
Let's see if it allows us to do that.
Before that, let's see the notes.
Okay, we are able to see the custom.
Let's try to deploy an application.
Give me just a second because I'm looking for the YAML file.
Let's see if it works.
I need one, I found this one.
Okay, so.
Oh, there's enough.
You need to copy by hand.
That's a normal.
Yeah.
Actually.
Well, yes, because I have to wait a second.
No, but I can't copy given this is a time.
Let me check that I have it, if I have it right.
Well, it's okay.
Okay.
Let's see if it is able to deploy.
Oh yeah.
Well, that's the best way to learn.
Let your fingers learn.
I have something that I call password fingers.
I mean, I don't remember the passwords,
but my fingers do.
I let them type each time.
So what exactly, this is just a simple hello world.
Application.
Just a basic deployment.
Yeah, that is right.
Actually, you can open the file.
Okay.
Take a look.
Oh yeah.
Oh, it didn't, I mean, it failed, probably
on something.
But I think that, yeah.
Let me create something,
actually something simple.
Should I type the entire thing
or are we looking for another?
No, try to run it, because it failed in my case.
I'm not sure, perhaps on your machine it's able to deploy.
Okay.
I'm going to try to deploy nginx.
I think it's nginx.
So here it's able to run.
Okay.
I mean, in this case it failed as well,
but I think that we got the idea.
Obviously, I mean, there are plenty of options
to try this.
Let me think of an image,
because it is smaller than nginx.
In this case, it required something.
Okay.
I mean, in this case,
you would have to make some changes
to the configuration of the control plane.
But I mean, you have authentication against the API server,
kubectl works for you.
I mean, it's obviously limited,
but strong enough to install a crypto miner for sure,
I guess.
Okay.
So I don't know if there's any questions regarding this.
A lot or not?
No, no, no.
I mean, yeah.
I mean, if it had run,
probably I would have a better understanding
of how exactly this is going to have an impact.
But what I understood is that for this to work,
you at least have to be inside the Kubernetes cluster.
Not really.
In this case, you are inside the container.
The container.
So let's say that you have an application,
a Node.js application or Angular or whatever.
And that application has been compromised,
it has an exploit.
This exploit allows the attacker to get a shell in the container.
Yeah.
So I mean, that depends on your application per se.
So you were able,
the attacker was able to gain access
to your container via shell.
So that's an issue.
An application problem in the code,
it wasn't properly secured, whatever.
So in this case,
the attacker exploit a Kubernetes
or a container misconfiguration
in the sense that the container was running as root,
has all the capabilities of Linux enabled.
So he was able to, from the container,
jump to the host systems.
And once inside the host system,
which is the worker node or a control plane node,
he has access to everything that this node has,
which is the ability to run more containers,
to connect to other containers,
to destroy the node, remove everything, for example.
So that's where it's big.
And we are doing this because in the next,
in the next labs, we are going to eliminate the capabilities
and avoid the use of root.
So no matter if your application has an exploit
and is able to give a shell,
the person won't be able to do anything,
because you secured your container or your cluster.
Yeah, so I think you already answered my next question.
So basically, I mean, even if you have the vulnerability
that gives the shell access,
as long as the rest is secured,
hackers cannot do much. I mean, of course,
you need to secure your application as well.
You need to make sure there are no vulnerabilities,
but at the same time, have multiple levels of defense.
And that's the biggest issue with Kubernetes
because I mean, there is the application layer.
So the application per se, that's a whole another story
about how to secure that application.
You have the platform or the container
where your application is running.
So this container, which is the one
that we are talking about,
behind that you have the Kubernetes cluster.
Behind that or beneath that,
you have your virtualization environment
or your platform, or your servers.
So it's like four layers that you need to cover.
And actually, that's the disadvantage that we have
because the attacker, he only needs one entry point
to do whatever he wants.
In our case, we need to see everything.
We need to check the application, the container,
the cluster, the server, the virtualization.
It's super hard.
It's a lot of stuff to analyze
because in the first lab,
he was able to find just an entry point.
And from that entry point, he gathered more information.
In this case, the same.
He was able to cross one of your defenses.
Look at this in that way.
There are multiple lines of defense
and he reached the first one.
If you have a second strong line of defense,
okay, he did that, but he wasn't able to do anything else.
But in this case, he was able to go inside
and go deeper and deeper.
And we can stay here the rest of the day doing the stuff
and we will be able to do it.
So that's it.
That's it.
Okay, so let's see us in one hour
and have a good lunch.
Sorry.
And let's keep going.
Okay.
All right.
Okay.
Thank you.
See you then.
Bye.
Hi.
Hi.
Okay.
So let's move on.
Do you have the slide? Come on.
I was able to deploy the application.
I'm going to show you.
Okay.
So what went wrong there?
It was, I mean, there were more kubeconfig files
in the same directory, and one of them worked.
So give me just a second.
Okay.
Show you.
Okay.
So.
So now we're going to,
let me show you first.
So, okay.
So I was able to deploy this, this instance.
It turns out that, I mean, the error that we saw,
it's because, actually for security reasons,
the kubelet kubeconfig file, to name it somehow,
has this restriction.
It only allows creating mirror pods,
not what we were trying to do.
However, in the same directory,
which is /etc/kubernetes,
there are more files, more kubeconfigs.
And I used this admin one, admin.conf.
And with that admin, I was able to deploy,
to deploy another,
another pod.
So, tell me.
Go on.
Listen.
So with this admin kubeconfig,
we can try the deployment.yaml that we were trying.
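The move just described, roughly, as commands (assuming a kubeadm-style node where these files live in /etc/kubernetes; adjust paths to your setup):

```shell
# the kubelet's own kubeconfig is restricted (mirror pods only),
# but admin.conf in the same directory is cluster-admin
ls /etc/kubernetes/           # admin.conf, kubelet.conf, ...

# point kubectl at the more privileged kubeconfig instead
kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f deployment.yaml
```

This is exactly why host access from a container is so dangerous: the credentials sitting on the node are often far more powerful than the ones the workload was meant to have.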
Well, actually, yeah.
Okay.
Oh no, I have to type a lot.
Yeah.
It's super hard.
Let me look at the history.
This one, the hello world.
That's true.
Yes.
Yeah.
So we need just to change the master.
Oh, oh, master.
Yeah.
Oh, what happened?
This admin, that's what I remember.
Ah, yeah.
Sorry, it's admin.
I got confused.
I think that is taking some time.
Yeah, it's kind of like,
actually, I think I was able to copy.
Yeah.
Let me sort that out, because I was able to copy the text.
Okay.
Interesting.
It's there.
How did you copy?
Control-C?
Yeah.
Okay.
So in theory, control V should work as well.
It's here.
So we have run two scenarios so far.
The first one was like
a misconfiguration in the image itself.
So it needs to be addressed by,
when you create the image, scan the image
to see if stuff like that happens.
The second, which is this one,
we are addressing here two things,
which are the security context.
One is being able to execute as a,
or the container being executed as root.
And second,
the capabilities available.
The container has access to everything.
And next we're going to do one related to network policies
to be able to access from one application
to another in the same cluster.
And then the users permissions.
And see how it works.
And then we start to create policies to block that stuff.
Okay.
It's taking a lot.
We can leave it there and see later.
So let me show.
So what we're going to do now is this scenario,
which is called Kubernetes Namespace Bypass.
So it turns out that
by default, Kubernetes has a flat network.
Almost everyone assumes that if my application
is in X namespace,
there is no access to a different application
in a different namespace.
I mean, if I have namespace X and namespace Y,
they can communicate in the standard Kubernetes
with the flat network.
That implies that, for example,
if I have an application which has a cache store
or a local database or some data or something,
I can access from a different application to that.
So if in your cluster, you have 100 or 200 applications
and one of them has a vulnerability
that allows a hacker to somehow exploit or access or wherever,
that means that this hacker will have access to everything,
to all that you have in your cluster.
So that is pretty bad.
So actually a lot of the exploits or the hacks
that have happened have been because of this:
because an application had a vulnerability,
it was exploited, and then they accessed
a different application within the same cluster.
Okay, so we are going to see,
well, that flat scheme of the network,
and see what we can scan in the network,
and see if we can find other stuff
or other systems as well.
So in order to do this, what we are going to do is,
let me go here.
Okay, it's failed.
Well, this is a silly error to be honest.
It's here.
It says that it failed because it wasn't able
to find the resource mapping.
And there are no matches for kind,
the type of object or resource, Deployment,
in extensions/v1beta1.
This happened because this manifest,
this deployment, is probably too old.
The apiVersion says extensions.
And right now, Deployment is already in apps/v1,
I guess.
So it's just a matter of correcting this deployment,
correcting the apiVersion.
So this is a good thing.
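The fix sketched out: update the manifest's apiVersion (the old extensions/v1beta1 Deployment was removed in Kubernetes 1.16, and apps/v1 additionally requires an explicit selector). Names and image below are illustrative:

```yaml
# old, removed in Kubernetes 1.16:
#   apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world         # illustrative name
spec:
  selector:                 # mandatory in apps/v1
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello
        image: nginx        # placeholder image
```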
Okay.
Okay, so let's see our pods.
Let's again run kubectl get pods.
So there are different pods.
What we are going to do is we're going to launch a pod
called Hacker Container.
This pod has, well, it's for this lab.
It has a lot of networking utilities
that allow us to see, to explore the network
within the cluster.
As it is.
So for doing that,
sorry.
And we are going to run it.
Hacker Container.
Let's go.
Let's launch Hacker Container.
So we can copy this for you.
Okay.
Let's see the pods here.
Actually, we can see the pods that we created
to the other side here.
I'm going to remove those.
Okay, you should have by now the shell.
If you don't see the prompt coming up, just press enter.
It should show you something like this.
Let's do it.
And
just a second please.
Sure.
Okay.
So,
okay.
You can remove that.
Oh, no, perhaps you didn't create it.
I didn't create it.
Okay, no worries.
So we are using this container.
It's a shell, we don't have the dashboard here.
Let me check.
Let me try to install the dashboard in this cluster
because it would be useful to see easier.
So let's see if it works.
Let's try to run it.
Let's try if it installs.
Okay.
Okay, I think that we, let's see if we were able to.
Okay.
Let me copy the commands here for you.
It went backwards.
The one that I sent is the last one.
Yeah.
I need to be in this folder, in this directory that I am.
Okay.
When you execute this last one,
this is something called a token,
which I'm going to explain that later.
But actually what we just,
what we are doing here is first create a deployment
with the dashboard.
It has like two or three pods and a service
and some stuff for the dashboard to work.
And we are creating a user, a user with
role-based access.
And we are assigning a role for that user,
linking the user that we created
with the dashboard namespace,
so it can handle that.
And then we are creating a token for that user
to access the dashboard.
Okay.
So these.
Like how you do it in database,
like you create a schema and then you create a role
and then grant privileges.
Perfect analogy.
Exactly, exactly.
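What those manifests boil down to, in a minimal sketch (this mirrors the dashboard's conventional admin-user example; it grants cluster-admin, which is exactly the over-privileged pattern later labs will warn against):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin          # full control over the whole cluster
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

The database analogy holds: the ServiceAccount is the user, the ClusterRole is the set of privileges, and the binding is the GRANT.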
So this will show you a token.
You need to copy that because we will need it
and execute the command to proxy.
Okay.
Ah, but before the, before kubectl proxy, sorry.
Before that, let's see if our.
The dashboard.
The dashboard was created.
I don't see it.
Ah yeah, I do see it.
kubernetes-dashboard.
Oh yeah, and it's running.
So it's not failing on us.
So kubectl proxy.
This will open a port to,
in order for us to access the dashboard.
Oh God.
Yeah, because that way we can.
So what kind of role is granted?
I mean, you can check the YAML file.
Yeah, sure, sure, sure.
You can just open the, the,
the visual code in that directory
and you can take a look.
But we are going to do that step by step.
So, but it's, take a look.
Yeah, so it says the user, the object,
because actually in Kubernetes,
there is no such thing as a user,
but it's like the equivalent.
So the cluster role, which is cluster admin,
and it has control over the whole cluster.
And the service account, which is the admin user.
Service account is, it can be a user or an application.
Yeah.
So that's where you are linking that
with the API of authorization.
Got it, yeah, okay.
Okay, so here, let's go to the,
to the browser.
And it should be in the,
in the, you need to run first kubectl proxy, right?
No.
No, I'm sorry.
So.
No.
Oh.
Oh, this is a bit,
I'm not sure what are you doing.
Open up the dashboard.
This is the, on the API actually.
This is the URL that you need to open.
Yeah.
I just,
copy that.
I already opened one.
Okay.
Here.
I send you the full URL to the chat.
Okay, okay.
I need to put in the token.
Okay, thanks.
So how do you remember such a big URL?
No, no, no, no, I went to the,
to the GitHub repo.
And it's, well, actually it should be shown
or printed by the kubectl proxy command,
but it didn't do it.
I'm not sure where.
Because it's like, wow.
But if you take a look,
it's not that hard to decipher.
I mean, because it's the API, the version, the namespace,
the kubernetes-dashboard namespace, the service.
Yeah, but you have to remember the sequence of how.
Yeah, absolutely.
Yeah.
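For reference, the URL is not arbitrary; it follows the API server's service-proxy path scheme (assuming the dashboard is installed in the kubernetes-dashboard namespace, as in the standard manifests):

```shell
kubectl proxy    # serves the API on localhost:8001

# generic proxy path:
#   /api/v1/namespaces/<namespace>/services/<scheme>:<service>:<port>/proxy/
# for the dashboard:
#   http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```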
Okay.
Okay.
So, finally, let me introduce you to the dashboard.
The Kubernetes dashboard.
So just a quick review as we weren't able to do before.
So here in the top left,
you will find like a menu.
It shows you all the namespaces available.
It's set to default.
But if you choose all namespaces,
you will be able to see everything here.
The daemon sets, deployments, the jobs, the pods,
the replica sets.
There are three pods failing or that failed previously.
And actually we do have 21 pods running,
which is, well, the equivalent of 21 servers, right?
So the daemon sets, the deployments,
all of the stuff that is here, the jobs.
And if you remember the API, it's here.
It's the same structure.
Wow.
It's there.
Okay.
Yeah.
If you want to take a look at the pods.
So if I want to take a look at the pods
for a specific namespace, I can filter later.
And in these three buttons, you will see, for example, Logs.
So it will show you the logs for this pod.
Okay.
And going back, you can, for example, Exec,
which will give you the terminal,
exactly the same as kubectl exec -ti;
it's the same here.
So I'm not sure if this pod has a shell,
so let's wait a little bit.
Because actually, if you think about it,
you will realize that there is no real reason
for a container to have a shell.
I mean, you want to run an application,
a Python application, a Node.js application,
a Java application, whatever.
But is a shell needed to run the app?
In a lot of cases, there is no need for that.
So just doing that, you will reduce the attack surface
a lot.
So I'm not sure.
So how do you avoid that?
Come again?
How do you avoid that, getting the shell?
When you prepare the image, I mean,
you can use an image that doesn't have a shell.
There are a lot, actually, let me show you.
The images available, for example, the Alpine ones.
Alpine images.
So they have a lot of different images that you can use.
These ones, the minirootfs, the mini root filesystems.
Or you can build one in your case.
It's not showing anything.
I mean, whether there is a shell or not.
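The effect of a shell-less image is easy to see; a sketch, assuming a pod whose image was built from a distroless base such as gcr.io/distroless/static-debian12 (pod name is hypothetical):

```shell
# the image ships no /bin/sh, so exec'ing a shell simply fails
kubectl exec -it my-distroless-pod -- sh
# fails with: exec: "sh": executable file not found in $PATH (or similar)
```

An attacker who pops the application still has nowhere to land: no shell, no package manager, no network tooling.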
Let's try the other one.
It showed error as well.
So perhaps it's...
So what we can do is just go to the container
that we want.
Okay, the hacker container.
Yeah.
Yeah, that one.
Let's see, that one.
Okay, it's here.
The frame.
Okay, so here.
Okay, so it showed up instantly.
So it has a shell.
Well, actually, let me make sure, because it seems different.
Oh yeah, the one that we ran here was plain sh.
The one that is here is bash.
Anyway, so.
If you run ls, you will find it's too small.
I need to move here.
It's too small.
So.
So here you will find some utilities are available.
Actually we can use this container for other,
during the training.
Well, this is kubectl.
This is a, well, it's...
This is for checking privilege levels, escalation.
So there are a lot of stuff here.
And in user being, there are different tools.
Let's see if there is an end map.
Yeah, there is an end map.
For example, have you ever used it?
Are you familiar with it?
Okay, Nikto, Nikto is a web scanner.
I mean, it is able to probe a site.
So, for example, this may work with the first lab.
Okay.
There is another one that you checked here.
I have to check if it's here.
Yeah, there is another tool, zmap,
which is,
which is, it's similar to nmap.
What else we can see?
For example, we can, let's see if we have ip,
ip address show.
So with ip address show, you can
see what IP address this container has.
So you can see the network.
So what you can do is, let's do an nmap -sP.
The range was, I recall, 10.244.0.0/24.
Let's do,
let's see what other hosts are we able to see.
So we were able to see 16 hosts.
Yeah.
Let's try, what address do you have there?
With ip a, you can see.
So in this scenario, what we have is,
I mean, if I'm a hacker and I'm trying to understand
the network I have access to,
what I will do is first look at the IP range,
which is what we just did.
Then let's see what other machines are available.
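The reconnaissance steps just described, as commands (10.244.0.0/24 is a typical default pod CIDR; your cluster's range may differ):

```shell
# 1. find this container's own pod IP and subnet
ip address show

# 2. ping sweep the pod network: which other pods/hosts answer?
nmap -sP 10.244.0.0/24
```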
And here you will see a lot of stuff.
And actually what I want us to see is that
we are in this container and in this container, sorry,
and we are mapping all the cluster.
And we were able to see,
if we take a look at the dashboard,
the namespaces here.
You can duplicate this tab so you don't lose the shell.
With right button, duplicate tab.
Take the duplicate tab, okay.
And that's where the namespaces are.
Okay, here, cluster namespaces.
So there is this, all of these namespaces,
one, two, three, four, five, six, seven, eight namespaces.
And if I take a look at the pods,
each namespace has several pods, pods, sorry.
So what we did here was,
we enumerate all those pods,
or the one that we were able to see.
And you can see that we are in the namespace,
this container, but we are seeing stuff
from the dashboard namespace,
from the other namespaces, from the kube-system.
Actually, what we are seeing here,
this is super important to understand.
This is the service discovery
domain name standard.
Every cluster, any cluster, any Kubernetes cluster,
will always have svc.cluster.local
as its domain suffix.
And it will have the namespace,
and it will have the application,
or the service, or the pod that is running there.
In this case, it's showing us as well the IP address.
So whenever you see this,
you will be able to understand that
this specific IP address, it's at this namespace,
and it corresponds to this pods, or to this deployment.
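The naming convention, sketched: a service's in-cluster DNS name is always built as `<service>.<namespace>.svc.cluster.local`. Using a hypothetical service and namespace:

```shell
# build the in-cluster DNS name for a hypothetical service
service=cache-store
namespace=secure-middleware
echo "${service}.${namespace}.svc.cluster.local"
# → cache-store.secure-middleware.svc.cluster.local
```

This is why reverse-resolving the IPs found by a scan immediately tells an attacker (or an auditor) which namespace and workload each address belongs to.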
So we are seeing in the Kubernetes dashboard,
the default, these services,
which are at the default namespace.
These are from the kube-system namespace.
Let's see.
It shows all of those.
Yeah.
So for example, if we want to explore,
if we want to explore
a specific, for example, we will,
it's so common in Kubernetes to have MongoDB databases,
or Elasticsearch,
or MySQL, or Redis, or whatever.
So you can scan the whole network
looking for a specific port, for example.
With nmap, we can do that.
We're going to try zmap,
and we're going to look for the Redis port,
which is the example for this lab,
but we can look, can you see?
Ah, sorry, and this is the network, 10.244.0.0.
This is able to create a file, a CSV file,
with the information.
So let's run it.
Wait, we need to just,
the file, I'm not sure if there is a vim here, let's see.
If not, we just use nano, that's it.
Okay, there's vi.
The blacklist.
Yeah, I'm sure.
I think, perhaps it's because of this.
I just commented out the line in the blacklist file.
Wait, are you familiar with vi?
Okay, cool.
Yeah, okay.
We all went through that, trying to escape vi.
So, okay.
Okay, so this is scanning,
and the output file,
it shows us where that IP is.
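A hedged sketch of that scan (assuming the tool is zmap; note zmap blacklists private ranges by default, which is why the blacklist file had to be edited first):

```shell
# scan the pod network for the Redis port, saving results as CSV
zmap -p 6379 10.244.0.0/24 -o redis-hosts.csv

cat redis-hosts.csv     # one IP per line that answered on 6379
```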
So we can use nmap, if I remember correctly,
it's -p, the same port,
6379, and I don't remember the rest of the syntax.
This here is, I think it's the same.
It's the same.
Yeah.
Yeah.
Oh, we're good on time.
So, but this is the difference with zmap,
which is kind of cooler.
Yeah.
Easier to read, because it only shows you
which one is open.
Yeah.
But we can, for example, let's try with zmap,
with port 80.
Okay, and it shows that there is three open ports
in different hosts.
We can, I'm not sure if host is here, yeah,
host is here, so I can try to get the names of those IPs.
So I will understand what are they,
because, I mean, the naming is usually so descriptive.
Say you're deploying your application:
it has a frontend, backend, database.
So usually the database will be called Mongo,
or MySQL, or Postgres will be the name
for this deployment.
The backend usually is called backend,
or perhaps the name of the microservice,
if you have several microservices,
and it's descriptive because it shows what it does,
OCR or authorization, whatever.
So analyzing that, you will get more insights
about your system.
So.
And this is, I mean, obviously applying this
to our own systems actually is super useful,
because we act like an insider,
and you will be able to discover what is happening inside.
So let's see, in this case, this is a middleware,
this is a service called cache store,
so it has information.
As we were looking for Redis specifically,
we can, it can be a Mongo,
I don't think, I don't believe that there is a Mongo here,
but we are going just to connect to that Redis,
and it's there.
So if I want to see,
let's run KEYS.
So for example, this is, these are the keys saved on Redis,
and if you want to get that,
you will be able to have it.
So you can like,
explore and obtain the information
that you are looking for.
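The final step, sketched (service and key names are illustrative, matching the conventions of this lab; substitute whatever the scan actually found):

```shell
# connect to the discovered Redis service via its cluster DNS name
redis-cli -h cache-store.secure-middleware.svc.cluster.local

# then, at the redis prompt:
#   KEYS *              list every stored key
#   GET SECRETSTUFF     read a key's value (key name is hypothetical)
```

No password, no network policy: anything on the flat pod network can read the cache.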
So let's see if you can connect there too.
This is,
yeah, is there something like passwords?
Yeah, exactly.
So it may be some passwords or something like that.
So let's do our last lab of this kind.
We're going to do a role-based access control
misconfiguration.
So we're going to see what happens
when we don't have a properly configured
role-based access controls here.
So let me show you.
So the idea here is we're going to talk
with the API server using the REST API,
and, I mean, do what we have already done,
but checking how role-based access,
how excessively permissive permissions,
or misconfigured permissions,
allow us to gain access to more information
within our cluster.
So in order to do that,
we are going to this,
this scenario,
which is running in the port 1236.
So let's go to localhost,
1236.
So this is also super common because, I mean,
we tend to give users more privileges
than they really require.
So just because, I mean, I've been so granular
sometimes it's so cumbersome,
so you just give the root access
or the high level access.
So we're going to see
what do we have here.
So in this case, let's see the environment
with printenv,
in this deployment.
So printenv shows us
the variables defined.
Let's see if we see something interesting here.
Obviously this is running as root.
The idea here is to find a secret,
that is not properly,
I mean, that is accessible to everyone
because the role-based access control privileges are wrong.
So there is one of them,
let me show you here,
I'm going to jump.
Let me show you this.
Here.
So in Kubernetes,
there are no user accounts per se;
what we have is service accounts.
So these service accounts are used,
I would say, by everyone,
by every kind of object or resource.
But at the end, some of those objects are us, the humans.
Other objects are the pods, for example, or applications.
So a service account
is something that is authorized by the API server
when a request comes from a pod,
which uses it.
So each pod, when it tries to request something
or to report something via the kubelet,
needs to be authenticated,
and that is done with the service account.
So by default, every pod uses the default service account
of its namespace.
well, actually like two or three service accounts
for the whole cluster to handle the access
to, for example, the registry where the images are,
to the API server,
to any other service within the cluster.
So these service accounts usually are associated
with the cluster, with the namespace,
are tied with a role that gives like the scope of action
of that service account,
which is what I want to show you.
Usually the token for these service accounts
is stored at this location.
Yeah, at /var/run/secrets/kubernetes.io/serviceaccount,
or a similar location as well.
So what we are going to do here is let's find this.
So this is like a standard for every container.
So here, secrets.
VaronSecrets.
Service account.
And here it runs.
There are these service accounts.
This information here: there is the certificate,
the CA, and there is the token.
The token is like the password,
like the way to access that user.
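What gets mounted into every pod by default, sketched (this path is the Kubernetes standard; it only works from inside a pod):

```shell
ls /var/run/secrets/kubernetes.io/serviceaccount/
# ca.crt      the cluster CA certificate
# namespace   the pod's own namespace
# token       the service account's bearer token (the "password")

cat /var/run/secrets/kubernetes.io/serviceaccount/token
```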
So having this token give us power
or give us control over the API server.
If the token has enough privileges
or the role that it has associated with
has enough privileges, we can do anything.
I mean, wherever it's available.
So that is why it's important
like to have the less privileges possible.
So let's see.
So first let's-
But shouldn't we avoid
storing the token in that location?
No, no, because it's needed by the pod
to request anything from the kubelet or to report anything.
So what needs to be done is that
that token only gives access, for example,
to this namespace, to this pod,
to do, to read or to send this information
and nothing else.
So not access.
I mean, this token should not have the power
to create another deployment, just to give an example.
Okay, so this is the token.
That is there.
And let me check because what do we need here?
Let me go here again.
We need to be able to talk with the API server,
to communicate with it using curl.
The API server requires a token to work.
So what we need to know is: where is this API server?
We don't know.
We need an IP.
Usually the port is standard.
So we will need to find that.
So let's see.
So, okay, printing the environment.
It can be here, I guess.
KUBERNETES_SERVICE_PORT.
Okay, KUBERNETES_SERVICE_HOST.
Yeah.
This, that should be the API server.
We can check with curl.
https://, and this API address.
The port, it says where it is.
The port is 443.
Let's see if we can see.
Okay, it says that the certificate is not valid.
So we need the -k flag.
Okay.
We are not able to access the API server
without authentication.
It's forbidden.
So we need to authenticate.
So what we are going to do is to authenticate using curl.
So first we need the certificate.
Which is, which is the one?
Secrets, kubernetes.io/serviceaccount.
So there is this certificate.
This, in order to connect to the API server,
we need to send the token in the header.
Like, probably you're,
are you familiar with JWT tokens, for example?
Okay, so it's kind of similar to that.
So in this case, we are going to use
the authorization here.
We are going to send the token and see what it is.
However, let me save the token in a variable
because it's too big to paste here.
So, okay.
So let's export, okay.
One, two, three, four, five, six, seven, eight.
I don't have, let's see.
I'm going to come out.
I need, it doesn't let me,
the parent prefix.
Ha ha ha.
What else can I do?
Okay, I think this is all.
Okay, I have it here.
I can copy it from here, I guess.
So let me see this.
It's not typing.
You know what?
I guess that we can access the same container from the side
because I'm going to open another tab
and let's see.
kubectl get pods.
What's the name?
The hunger-check deployment.
Is that the one?
What's that?
Oh, that's, that's the check.
Yeah, and it's hunger-check.
Let's see.
I can see the IP address is here.
Ah, no, I know.
Perhaps it's in another, in a different namespace.
Oh yeah, it's here.
big-monolith.
So let's connect here because we need to be able to copy.
So this will be kubectl,
exec -it,
the name of the pod,
dash dash, and let's see, sh.
The name is going to change.
Yeah, okay.
Okay, I missed the,
where should I put the namespace?
-n big-monolith.
What is it?
Perhaps it's,
okay, yeah.
So,
okay, okay.
So we are here.
Let's confirm that we are here.
Yeah, R4, R4 is busy.
Okay.
Okay.
Let's see,
var, run.
Where is it, that one, secrets, secrets.
That one, secrets.
Okay.
Okay, yeah.
Now we can do it.
So, export TOKEN,
equals,
dollar sign,
parenthesis,
cat token.
Let's see with echo, dollar sign TOKEN,
it should show us, yeah, it's working.
Let me, okay, this one here.
So,
works.
Okay, and we need,
I need to remember the IP address for the,
for the service host.
So, okay, let's go.
curl,
dash dash,
cacert,
cacert,
and /var/run,
secrets,
secrets,
kubernetes.io/serviceaccount,
ca.crt, okay.
No, here though.
And,
this is kind of nightmare.
I need,
where is it?
Let's see.
There are some
keys that don't work.
I'm not sure why.
This,
is too fast.
Okay.
Let's see if it works.
Oh yeah, it worked.
So, the Authorization header,
Bearer,
dollar sign,
I forgot the name of this variable.
Okay.
Dash,
X,
GET,
and here we go,
https,
this is it.
And,
and this,
this is,
slash api.
Okay.
Okay.
Yeah.
Okay.
So, let's try again.
Oh, it worked.
This,
wrong, this,
yes.
So,
and slash.
This is like internal resolution of the server.
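Cleaned up, the whole sequence typed above amounts to this (standard env vars and mount paths present inside any pod; a sketch, runnable only in-cluster):

```shell
# the token and CA cert are mounted into every pod by default
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA/token")

# talk to the API server as this pod's service account
curl --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $TOKEN" \
     "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api"
```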
Yeah.
It's showing us, I mean,
we are just seeing here like this API,
and the clients, the server address,
but now we need to see what else we can do.
So, for example, I mean, the API is this,
what we want to explore is this.
So, I can do,
all the objects that are here,
we can try to see them,
API core,
service,
in fact, let me,
I mean, the most valuable thing,
or one of the most valuable things that I may find
in a Kubernetes cluster in the API are the secrets.
So, let's see, where are they?
Here.
Secrets.
So, it's api,
/api/v1,
kind Secret.
That's like the path.
So, let's try.
/api/v1/secrets.
It's forbidden.
So, it's forbidden.
So, the scope of this token
doesn't allow us to list this resource.
So, actually, when you create a service account,
you have like these verbs,
which is what we can do.
So, list, read, write, stuff like that is what you can do.
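A least-privilege sketch of those verbs: a Role that only lets its service account read pods in one namespace, and nothing else (all names here are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: big-monolith
rules:
- apiGroups: [""]             # "" = the core API group
  resources: ["pods"]
  verbs: ["get", "list"]      # no create, no delete, no secrets access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: big-monolith
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: app-sa                # hypothetical service account
  namespace: big-monolith
```

With this binding, the token we stole from the pod could enumerate pods but the `/secrets` request would be forbidden.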
Let's try another one.
So, again, I can go here.
Okay, let's try to find, for example,
the, where are they?
Okay, just resources.
The namespaces.
So, it's v1, namespaces.
So,
/api/v1,
namespaces.
Oh, it's forbidden as well.
So,
no, wait.
Okay, let's try this namespace.
Our namespace was,
which namespace?
It's here.
Where are the secrets per namespace?
It's written down here.
Okay, /api/v1/namespaces/{namespace}/secrets.
Probably, if we know the secret name, we'll be able to.
Yes, that's true.
We're going to write it.
Use big-monolith.
Big-monolith, and
the secrets are actually what we were talking about.
Okay, so the scope of this token
is confined to the namespace.
So, okay, let's see what
stuff we can see from this namespace,
and the secrets will be interesting to know.
So, here we find,
for example, this webhook API key.
So, we were able to get an API key.
And also, there's the vault API key.
Yeah, the vault API key, which is,
I mean, the common way to protect a secret
is to store it in a vault.
But if I am able to get the vault API key,
I have a problem there.
And let's see another one thing here.
The pods, for example.
So, it shows us the pods that we have.
Actually, we can prettify this with jq.
If you want to install,
ah, sorry, forget about it.
We are inside the container, I forgot.
So, there is, no.
Let's see what else we can see, services, yeah.
So, basically, we can do a lot of stuff
inside this namespace in this particular case.
Yeah.
Now, let's go back to the secrets and let's do this.
It turns out, are you familiar with this?
Like this.
Nice and simple.
Yeah, with this encoding.
So, this is,
this is base64 encoding,
which is the encoding standard
for secrets in Kubernetes.
So, that is not encryption, it's just an encoding.
So, it goes in both senses, yes, in both directions.
So, with this command, I'm able to decode it.
So we are able to get,
in this case, the key.
And there it is, the key,
and I can do the same with this one.
Although, I think it's the same.
Oh yeah, it's the same, I don't know.
Oh, it's different.
It's a different, yeah.
Yeah, this is, yeah.
So, there it goes.
So, we have the,
we have the what?
The secret decoded.
So, for example, if the secret were the key of a certificate,
I mean, the secret, you will have the key, the CRT.
So, you will have the certificates available
for you to work with.
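The decode step works anywhere; a quick local demo with an obviously fake key:

```shell
# Kubernetes stores Secret values base64-encoded, not encrypted;
# encoding and decoding are symmetric:
echo -n 's3cr3t-api-key' | base64          # prints czNjcjN0LWFwaS1rZXk=
echo -n 'czNjcjN0LWFwaS1rZXk=' | base64 -d # prints s3cr3t-api-key
```

This is exactly why reading a Secret object via the API is equivalent to reading the plaintext.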
Okay, so let's move on.
So, we've got the lab for the image.
We have the lab for the privileges.
We have the lab for the networking policies.
We just did the lab for the role-based access control.
So, let's do,
let's do, let's work with, let's start.
I'm going to start with the network policies.
And we are going to start building on top of that.
Let me look for that.
So, network policies.
So, the network policy.
Okay, well, let's start saying this.
We already established that the network
inside a Kubernetes cluster is flat
and that you can jump from one container,
which is in a specific namespace,
to another container, which is in another namespace.
I mean, at least you have connectivity.
So, you can; for example, what we did
with the database that was in another namespace,
reached from the namespace
where I was running my containers.
So, let's analyze that a little bit.
If I have a cluster, and in this cluster,
I have a development environment,
a production environment, a QA environment,
in the same cluster, in different namespaces,
and we are using the standard network for Kubernetes,
we will face the issue that, for example,
production gets hacked, gets compromised,
or development gets compromised.
Perhaps it's easier, or QA, it's easier,
it's more exposed or something, and from there,
they can jump to production or access production
or access data, which is in production.
So, that it's the worst case scenario.
Or within the same company, if it's a big company,
perhaps someone can access to a different namespace
with different information that you don't really want
for them to access.
So, we need to implement network policies.
That is the reason why I am starting here,
because it's the most effective way
to start securing your Kubernetes cluster.
This policy behaves like a firewall.
So, it's a firewall that blocks the communication
between the pods.
It can be applied to pods within the same namespace.
I can apply them using labels.
I can create ingress rules, egress rules,
and the traffic that goes to the pods,
or that exits the pods.
I can create a policy for a specific port,
but in order for this to work,
I will require to use our container network
interface plugin in order to enforce these policies.
By default, they don't do anything
if I'm not using a plugin.
So, let's, well, let's advance a little bit.
So, this is a repo.
Actually, I'm going to paste the repo link for you.
This has a lot of recipes for different network policies,
because, I mean, these policies vary a lot
depending on your application.
I mean, depending on the complexity of your application,
if you have an application with five, with 10,
with 100 microservices,
the simplest way to approach that,
which is the most common, is that within the namespace,
you don't apply any policy.
So, any pod, any service can communicate
with any other service, microservice, or container
within that namespace.
But perhaps that's not wise,
because it's possible that you have like your data backend,
and you have like some middleware,
and you have a frontend,
and if your frontend is able to connect
to the data backend directly,
I mean, there can be a data leak
if there is any vulnerability in your application.
So, and let's be clear about that.
Here with Kubernetes, you must assume
that your applications exposed to the internet
are vulnerable, and that they can be compromised.
So, you need to act as if that were a fact.
That's the mindset that you need to have, right?
So, in this example, or in this diagram,
what we see is that we have a frontend
with the application web,
and the external traffic is able to access here,
but is not able to access the other applications.
And the application is able to access
these other applications — these are the labels,
foo and bar — but they are not able to communicate
with each other. This is the idea.
So, there are some comments
that we are going to explore after that.
So, there are certain recommendations.
The idea is that you don't want to allow
all the traffic to come in;
you want to be selective about which traffic you receive.
You don't want to allow all traffic between the pods;
the idea is that you specify which flows
you want to see within your application.
You need to be careful here,
because if we block kube-system, for example,
from other namespaces,
we won't be able to reach the API server, for example,
and we won't be able to work —
we are going to break our cluster,
which is not what we want.
Another thing which is more advanced,
but it's important, is that you don't want to allow
your pods to access the internet.
And if you take a look, I mean,
why, I mean, if you design an application,
a service that is small enough to run in a container
which has only the basics,
does this application needs to access the internet
for anything?
It's like the question that you need to ask yourself.
Ninety-something percent of the time, the answer is no:
there is no need to access the internet at all.
So the idea is to block any outgoing traffic from your pods,
allowing only what is needed —
for example, egress traffic that goes to another pod,
to the backend, for example;
that is allowed.
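A minimal sketch of that idea — blocking all egress from the pods of a namespace. This is an illustrative manifest, not one of the lab files; in practice you would add back explicit egress rules for DNS and for the backends you actually need:

```yaml
# Hypothetical example: deny all outgoing traffic for every pod
# in this namespace. Listing Egress in policyTypes while giving
# no egress rules means nothing is allowed out.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: production
spec:
  podSelector: {}      # empty selector: applies to all pods
  policyTypes:
    - Egress
```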
So how does egress matter?
I mean, as long as we are blocking the ingress,
no one should be able to get
traffic inside.
Even if egress is open, how does that matter?
Yeah, because actually the lab that we're going to do
deals exactly with that situation.
I restrict the ingress, but if I still have egress,
I can, for example, download and install something on my container.
Yeah, so I mean, you restrict the ingress
and you only allow the frontend service
to be accessed by the users,
but the frontend has a vulnerability that gets exploited.
With this vulnerability, the attackers are able to make this service,
this container, download something — a payload —
and execute it, which is exactly the way
that a compromise is done:
exploit something and make it download a payload.
So this is the situation where you don't want
any egress traffic to be available.
So we are going to do that.
There are other approaches:
allow egress only within the cluster,
but not to the internet,
and for the specific pods that really need internet access,
allow them to reach it.
We can work with IP addresses, but they are hard to handle,
so it's better to use namespaces or labels
for this.
In the case of kube-system, it's dangerous.
I mean, you need to do it carefully,
because if you get it wrong, you can stop your cluster
from working, so it's important to take that into account.
Your ingress needs to be able to receive
external requests; pods must be able to reach the DNS
to resolve names for service discovery,
and the DNS itself needs to go outside
if it's forwarding requests.
The monitoring tools scrape metrics
from the workloads that you have,
which usually are in other namespaces,
so they will require that access.
If you're using operators, for backups or certificates
or something else, they will probably need access as well.
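For the DNS point, the usual companion rule is an egress exception for kube-dns. A sketch — assuming the standard `k8s-app: kube-dns` label used by most clusters; verify the labels in your own kube-system namespace:

```yaml
# Hypothetical example: once egress is denied by default, allow all
# pods in this namespace to reach CoreDNS/kube-dns on port 53,
# so service discovery keeps working.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}   # any namespace...
          podSelector:
            matchLabels:
              k8s-app: kube-dns   # ...but only the DNS pods
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```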
We're going to see this: usually you need to restart
the workload for the network policy to take effect.
We sometimes need to do some tricks
to make it work with labels,
because not all the resources, not all the objects,
are able to use them.
Handling the API server has its complexity,
and some plugins don't work well with the policies,
so you need to be careful.
There are several plugins; we are going to use Calico,
which is the one being used by Amazon,
Azure, Google — by every provider — so it's a safe choice.
There is another way, which is to use a service mesh,
like Istio, which filters at a higher layer.
But what we are going to do now is execute this lab.
For this lab, we're going to move to a different cluster;
we are going to leave the previous ones behind.
In this other cluster, we have kube-system,
we have an Ingress controller called Traefik —
it's similar to NGINX, for example;
it's an alternative —
and we are going to have two namespaces,
one called default, the other one called production.
default has a web console,
like the ones that we have been using,
so we can access remotely when we work,
and we're going to see that this namespace
is able to access production — this no-sql-client,
this MongoDB database — directly,
and then we're going to start applying the ingress policies
and see how they affect it.
This other namespace, called production,
has a frontend called no-sql-client,
which is a web frontend for Mongo —
it's kind of nice —
and it has a MongoDB instance as well.
And there is some other stuff around this,
which is the monitoring: Prometheus, specifically,
is scraping these deployments
to gather information, and it can graph it
and show us the information,
so we will check that as well.
So let's begin, let me check here, one second.
Yeah, so let's move here.
I'm just going to open a new tab.
Let's go to the netpol directory,
here in k8s-security/labs/netpol.
Here we have a folder with the network policies
that we are going to implement.
And let me check one thing:
do you want to go for a break?
You're fine.
You're fine, okay, so we can finish earlier then.
We are going to do this now and that's it.
Okay, ready, so,
Right, okay, so these are the policies
that we are going to implement.
But before doing that:
in the cluster that we have,
we are using the default network,
so it's open, I mean,
and it doesn't have the plugin that enforces the policies,
so we're going to check that.
So let's move to the other cluster with kubectx:
execute kubectx, and let's move to netpol —
kubectx netpol.
Okay, so we are in this cluster.
Let's first check with kubectl,
listing across all namespaces.
Oh, it already shows
the plugin —
I had installed it, okay.
So this is the cluster. If you take a look,
it's kind of big.
I mean, we can see the nodes —
it has the nodes,
the worker and the control plane —
and it has, let's see,
let's see the ingresses,
across all namespaces.
Oh, it doesn't have Traefik installed yet.
Let me check one second, please.
Okay, so we need to execute a script that we are missing.
So let's go to this directory —
let's make sure —
yeah, okay —
and we need to go to the network policies directory,
and there is this apply script.
What this script is going to do
is to deploy all of this, plus Prometheus
and the monitoring stuff, Traefik,
the web console — all of this,
which is not installed yet.
So yes, we are going to say yes.
Hmm, no, this is taking too long;
let's see what is happening.
It seems like I'm seeing an error here.
Let's see.
Let me do this.
There is some issue with the network —
oh, I didn't read that:
it's running the same thing over and over again.
Yeah, but it's not — let me see,
because perhaps I did miss
the network installation.
Ah, finally. I'm not sure if it was because of jq.
Did you Ctrl-C the installation, or...?
Yeah, we ran the apply again;
it was because of jq.
Okay, so it's good.
Okay, so let's see what we have.
In Firefox, you should have a bookmark
for the net cluster dashboard.
We will need to get the secret —
execute this command —
so you can have your token,
okay,
so you can log in here.
Oh, it's not working.
Let's create a new one.
You know what —
I'm missing the kubectl proxy.
Come on.
Ah, it's still running from the previous one.
Yeah, yeah, yeah, so we need to stop it
and run it again,
and reload this, because it was pointing
to the other cluster.
Okay, so,
that's okay.
Okay, so we have the web console.
We're going to see it —
we're going to view the services
and deployments.
Okay, here we're seeing the deployments:
there is Traefik
and its dashboard,
which is the dashboard for the Ingress controller,
the web console,
the Prometheus server,
the no-sql-client.
These are the deployments —
these are the deployments,
and the services,
and everything.
So,
what we are going to do is the following.
The Ingress controller,
Traefik, is giving us access
to each one of those services,
by their names.
So,
we are going to try.
Let me check here.
First, let's open the Traefik dashboard:
so it's
http://traefik.
Okay, so,
it asks for user and password.
It's admin,
12345.
Sorry, what is that?
12345 —
one, two, three, four, five.
Okay, okay.
Hang on, hang on —
just give me one sec,
I'll get some. Yeah, sure.
So,
that,
so,
Sorry, what did I need to do?
Go to Firefox,
and open http://traefik —
no,
no, no, no,
don't use the IP address,
because
the Ingress controller is handling the names,
redirecting by name.
Oh, okay.
Should I type
traefik-dashboard or just traefik?
Just traefik —
http://traefik?
Yeah.
I don't like that;
I think it's just too bad.
Yeah, because it goes to Google if you don't type the http://.
Okay.
Okay, so,
what we are seeing here
is the
dashboard for Traefik.
It shows us the frontends,
which are the ones accessible from the internet —
so Prometheus,
the Traefik dashboard itself,
the web console,
we're going to go there —
and the backends,
which are not accessible from outside:
the no-sql-client —
well,
there is a frontend for no-sql-client —
Prometheus,
Traefik,
web console;
those are the backends.
It shows us the health of the
ingress controller,
the errors,
the amount of errors,
all that.
It's super,
super basic.
So, let's open Prometheus now:
http://prometheus.
admin, 12345.
Now let's open the no-sql-client.
admin, 12345 —
one, two,
three, four, five.
And the web console will be the other one.
But just to remember:
we are here outside,
we have Traefik,
which is the ingress controller,
and if you take a look at the
/etc/hosts file,
it's all the same IP address, right?
Yeah.
So this is the
ingress IP address,
which is handled by Traefik,
and when the request says web console,
it sends the traffic to the namespace
and pod with the web console application,
and if it says no-sql-client,
it shows this.
And that's it.
And that's it.
So,
where do we see that these are all
different namespaces?
Yeah.
So, if we take a look in Kubernetes,
we are going to see that,
for example, Prometheus
is in the monitoring namespace,
Mongo and no-sql-client are in the production
namespace,
web console is in default,
and Traefik is in kube-system, yeah?
So this web console —
we can log in there
with admin, 12345.
So,
this one doesn't have ping either,
but it has curl.
So,
we're going to do —
so, this again:
we're here,
in the namespace default.
Yeah?
Yeah.
And this is the no-sql-client,
which is in the other namespace —
it's this one.
The no-sql-client can connect to MongoDB —
which is,
well, its data backend.
So we're going to connect here,
just to check.
It shows us the database, MongoDB,
and we can connect there.
So, it shows us the information,
and if we want to see the collections of the databases,
we can check what is in here.
So we execute,
and it shows us the array,
and we can explore it —
it's hard to read —
these are the Star Wars characters, right?
So, and this is Prometheus,
and let's —
let's get some information here.
Let's ask Prometheus to show us the MongoDB connections.
So if we execute that,
it will show us the information
that it has gathered
from Mongo,
and it can graph that.
And I mean,
there is a lot of information here,
a lot of queries that we can execute,
yeah,
for everything.
I'm sure there is something for the web console
also.
For our purposes,
what we're going to play with
is MongoDB.
Okay.
Okay, so what we're seeing here is that,
at least until this moment,
the no-sql-client
is able to see this one,
and this one is able to see everyone, yep?
So let's try the following.
From the web console,
I'm going to
access the other services.
So if you take a look
at our installation —
it seems that there is so much
noise — I think that we need that.
Yeah.
Let me show you something.
We can see that here.
So let's go to the dashboard,
and —
let me see.
Oh yeah.
So this is the Prometheus server,
and we're looking for the hostname.
I want to show you,
but I can't see it.
Well, we will need to build the name.
This is in our base dashboard —
I mean, it's not that good.
So anyway,
with curl, which is here,
we can access
the Prometheus server:
the Prometheus service name,
the namespace, monitoring,
and the rest is the standard suffix —
svc.cluster.local.
So if I execute the curl —
well, it answers, so we have connectivity.
So the idea is that we are testing the connection
between
the namespace default
and our friends in the namespace monitoring.
Now let's try with the
kube-system namespace.
So again, with curl.
Let me see here which one it is.
So in deployments,
there is —
I didn't see it.
Okay, it's here.
This is the Traefik metrics service,
which is the service exposed
so that Prometheus can scrape the metrics
it needs to grab to monitor all the stuff.
So this is Traefik, and kube-system is
the namespace.
So: traefik,
dot,
kube-system,
dot svc,
dot cluster,
dot local,
and the port here is 9100.
And let's check here.
You can see here.
Okay.
Let's see it like this.
There is nothing at the root path,
so we go to /metrics,
and these are
all the metrics that Prometheus scrapes
and saves.
Actually, we can take a look here —
traefik —
it shows us these metrics;
we can see how
it works.
So we have connected
from the web console
to Traefik and to monitoring.
We are missing production,
so let's try to connect to production.
In order to do that,
we are going to connect to the database directly.
So we are going to rewind here.
In order to do that,
we are going to install some software:
the Mongo client.
So I'm going to paste the
commands here for you.
With Ctrl+B,
you are able to
paste into the console.
Yeah.
Perfect.
So,
it's —
it's very good.
With Ctrl+B.
Okay.
Perfect.
It works for you too.
And now we are going to
execute this:
we move
the binary
to /tmp,
and now we're going to execute this
Mongo client
to access the database.
Okay.
Okay.
And it works because we are seeing the R2D2
and all the,
all the objects,
the JSON objects that are stored there.
Okay.
Okay. Perfect.
So,
okay.
So now we have established
that we do have connectivity with all the
namespaces.
So we are ready to create our first policy.
So let's go here.
So,
let's go again to the labs,
because we are going to modify the files.
So the best thing that we can do is
copy the network policies
from our labs directory.
So: k8s-security,
labs,
and network-policies.
Let me copy this for you.
Now go inside that directory,
and open Visual Studio Code:
code, space, dot.
Okay.
We are in the labs, yes.
Perfect.
Okay. Awesome.
So let's open the file
ingress-production-deny-all.
So this is our policy —
super simple.
It's —
okay.
The apiVersion is networking.k8s.io/v1,
the kind of object is NetworkPolicy.
The metadata gives the name
and the namespace
to which this policy is applied.
The specification is this one:
podSelector and ingress.
So what this
means is that it will choose any pod
that matches this
filter, this expression,
and as the expression is blank, is empty,
it will match everything.
Yeah.
And ingress —
these will be the rules that allow
traffic to reach the pods selected by this
podSelector.
And these rules are empty,
so there is no rule
that allows the traffic to enter.
Yeah.
So this policy, what it does in the end,
is to block the traffic
that comes into the production namespace.
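Based on the description above, the file should look roughly like this (a reconstruction of the lab manifest, so the field names are standard but the exact metadata may differ):

```yaml
# Deny all incoming traffic to every pod in the production namespace:
# an empty podSelector matches all pods, and an empty ingress rule
# list allows nothing in.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-production-deny-all
  namespace: production
spec:
  podSelector: {}
  ingress: []
```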
So, inside Visual Studio Code,
go to Terminal and open a new terminal.
We're going to do it from there —
we're going to apply that policy from there —
so it's
easier to see what we are doing.
So: kubectl apply.
Oh, before applying,
let's execute kubectl get netpol
with --all-namespaces.
But there are no network policies yet.
There are no network policies yet.
And now we are going to apply this policy:
kubectl apply,
dash f,
the 1-ingress-production
netpol file.
And let's see, it should show up.
Yeah, it's there.
Mm-hmm.
It doesn't have out.
It's one dash in this production.
No, I wasn't in the correct one.
Ah, yeah, okay.
That is right.
Actually, yeah.
Awesome.
And run again the get netpol.
And it should be there.
Let's go to the dashboard and see.
It's there.
So in the dashboard,
at the end, you will see network policies in cluster.
Oh, I lost the pink one.
Yeah, so network policies.
Go to clusters, network policies.
Yeah.
And go inside,
and it doesn't show much.
You can just see that.
And let's try.
Let's try to see.
Let's go to Traefik.
Let's reload.
It seems the same as before,
from here at least.
Let's go to no-sql-client,
and check if you can execute query.
So it kind of gets stuck.
So let's reload no-sql-client.
And it shows that it's not connected.
Reload the webpage.
Okay.
Try to connect again.
Yeah.
So it gets stuck,
and it shows a gateway timeout.
Yeah.
Click on the connect button at the top right.
Okay, yeah, we got it.
And if we go to Prometheus,
and try to execute the Mongo connections,
let's see.
Well, this is the thing with Prometheus:
it needs to be restarted, I think,
for this to take effect.
Let's try again.
I mean, Prometheus
is not respecting the policy yet,
so we can restart it,
just to force it
to apply and respect the policy.
There are other pods, so let me restart it.
Let's see if it applies.
It takes a while sometimes.
Okay.
And in the meantime,
let's go to our web console.
And let's try to connect again to the database.
We have the commands in the history, so
we just need to use them.
Okay.
So it should show a connection failure.
Yeah, should we try it on this again?
Yeah, we can try.
No, not yet.
All right, it keeps going.
Yeah, it's still replying;
we may have to restart the other containers.
But the one that I want you to check
is from the web console:
try to execute the query to the MongoDB.
Yeah, I tried that.
Okay, perfect.
And let's try the curl — MongoDB is in production,
and the policy that we applied was only to production.
So let's try with curl.
The Prometheus one, the Traefik one from before — those should work,
as well as the others.
And we can execute another curl,
to the no-sql-client as well.
From here, sir.
Yeah.
You can add /metrics to that curl.
Yeah.
So it shows you more information.
And let's try to access
the no-sql-client.
Yeah, it should be —
it should be something like no-sql-client.production.svc.cluster.local.
We didn't do this before, sir.
Yeah.
So.
So this is restricting everything,
and now we can be less restrictive about
which pods can be reached?
Yeah, exactly.
Well, we are not going to finish
all the policies today, but we are going to add the important one.
Because right now, if you notice,
I mean, our application is not working,
and the idea is that we want
the application to be accessible.
Correct.
So actually, let's do that.
Let's open the ingress-production-allow-traefik-nosql-client policy.
Because what is happening?
We created a policy that blocks
all incoming traffic to production,
so Traefik is not able to reach the no-sql-client,
and the web console is not able to access any of those.
And this is okay, actually:
we don't want the web console to access here.
But we do want Traefik to have access.
Actually, we don't want Traefik to access MongoDB;
we want Traefik to access the no-sql-client, specifically,
which is the frontend, right?
Yeah.
So let's do that.
So the second policy, ingress-production-allow-traefik-nosql-client —
let's take a look.
So: the name, the namespace, and here it comes,
the specification.
podSelector, matchLabels, no-sql-client.
And if we take a look here at the dashboard,
it's the pod's label that this matches, right?
So: podSelector, matchLabels.
This is our pod, no-sql-client,
and you can see the labels — it's here.
The label is run: no-sql-client.
Okay.
Yeah, so this is the way I match that pod.
Right?
Now, for ingress — what you just asked —
which ports am I going to allow access to?
We want port 3000, because if we take a look
here in services, at the no-sql-client —
let's see —
it's here: it's port 3000.
Okay.
And when we don't specify the ports in the ingress rule,
it will allow all the ports — any port.
So wasn't that port 80?
Or was that still 3000?
No, actually no, because what we are accessing from outside
is the Traefik Ingress controller,
okay,
and Traefik is redirecting that to port 3000.
Got it.
Yeah, yeah, yeah, okay.
Understood.
And what else?
I think that's it.
And.
And from — okay, so now we are specifying
where the traffic is coming from.
Because, I mean, we want production
to be able to receive incoming traffic,
but we only want that incoming traffic
to come from this application, Traefik,
which is in the kube-system namespace.
So we say here: from, namespaceSelector,
matchLabels, the kube-system namespace label.
So if we go here in the dashboard to namespaces,
we see here the label for the kube-system namespace, right?
So just imagine a namespace
with a label development or QA or production or whatever.
The labels are like a way to manage
all these kinds of policies and rules.
And once we select the namespace,
we select which pod we want —
in this case it's Traefik: app, traefik.
So we go there — app: traefik.
I think it's somewhere around here.
It's here: traefik, app: traefik, right?
Yeah.
And that's it.
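Putting the pieces just described together, the policy should look roughly like this (a reconstruction from the walkthrough — the exact label keys, such as the kube-system namespace label, may differ in the lab files):

```yaml
# Allow ingress to the no-sql-client pods in production, on port 3000,
# but only from the Traefik pods running in kube-system.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-production-allow-traefik-nosql-client
  namespace: production
spec:
  podSelector:
    matchLabels:
      run: no-sql-client
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system  # assumed label
          podSelector:
            matchLabels:
              app: traefik
      ports:
        - protocol: TCP
          port: 3000
```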
That's it.
So let's apply this.
This is once we get the port we can check,
we see that it's one.
And let's try.
Let's see if this works for no SQL.
So will this override the denial that we had applied or?
It will override, but for the specific traffic
that comes from key system traffic
and wants to access production no SQL client.
In fact, if you reload, you will see that we can see again
the front end.
Apply the, apply the.
Now let's try to connect to MongoDB.
It's still not working, right?
Because, I mean, we still have —
we still have the policy that denies
all incoming traffic to production,
and we haven't yet allowed the no-sql-client,
the frontend, to reach the backend.
Let's try it from the web console.
What do you think should happen here from the web console?
It should still have no connection.
That is right, because we didn't create any —
any rule changing that.
And this is actually something that we may want.
So if you try to execute the MongoDB client,
it will fail.
The no-sql-client will fail as well;
it will time out.
So now we're going to apply —
I mean, we need our application to work, right?
At least the basics.
So we're going to apply a policy that permits the traffic
from the no-sql-client to Mongo.
And this is the policy.
So again, the namespace will be production.
It will apply to MongoDB — to the podSelector matching MongoDB.
It will use the default MongoDB port,
and the originating traffic will come from the podSelector
no-sql-client.
So this rule — I mean,
we are creating a policy for the namespace production,
for the pod MongoDB, allowing ingress traffic
directed to the MongoDB port,
originating from the pods selected by no-sql-client.
That is what this policy says, right?
So let's apply that:
kubectl apply, dash f,
with this manifest.
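Based on that description, the manifest should look roughly like this (a reconstruction — the MongoDB pod label and the standard Mongo port 27017 are assumptions, not taken from the lab files):

```yaml
# Allow the no-sql-client pods to reach MongoDB in production
# on the default Mongo port; everything else stays blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-production-allow-nosqlclient-mongodb
  namespace: production
spec:
  podSelector:
    matchLabels:
      run: mongodb          # assumed label on the MongoDB pod
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: no-sql-client
      ports:
        - protocol: TCP
          port: 27017       # default MongoDB port
```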
And if we go and check here,
if we try to connect again, it should work.
It worked.
Yeah.
Oh, that's instant.
Yeah, amazing, right?
Okay, so let's wrap up for today.
Tomorrow we will finish these
network policies
and move on to the security contexts,
to work with those, and with role-based access,
and the tools and all the other labs.
So do you have any question regarding
what we have discussed today?
And anything that you feel that needs to improve
or change or is the pace right?
No, the pace is right.
Yeah, I think it's just: will these notes
be available somewhere towards the end of the session,
if I want to go back and check them out?
Well, actually in the repo,
it's everything that we are doing.
So the idea is that you can clone that
and the PDF is there, the presentation as well.
And actually, now that you say that,
I will suggest you do this —
what we are going to do here is
save the history of everything that we have done so far,
so you can copy it afterwards and take it with you.
So let's exit, close all the consoles.
And in here, exit.
Because there is one bad thing: the servers
are automatically powered off later at night.
Okay.
So that affects us.
So once you have exited all the terminals,
open up the console again
and run history.
You can save your history
wherever you want to save it.
Tomorrow we can take a...
Actually, I'm going to save it in a zip.
It saves the dates, so it works that way as well.
Okay.
Perfect.
So, Kiriti, have a good evening.
Enjoy your Australian weather.
And I'll see you tomorrow.
All right, thank you.
Thank you, have a great one.
You too, goodbye, have a good night.
Bye-bye.
on 2022-11-21
language: EN
And that's why my desktop shows four connected users.
Like what?
Okay, so let's say...
Okay, cool.
Yeah, and okay, so let's do a brief recap.
So we can align ourselves and...
Oh, let me start the recording.
So we can align ourselves and get the most out of this.
Okay, so what we did yesterday.
Yesterday, we explored like four different common scenarios in which Kubernetes is hacked by malicious actors.
So these four examples, which are not the only ones, there are a lot of other ones.
The first one was related to letting keys or sensitive information lying around in the image or in my application.
And the implications that they may have, right?
So the idea here was to have like tools.
First of all, being aware of that issue, in the sense that sometimes, actually a lot of times, it goes unnoticed.
So people are not really aware of that.
Developers are not really aware of that.
So this is the first thing.
And second, to see a few tools, one or two, that may help analyze this and find this kind of situation.
Check the context.
We need to be in the netpol context, which is the cluster that we were working on yesterday.
Perfect.
And let's do some checks, some basic checks.
Let's get all, dash dash all-namespaces.
Everything should be running and there should be no CrashLoopBackOffs, which are fairly common.
And it's a good idea to get some events.
To see how it is working.
And actually, this is the first thing I see.
I'm sorry.
There is an out of memory error.
I'm not sure.
This is some Node.js application, but I'm not sure which one would be.
I think that...
Okay, cool.
That's good.
Yeah, because, I mean, there is no Node.js in Kubernetes by default.
But it's showing up here.
But if you don't have it, it's okay.
Cool.
This event, it's important because it shows you the history of what has happened.
However, it tends to mix the information, so you need to look carefully for the most recent thing,
because the last one listed is not necessarily the most recent one.
It may happen that something from a time window of a few seconds ago is at the beginning.
So, yeah.
Okay, so everything is cool.
Let's see our policies.
Across all namespaces.
So, the ones that we have created.
And let's take a look on them.
Yeah, so there should be three of those.
And we can, let's describe those.
Let's see.
This is netpol slash...
in the namespace production.
This is one way to see the information.
But we can also execute — not describe, but get.
Get the object,
and request the output in YAML,
to see the YAML as well.
When you want to do a backup of your system, what you do is export every object in YAML format.
So, that's a way to have a backup of your deployments and everything.
And, okay, so that's okay.
Okay, so let me...
We were...
We had these policies in this directory, if I recall.
Well, yeah, network policies.
So, let's open the Visual Studio Code, because it's easier to see them here.
And...
Okay, production.
Okay, so yeah, it's the network policy, the API version, the name, the namespace.
And here is where we provide the rules.
So, let's open our website.
So, we have Traefik.
Traefik.
And the user is admin, 1, 2, 3, 4, 5.
Let's see.
We had Prometheus.
Prometheus.
Prometheus.
Again, admin, 1, 2, 3, 4, 5.
We have the web console.
Web console.
And we can log in.
Probably our...
Our... Oh, wow.
Why did it save the history?
Oh, that's good.
Yeah, because, I mean, there is a concept which is kind of important in Kubernetes.
I'm not sure if you're familiar with it, which is the stateful versus the stateless state of the containers, of the pods.
When it... Usually, the containers are stateless.
Any information... I mean, when you restart a container, any information is lost because it doesn't get saved on the container at all.
A stateful container, it saves the information.
So, in order to do that, it's required that the container has a volume assigned in which to save the information.
So, yeah, usually, I mean, the idea here is that you have...
I mean, you have like a storage provider and you have your containers in the whole cluster,
but each container can start in any node, not in a specific node.
So, this storage provider, it should be able to provide the same volume to a stateful container,
no matter which node is running that container.
The storage is external; the container images, on the other hand, are stored locally at each one of the nodes.
Each one of the nodes has a local image cache with all the —
well, with all the images that your system is running.
So, that's important to have in mind.
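As a sketch of how a stateful pod gets its volume — illustrative names; the actual storage class and provisioner depend on the cluster's provider:

```yaml
# Hypothetical example: a PersistentVolumeClaim plus a pod that
# mounts it. The storage provider must be able to attach this
# volume on whichever node the pod is scheduled to.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mongo
spec:
  containers:
    - name: mongo
      image: mongo:6
      volumeMounts:
        - name: data
          mountPath: /data/db   # Mongo's data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mongo-data
```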
And let's... Okay, I wanted to open the dashboard.
Let me split this in two.
I'm going to actually open a new one here.
Okay, yeah. So, kubectl proxy.
So, we can open up the dashboard.
This is going to be useful.
Dashboard. So, it's our one here.
And we will need the token.
So, the token here.
In your history with control-r, you type token and you will get the command from the history.
And copy here.
Okay, cool. Yeah.
So, what we are going to take a look at is our pods.
Oh, I was curious about this pod.
Okay, so let's leave it like that.
And now let's open the no-sql-client.
Okay, so we have three policies applied.
Let me look at them here.
Which is the deny-all, the one that allows traffic to nosql-client,
and the one that allows traffic from nosql-client to MongoDB.
Right, like a star.
So, this is where we can connect here.
We can connect here.
And if I try from web-console, which is in the namespace
default, we are not able to see that information.
So, if I try to run.
Okay, well, it saved the history, but not the other path.
This is fun.
There it is.
Oh, I'm not sure if this has permission.
Okay, yeah.
Oh yeah, we haven't blocked the egress traffic, which is something that we are going to do.
And this one.
Now we have the MongoDB, and we can try to execute this.
And it will fail, because there is no connection there.
Yeah, yeah, please.
Okay, yeah.
And we can try the...
The curl to Prometheus.
Yeah, it doesn't connect.
Okay, perfect.
So, those are the rules.
So, the first one, the deny-all here.
The second one, if you have them open in your code, in Visual Studio Code.
Applies to production.
Selects on the label nosql-client.
Defines the port.
And the origin of the traffic: from a namespace and a specific application.
So, you may have multiple applications in that namespace, but it only allows from the Traefik application.
And from...
And from nosql-client to MongoDB, the idea here is to create an ingress policy,
an ingress policy at the namespace production, which applies to MongoDB,
allowing ingress traffic at the MongoDB port, with the origin being
a pod selector matching the label run: nosql-client.
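A sketch of that policy; the `app: mongodb` label on the target pod and the default MongoDB port 27017 are assumptions, since the session does not show the exact manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nosqlclient-to-mongodb
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: mongodb          # assumed label on the MongoDB pod
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: nosql-client # label mentioned in the session
    ports:
    - protocol: TCP
      port: 27017           # MongoDB's default port (assumption)
```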
Yeah, so that is...
But I'm sorry, I didn't quite understand the question.
I mean, the...
Yeah, okay.
Yeah, it's like...
I mean, because you apply a policy, you apply a network policy to...
I mean, the policy is for the object which you want to control.
So, I want to receive traffic or allow traffic going out for this pod, which is in production.
And I'm telling him that it comes from here.
Actually, with this, what we are doing here is that for the namespace production,
we are denying everything, everything by default.
And let me put this in order.
So, everything is denied.
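The deny-everything idea can be sketched like this (the policy name is hypothetical); the empty podSelector is what makes it apply to every pod in the namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}            # empty selector: every pod in the namespace
  policyTypes: ["Ingress"]
  # no ingress section: no traffic is allowed in
```

Once a pod is selected by any policy, only traffic explicitly allowed by some policy gets through, which is why the later allow rules are needed.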
Then for the same namespace, I'm telling with the policy:
okay, if we are talking about nosql-client,
we are going to allow it to receive traffic from kube-system on this port
when it comes from this application.
Actually, we are going to do an exercise at the end,
in which we are going to create from scratch,
or you are going to create from scratch a policy
to see if we got the concept right.
And then the third one is going further down: in the same namespace,
when the pod matches this name, MongoDB,
we are allowing the traffic to enter if it's directed to the port,
and if it comes from nosql-client.
So, notice that the object is production, the namespace.
Okay, so let's execute the four policies.
So, in this case, let's have a moment to see what this does.
What do you think that this policy does?
If you take a look.
Exactly, exactly.
Exactly, because we are, again, we are applying this rule to the namespace production.
We are selecting within that namespace the pod to which we want to apply this policy.
And the rule that we are creating is an ingress rule,
so it will accept traffic related to metrics from the namespace monitoring
and originated from the Prometheus server.
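A hedged sketch of such a rule; the target `app: web` label, the `kubernetes.io/metadata.name` namespace label, the Prometheus server label, and port 9090 are all assumptions for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-metrics
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web              # hypothetical pod being scraped
  policyTypes: ["Ingress"]
  ingress:
  - from:
    # namespaceSelector AND podSelector in the same entry:
    # only the Prometheus server pod in the monitoring namespace
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app.kubernetes.io/component: server
    ports:
    - protocol: TCP
      port: 9090            # assumed metrics port
```

Note that putting both selectors in one `from` entry means AND; listing them as separate entries would mean OR.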
And let's take a look here because this is important.
I mean, there is certain limitations on the way that you can
individualize an object for these network policies.
So, you need to, let's, in monitoring,
where is monitoring?
Here.
So, we need to identify the labels that we want to use.
And if you notice, there are a lot of pods in this namespace.
So, which one is it?
The one that you want to work with.
So, in this case, it is the component server, this one.
Yeah, so.
Because you cannot refer, you cannot refer by the name of the pod.
Do you know why?
Can you guess why we cannot refer to the object?
Exactly, exactly, because whenever this dies,
it will start again, but with a different name.
Yeah, so the only option here is the labels.
The labels is the way to go here, to uniquely identify an object.
So, yeah.
So, Mongo, ta-ta-ta.
And before applying this, let's try again here.
Let's MongoDB connections.
Remember that yesterday it was still working.
I mean, those are, like, the caveats that policies have.
So, in this case, you need to check if it's applying or not.
If it doesn't apply, you need to restart the deployment or the pod or the object.
So, in this case, because we restart the whole cluster from yesterday to today,
it's not working.
Effectively, it's not working.
So, we can try from, for example, bring some data from other place,
and it shows data.
So, for me, it's working.
But it just doesn't have connection with one.
Uh-huh.
So, okay, let's add that.
And production.
And there it goes.
It's there.
This one.
And let's see if it applies instantly or if we are going to have any issues.
It worked.
So, okay, we don't have to fight with this thing anymore.
And, okay, let's keep going.
Now, this one is different.
This one is different.
But before, let me analyze, let me explain something.
So, here we have, like, this picture: our ingress controller, the default namespace,
the production namespace.
However, and let me see, I think that we discussed this here.
But now the thing is that our cluster has this namespace called kube-system.
Kube-system, for example, is where the API server resides.
Yeah?
It's where etcd resides.
It's where kube-proxy resides, everything; the DNS resides there.
So, if a malicious actor gains access to one of these systems, they can compromise the whole cluster.
Okay.
So, this is our kube-system ingress policy.
So, here is the policy.
So, the first thing that it does, actually, let me point to something in this .yaml file.
This one only has a kind object, an API version.
I mean, this is one object.
There is nothing else here.
However, if you take a look in the file, you will notice that there is this, like, separator.
And there is one object, kind NetworkPolicy, and there is another object, kind NetworkPolicy.
So, this .yaml file is doing, like, first of all, denying, then allowing with the rules.
So, this is like a complete firewall ruleset, something like that, in the system.
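For example, a single file can carry several objects separated by `---`; both policies below are simplified placeholders just to show the structure:

```yaml
# First object: deny everything in kube-system
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: kube-system
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
# Second object in the same file: an allow rule layered on top
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web
  namespace: kube-system
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
  - ports:
    - protocol: TCP
      port: 80
```

`kubectl apply -f` on such a file creates all the objects in order.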
So, the first object, the network policy, chooses...
Oh, sorry, sorry, sorry.
Okay, thank you.
It's denying the kube-system namespace complete access.
I mean, all pods; no pod has any access.
Yeah?
After that, it allows the DNS because, as I told you, it serves service discovery.
So, we can do that exercise, I mean, to just deny this and see what happens, and then try to recover.
But I'm going to leave that to your imagination, but just to give an example.
Here, let me show you.
This frontend, the configuration that it has, it's mongodb.
It doesn't use any IP or anything.
It uses mongodb because mongodb is the name of the service, well, of the deployment.
The deployment, this deployment has all the stuff.
Where is it?
Oh, it depends how it's created.
MongoDB, this is mongodb-0, but the network name for it should be mongodb.
Okay, here, let me show you.
Yeah, here, here, here.
This is MongoDB.
So, this mongodb is like the name of the...
This is the FQDN for mongodb.
It's the name used by the DNS resolver.
So, let me see if we have here...
Well, actually, we have this.
I mean, I noticed the container has this domain by default.
As we are in a different namespace, our default is...
default.svc.cluster.local.
So, this is why we are using the whole name here.
So, that will be important to keep in mind.
And so, this rule allows the DNS.
So, it selects in the namespace kube-system the kube-dns application, the UDP port, and, well, the TCP port as well.
Actually, yeah, I think that there was some change there because, by default, DNS is UDP.
But TCP is used as well, for example for large responses.
And this is saying that this rule applies for any namespace.
I mean, because it's left blank.
So, any namespace is allowed to access kube-system, to query kube-dns, as long as the port that the request is destined to is 53.
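A sketch of that DNS rule, assuming the standard `k8s-app: kube-dns` label that CoreDNS/kube-dns pods usually carry:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kube-dns       # standard label on the DNS pods
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - namespaceSelector: {}   # empty selector: any namespace may query DNS
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```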
So, this is the second object there.
Then there is this third object, an additional object, which is the Traefik ingress rule.
This rule, it's allowing the access to the...
Okay, let me show you here.
We have Traefik in the kube-system namespace.
So, the first policy that we are applying is deny all.
So, if you deny everything, it won't be possible to access traffic, right?
You will kill the remote access.
So, this is why we have this rule, because we are saying here that in kube-system, when we are talking about Traefik, when the ports are 80 and 443, we are allowing from anything.
So, this helps and shows us what we are talking about.
This is, for example, with from, where you can use an ipBlock or a namespaceSelector or a podSelector.
Those are, like, the different objects that you can use in this from.
When we leave this blank with nothing, we are allowing all these kinds of objects to use this rule.
So, that's it.
So, this way, I mean, if we restrict this to a namespace or a pod, we will kill the external access from the Internet.
And Traefik is our front end to the Internet.
So, that's why we are doing this.
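A hedged sketch of that rule; the Traefik label is an assumption, and the omitted `from` is what allows traffic from anywhere, including outside the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traefik-web
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: traefik   # assumed label on the Traefik pods
  policyTypes: ["Ingress"]
  ingress:
  - ports:          # no "from" clause: source is unrestricted
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
```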
Yeah, it's the...
But, well, that is the syntax because...
Yeah, yeah, yeah.
No, it's okay.
It's okay because this appears when you have an array of objects that you can add here, of different objects.
For example, podSelector matchLabels versus ingress; this dash is here because this is like an object or a key of configuration.
Well, there is no more there.
Right.
So, yeah, that is right.
And we were...
Okay.
And the other one is for monitoring: we are allowing kube-system, the Traefik app, when the ports are directed to the metrics port.
And the traffic is coming from monitoring and from the Prometheus server.
We are going to allow that.
Yeah.
So, that's it.
So, this...
Actually, this rule is super important because this is the one that protects the most your API server, your etcd server.
So, this is key.
This is fine.
And here, let's apply it.
This goes in cube system.
So, let's see there.
And see that we have one, two, three, four, five...
Four. Four policies created.
And so, this...
If we try to test this...
To test this...
So, for example, here.
I run this curl accessing from this.
And it's up here.
So, let's try that again.
Well, let me think.
Because...
Let's write that.
Let's write that thing.
Because...
Yeah.
When we...
Okay, what we are saying here is that...
No, okay.
Yeah, that should work because what we are saying here is that...
If the namespace...
If the destination is traffic, it will allow traffic from monitoring.
But...
It should not allow anything from this...
Oh, I got confused.
Give me just a second, I think.
2222.
Because I'm seeing here from this server...
Ah.
It can be that it didn't apply.
It hasn't applied.
That can be...
Yeah, because...
I mean...
This web console is at the default namespace.
And we don't have any rules...
... beyond DNS.
So, we should not be able to see that traffic.
Ah, no, sorry.
Sorry, sorry.
No, it's okay.
This is fine because Prometheus, it's in which namespace?
Monitoring.
So, if we take a look...
... to monitoring...
Monitoring...
There is no network policy.
So, it's okay.
Yeah.
The policies that we just applied are scoped to kube-system.
So, this is kube-system.
So, this should time out.
Which it seems it will do.
And this one should work because we haven't any policy applied to monitoring, right?
So, take note of that because...
I mean, if you have 10, 100, 200...
... namespaces...
So, you will have to create...
Probably you will need a massive policy that blocks...
...every namespace's...
...ingress traffic, right?
And then you will need to apply one by one the rules.
So, it's kind of cumbersome.
It's taking a while.
So, now we will want to create that one.
I mean, we are going to create for monitoring the same rules.
I mean, we are going to block everything.
We are going to allow the access from it.
The kube-system Traefik.
And that's it.
And, okay, this is another component.
The alert manager which needs to access.
And the monitoring.
Okay, all of these are the ones that are available here.
Yeah, time connection time out.
And let's see, monitoring.
The server, the kube-state-metrics, the node exporter, the Alertmanager if needed.
The access.
But this is the ones that are required.
And these...
Applying these rules, we are going to be able to execute that call.
Let me see.
9093.
9090.
No, but we are not allowing that for that we are using here.
Because we are using which one is it?
Yeah.
So, okay, so I guess that it will fail as well.
9090.
9092.
I think that is all.
Actually, I was thinking why would we need that kind of access?
But in this sense, it's needed because...
Because the Traefik here, it's exposing it through the ingress.
Yeah, through the ingress.
It's publishing it through the ingress.
And let's see.
Because this is the traffic.
I mean, from the...
I mean, we need to allow from traffic.
I mean, as we are publishing Prometheus through the ingress controller,
we need to allow that traffic to come.
Otherwise, it won't be published.
This one, for example, is different.
This one is the alert manager.
And it needs to...
When Prometheus needs to...
I mean, Prometheus is a monitoring system that has, like, two or three more components.
These ones are for the basic work of Prometheus.
And let's see here.
Okay, it's there.
And it should time out again, I guess.
Because...
Ah!
Why did it fail...
Okay, no, it was in kube-system.
This is kube-system.
We don't have connectivity there.
Well, actually, the rule that we created was for monitoring.
So, this one...
Let me stop here.
This one for here.
Okay, this one...
But it doesn't show any port.
Yeah, it's not allowed.
It should time out as well.
Yeah, in this case...
I mean, in this case, from default...
Yeah, we are denying everything.
And we are only allowing one by one.
So, there is nothing coming from default.
Yeah.
Okay, so those are the ingress rules.
Now, we are going more...
To a more complex situation.
And we are going to block the outgoing traffic.
And this outgoing traffic...
The idea here is to...
I mean...
Your application only needs to reply to...
It's like a firewall.
When you receive a packet, you reply to that packet.
But you don't originate any packet.
You only see, for example, DNS.
And that's how you do this.
So, in this case, what we are doing here is to only allow the container, the pod,
to communicate with the API server.
I mean, the only traffic that we want to allow that goes out from our pod
is when it needs to talk with the API server.
Yeah.
And that's it.
So, in this case, the policy is called egress.
Yeah.
The type of the policy is egress.
So, we can see the differences.
So, okay.
The default is egress.
And we are explicitly...
We are explicitly here telling that it's egress.
And the key is egress instead of ingress.
Like this one.
Yeah.
So, it's egress.
And what we are saying here is to any name space.
Yeah.
So, we can use an IP block as well.
But that will require you to be extra careful about the range of your internal network.
I mean, within Kubernetes, you will find a network range or IP range for the nodes,
for the containers, for the services.
So, it's easy but complex at the same time because it's huge.
I mean, we are talking here that actually, for example, this cluster that we are working on right now has, I don't know,
30, 20 or 30 objects.
26.
Yeah.
And it's super.
I mean, it doesn't have anything.
It grows super fast.
So, usually we are using /16 or even /8 address spaces.
So, it may be complicated to handle like that.
So, in this case, it's going to...
It doesn't send any traffic to any other namespace...
It only allows internal traffic here.
I mean, okay, I said it in the wrong sense.
Here, we are saying that the only traffic... I mean, it applies to all namespaces.
So, when the traffic is going out, it is allowed to all namespaces and nothing else.
I mean, if the traffic goes to another namespace, it can go out.
That's what it's allowing.
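A sketch of that internal-only egress policy for the default namespace (the policy name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-allow-internal-only
  namespace: default
spec:
  podSelector: {}             # every pod in the namespace
  policyTypes: ["Egress"]
  egress:
  - to:
    - namespaceSelector: {}   # any namespace inside the cluster
  # no ipBlock entry: traffic to the internet matches nothing, so it is denied
```

The trick is that `namespaceSelector: {}` only ever matches cluster namespaces, so external destinations fall outside every allow rule.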
And this was, let me see, this is for default, for the default name space.
And we have our web console there, right?
And the other is for production.
Production is the same.
Yeah, so it's...
I mean, you need to look at it closely, because it's sometimes not easy to read.
But yeah, if this applies to default, yeah.
That is...
Okay, actually, because we are controlling the ingress.
So, I mean, we are controlling the ingress.
So, here, allows internal traffic.
That's a really good question.
We are allowing the traffic to go to the other namespaces.
But if we have ingress rules on those, those will apply, yeah.
But yeah, yeah, yeah, it's valid.
Let me see this.
Oh, there it has...
Oh yeah, this is for Prometheus.
So...
Oh, let me see.
Am I reading this wrong?
Okay, this is the network policy.
Applies to default.
When there is outgoing traffic.
Any pod within default.
When it goes to any other namespace, yeah, that's true.
But actually, what is happening here is that if you notice,
this is specifying name spaces.
So, what will happen here is that the pod doesn't have any access to the internet, for example.
It won't be able to access that.
So, give me just a second.
Give me just a second.
I need to check one thing here.
Yeah, yeah, yeah.
It's not actually...
This rule is...
What this rule is doing is to limit the outgoing traffic to inside the cluster, right?
And taking into account the ingress rules from the rest of the namespaces.
Because what is the idea behind this?
Your application was compromised.
And one of the first things that they do when they compromise an application
is try to download some payload in order to gain access and work from there.
So, with this rule, what we are doing is to block that petition, that request.
So, it may be compromised, but they won't be able to do anything.
Either the internet, or for example, it was compromised because the application needs to access the internet, whatever.
But it won't be able to access the corporate network.
For example, in the application, they were able to get access to the IP address of the database, the user and password.
So, they are going to try to dump the data.
But they won't be able to do that.
Just like an example.
Other things that you can only need...
I mean, I want, for example, to allow the traffic to the active directory for authentication purposes.
So, I only allow that kind of traffic to go.
And this is the solution.
So, let's apply that rule.
Yeah, so these are our rules so far.
So, this is the one for the default.
Yeah, so it only allows internal and we can check that.
So, if we go again to our web console.
And we try to download the MongoDB file.
Again, it will give us a timeout error.
It should give us a timeout error.
Exactly.
I mean, because this is not traffic related to another name.
It's related to...
I love my platform.
It's related to the Internet.
But let's try...
Yeah, so...
I thought that we had another...
Another system error.
But no, I don't see it.
So...
It should...
It should go back.
And...
And, and, and...
Where were we?
Ah, it's here.
Okay.
So, now we are going to...
To block...
To block...
Now we are going to be a little bit more aggressive.
So, for this one, it's for monitoring.
Let me check.
Yeah, this is for other namespaces.
So, what we will want to do here...
I mean, we are allowing...
This applies to monitoring, egress, all pods, to a namespaceSelector.
So, we are restricting the outgoing traffic...
No.
Yeah, restricting the outgoing traffic to only other namespaces within the cluster.
That is, internal traffic.
This one...
It's for allowing access to the API server.
API server.
So, we will need to get this IP address.
In order to work, we are going to take a look for that.
This is the other component.
This is monitoring as well, for the kube-state-metrics.
This one is for the node exporter.
I mean, this is all the monitoring infrastructure.
This is kube-system.
Again, only internal traffic.
This one is for the ingress controller to access the API server.
So, we will need to update this.
And in case that you need DNS resolution...
I mean, for kube-dns to be able to forward or to resolve external internet addresses or DNS domain names.
We are allowing that here.
Yeah.
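A sketch of that API-server egress exception; the CIDR below is a placeholder for the endpoint IP found with the endpoints object, and 6443 is the usual API server port (an assumption for this cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-apiserver
  namespace: monitoring
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
  - to:
    - ipBlock:
        cidr: 172.18.0.2/32   # placeholder: the real API server endpoint IP
    ports:
    - protocol: TCP
      port: 6443              # typical API server port
```

The /32 mask pins the rule to that single address, which is why the endpoint IP has to be looked up first.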
So, let's find the IP address of the server.
So, let's see how we can get that.
I mean, there are a lot of ways.
There is this object at endpoints.
And...
The default.
So, this is the QBornet endpoint.
This is the API server.
So, we can take a look.
Okay.
This is...
I mean, we would need a token to access here.
So, we won't be able to explore that location.
Yeah, obviously.
So, default/kubernetes.
No, because...
Okay.
What you mean is why it is in the default namespace?
Yeah, because...
Yeah, because kube-system is this.
Yeah, actually, that's...
I haven't...
I couldn't talk about it because...
I mean, there is no...
The API server is running in the...
In the what?
In kube-system.
Actually, yeah, that's...
I never talked about it.
But I'm going to answer this.
So, let's see.
I just look for that.
Okay.
Why is the kubernetes service in the default namespace?
Yeah, this is the API server.
But let's see if they explain why.
Yeah.
The kube-apiserver pod is running in kube-system.
Why is the API service in the default namespace?
Okay.
So, it seems that it was because they wanted...
I'm not sure why.
Actually, this is the first time I realized that.
Because actually, let me see.
Get service in the default.
Oh, yeah.
Oh, yeah.
Actually, the service is published here.
The API service.
This is the internal API address of Kubernetes.
So, yeah.
The pod is running in kube-system.
And the API is exposed in the default.
Okay.
That's something.
Odd.
I'm going to think about during the night.
Because actually, I'm thinking that, okay, but the traffic, how it will go there now.
So, okay.
Let's ignore that for the moment.
And update.
Here, we need to give this IP address.
And I think that we have that other place here.
Here.
Uh-huh.
And that's it.
And we save that.
And yeah, let's apply this.
And this is the 8, right?
Uh-huh.
Okay. Perfect.
So, we created 1, 2, 3, 4, 5.
7, yeah.
And how can we try this?
Oh, actually, as long as our cluster doesn't die, we are good.
I guess.
I guess because what we can do, perhaps, is to do, let's see, calls.
I'm not sure.
Because this is, I mean, this is the, like, the public API address, not the internal one.
The internal one is the...
Give me just a second, please.
Okay, yeah.
So, we are here again.
And let's see, let's see, let's see, let's see what else we need to do here.
Um...
Okay.
Yeah, this, I think this is, this is not why we are thinking...
Ah, we can do print environment.
And here it's, yeah, this is Kubernetes.
This is the actual API address for the service within the cluster.
Yeah, 10.96.0.1.
Okay.
Are we not allowing the traffic or what?
Let's see if we can...
Let's try this.
Let's change this because...
Let's see what happens.
We are...
Okay, we are applying...
Let's see, let's see, let's see, let's see.
This is for monitoring.
So, it doesn't apply that one that we have done.
This is for monitoring as well.
This is for monitoring.
Ah, okay.
This is what is happening.
In the previous rule, for default...
we allow the internal traffic here from the namespace,
from default, when it goes to any other namespace.
So, it should work.
And this is for production.
So, it doesn't apply here.
This one, this one, ah, man.
This one is from monitoring, so it doesn't apply.
This is for monitoring.
This is for monitoring.
It doesn't apply.
This is kube-system.
It doesn't apply.
This is kube-system.
This is kube-system.
Yeah, yeah, let's see, let's see, let's see here.
Let's get the network policies.
Ah, here, which one we have for default.
We only have the egress allow-internal-only.
What are we doing wrong here?
Well, it's not necessarily wrong.
Perhaps we are misunderstanding this.
We will...
Because otherwise...
For example, we won't be able to do any of this.
Well, whatever.
And let's see...
Let's see API.
Yeah, we have access here to the API.
So, it's not...
What else we have here?
Okay, let's do the next thing, which is to...
In Traefik, which one was the rule that we created here?
For Traefik.
Oh, this is the DNS.
To this network.
How about in traffic?
We don't have any access here.
Ah, let's see.
We don't have any rules here.
Okay, okay.
So, let's try to do this.
Let's try to create a rule.
Let's create a simple rule.
Let's deploy a new application.
So, we can...
Let's create...
We're going to create a new namespace.
Let's call it dev.
Oops.
Create namespace.
Dev.
And let's deploy in that namespace
an nginx server.
So, let's do it like that.
Looking for the YAML file.
I'm going to create the files here.
And I'll paste it here.
So...
This will be the nginx server.
Nginx app.
Deployment. Nginx deployment.
This is more complex than I expected, but...
I guess that we can work with it.
Let's see if we can...
If not, we just run one and that's it.
We have...
Yeah, it's here.
It's here.
Ah, but it went to the default namespace.
So...
Let me do it again.
Let me do it again.
So, the rules that we already have are there.
This is not what I want.
So, let's remove that.
Okay, yeah.
Do it like that.
So, let's do something more simple.
Okay.
So, let's just run an nginx.
And...
Add --image=nginx.
And...
And the namespace that...
Right.
And kubectl.
And get pods.
And that.
Oh my God.
So, that's it.
So, we do have here our pod.
And let's connect to that pod.
And so, exec.
And pod nginx.
And dash-dash bash.
Bash.
I'm sure the bash is there.
Oh, don't tell me that it's dying.
Ah, I missed the...
The namespace.
Yeah.
We'll have to redo it here.
Okay.
No worries.
No worries.
I mean, it's super common to forget about the namespace.
You create the...
The pod or the deployment where you don't want it to be.
Actually, I...
Among all the Kubernetes distributions out there,
I prefer OpenShift when it's, like, on-premises.
Or...
Azure, the AKS from Azure.
But that works pretty good as well.
I like it a lot.
Okay, we are here.
So, this doesn't have any...
Yeah, the only thing that we have here is curl.
So, let's try first to see if we have access to the Internet.
It's not...
Oh, yeah, it has...
It was like slow.
Yeah.
And let's see if we are able to...
Let me see what other...
Come on, we should have it here...
No, the only one is curl.
But that works.
Yeah, that's our public IP address.
Okay, so let's try to see...
That will be traffic to the outside world.
So, we are going to create a rule that...
Or a policy that blocks that traffic.
And let's do another one.
Which is...
Let's see what we have here.
I guess that we can use...
Like the...
For example, the one for...
Or the one that we were using here as well.
The access to production, to nosql-client.
So, let me check the name.
That will be...
Okay, this will be nosql-client.
Dot production.
Dot svc.
Dot cluster dot local.
So, no SQL client.
Actually, I guess that this will fail because we have the rule on nosql-client.
To deny everything, right?
So, this should not work.
So, nosql-client.production.svc.cluster.local.
I think that's it.
Yeah, perfect.
It is going to give us a timeout.
Yeah, let's try that.
Let's try that.
Come again, sorry.
Okay.
Yes.
Yeah.
Let's try.
Uh-huh, uh-huh, uh-huh.
You're right.
Let's see which one will be the right...
The right API server.
Let's see services.
Ah, Dios mío.
I mean, because this...
I know.
I think that it's KUBERNETES_SERVICE_HOST, the Kubernetes service host.
This is the one that we should use.
The variable.
But, well, okay, this is...
Let's try first here.
Okay.
And let's try with the IP address because I got confused
because I'm thinking that the manual is wrong
because this is like the public IP.
This is the 443.
Oh, yeah.
Ah, 28.
28.
28.
Yeah, but 28.
Is it 28?
Wait, perhaps.
Ah, yeah, it's 18.
So perhaps that was the error that we had.
No, it's 18.
Okay, it's here.
Yeah, this.
I mean, that means that the API server is replying.
And let's try again.
This is connection refused.
Yeah, I mean, we should like...
I mean, this is like the outside IP address.
That's why I'm confused.
Okay, so...
Yeah.
Okay, no, but it only applies to this.
Ah, okay, for this one, for the traffic.
So let's do this.
We're going to block the egress traffic for this name space.
So this is going to fail.
This is supposed to fail.
And we are going to enable the access from this name space,
the nginx, to nosql-client.
Yeah?
So let's work on that.
I'm going to...
To what?
To do it outside the system.
I'm going to create a rule.
Let me open.
Okay, I'm going to just...
We may need to get inspired by these policies.
We are not...
I mean, we are not going to deny all the incoming traffic.
Just that specific situation.
To try to grasp the concept.
So that's why we are not using this one.
So let me publish your things.
And there is a...
I'm going to structure this thing.
There is a...
In Spanish, we say cascarita.
I'm not sure how you would say that in English.
Like, there is a trick here that you need to think about.
Okay, you are super smart because that is the trick.
That is the cascarita.
That is very good.
Actually, yeah, you can do it using the dashboard.
You can add a label there with metadata.
I mean, if you...
Let me show you.
So you can go to the pod.
Go to the pod and see if it has any label already.
Run and click.
So you can use that.
Or you can create a new one here.
So for example, app.
This is wherever you want it to be.
Web, wherever.
Or you can execute that with QB7L.
So you can...
Let me open here.
Okay, so you can...
With kubectl get pod --show-labels in our namespace.
So you can see the labels here.
And if you want to label, you execute kubectl label pod.
The name of the pod in this case.
And the label that you want to apply.
Train.
Train.
Train.
KS.
For example.
What happened here?
Ah, sorry.
I leave it.
What is the name of the pod?
Ah, I'm missing the...
What? We always miss.
Yeah, perfect.
And that's it.
Yeah.
So you can use wherever you want to use.
And I think I told you already that this is, like, something that needs to be
thought about when you are designing your system, the labels.
Because those are like the way to handle and apply policies.
And do a lot of stuff here.
Yeah, you can create.
I mean, there is no...
So let's see.
Do you have it?
Let's see your solution.
So if you want to...
Oh, I can see.
I can see in the training room.
So let's see there.
So how did you put it together?
Did you create one file?
Only one file?
For everything?
Okay, no worries.
Actually, we have like the...
So that's the one that allows the ingress from there.
The nginx, directed to production, nosql-client.
Right.
And the egress one?
Okay.
Yeah, it's to block the...
I mean, to allow only internal traffic.
Yeah.
Okay, so you are applying a policy to the dev namespace.
It's egress, egress, and to all the pods inside.
And to any other namespace.
Yeah, it's there.
I think it's there.
So actually...
It's not allowing to what?
Ah!
Oh, wow.
Control S.
And it doesn't work.
Oh, fuck.
Try to copy the text and with vi create a new file.
Ah, or is...
Ah, the code is...
You should start...
Let me...
Yeah, shit.
Okay.
Oh, man.
It's like everything got stuck.
Oh, fuck.
Hey.
Hey.
Hi, what should we do there?
Try...
How do you try to change the desktop?
Oh, yeah, yeah.
Ah.
Oh, okay, okay.
Okay, you were able to close the code.
Oh, that's good.
Uh-huh.
This is mine.
I haven't tested yet, but I guess that we...
It was actually was simple.
So the object to which we are applying the policy,
the egress to any pod, to any namespace.
And for the ingress, from production to production and NoSQL.
The ingress traffic to 3000.
And dev, originated in namespace dev.
Actually, I think that we need to...
The code needs to be executed against this port,
because it won't work.
So I'm going to...
Copy this.
Yeah, because I mean, what...
You are applying a policy to one object,
the egress is for the dev object, the namespace,
and all that is inside it.
And the other rule of policy is being applied to the namespace production.
We are creating an exception in the ingress rule that is already there.
Okay, it was created.
Let's see if we can...
If we can access the outside world.
I think that it's fine because I was checking the...
I mean, like the rule is it's handling the port 3000.
But as we are accessing here the service that was published,
that service,
I'm checking the NoSQL client service,
it goes to the 3000 port, so I think it's fine.
I mean, we don't need to use the port 3000 here, I guess.
It should work, yeah, because I mean,
that is something which is complex,
and you need to make your mind to keep it clear,
because I have my pod, and the pod has a port.
But when I expose the service within the cluster,
with a cluster IP, for example,
we are using not the port in the container,
but outside the container, so it's in the service,
so it's in the proxy, actually.
So it's the 80 here, 3000 for the IP that is there.
So it's like a layer in order to avoid the ports messing with each other.
I'm going to send this to the background,
and let's try this another one.
It didn't work.
Yeah, the external worked.
Well, it seems that it worked.
But the internal, it seems that it didn't work.
Why?
Ah, I know this is not.
This is the one that I'm not sure.
And the origin is there.
Address, it's the same.
No, okay, it's fine,
because we are allowing the egress to any namespace.
And this is for production to the NoSQL client,
the port 3000.
Actually, it should work, I guess.
I guess that then we are pointing the wrong direction.
This is the time of the Sconf.
Because here in traffic, ah, okay, the traffic,
it's going here.
So let's try this same IP.
Let's try this.
But let me see here in the dashboard.
Okay, yeah, the same IP.
Okay, the same point.
Oh, yeah, this is the one that is working 3000.
Let's try with that one.
So, right ground.
I should keep going.
This is the traffic.
What else are we missing?
We should be missing something.
And it depends on the system.
Okay, let's do something.
Let's go to our break, and then take a look at this.
We could, well, at least one we have.
But yeah, let's go to our break.
And let's see again.
1130.
Yeah, 1130.
So, I'll take a look in the meantime.
Yeah.
Okay.
Hi there.
So, it was, I mean, let me
share my screen again.
So, okay, we are here.
So, we missed something. I mean, when you take a look into the rule, into the policy,
here we are using metadata,
which is like incorporated in the object definition, right?
And, okay, this is all namespace selector.
Here in this one, again, we are using metadata.
But when we are selecting here in the spec,
for example, the selector, the pod selector, we are using match labels.
So, we need to have a label. We watched the trick, but we forgot the most important trick, which is
the namespace needs a label as well.
And we don't have any, right?
So, that's, well, I mean, not that this guarantees that it will work, but that's the reason.
And we can check here. Let's check.
I mean, we have these policies and we created this one,
which is the from there to no SQL client.
And let me, let me, let me select only the one that we want.
Production.
So, this is the one.
So, from dev to the NoSQL client, and the pod selector is this label.
And from Traefik, which is the one we are basing our rule on,
it has the pod selector. Let me see if we, oh, how we can have more information.
So, let's describe this policy.
Describe the netpol.
This one.
So, here it goes.
So, here it goes. Namespace selector is namespace equals kube-system, pod selector with Traefik.
So, if we take a look,
if we describe the namespace kube-system, we will see the label.
Yeah. But if we describe our dev, it doesn't have a label.
So, let's, so the policy is not, I mean, it didn't work.
It supposedly is not working. Supposedly is not working because of that.
So, in theory, if we apply the label again, this should work.
Assign a label to the namespace dev.
And that label should be the one that we define it in the rule.
So, let me, this one, namespace dev, right?
So, it will be namespace and let me, namespace equals dev.
So, let's describe it again. Okay. It's there.
And it's here. It's the same. And if we describe the policy,
the policy that we created, that
it's here. So, in theory, it should work.
So, let's see if that's the case. If not, we just forget about it.
Oh, it worked.
It's, I mean, this is the, I told you this yesterday that
that Kubernetes is so complex that there are so many, many, many little pieces
that you need to get in there. So, did you apply that and see?
Oh, okay. You did it already. Okay. Cool. Okay. Cool.
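The rule we ended up with can be sketched like this (the policy name and pod labels are assumptions, not the exact lab manifests; the key point is that namespaceSelector matches labels on the namespace object itself):

```yaml
# Sketch: allow ingress from pods in the dev namespace to the NoSQL
# client pods in production on port 3000. The namespaceSelector only
# matches if the dev namespace actually carries the namespace=dev label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dev-to-nosql-client   # illustrative name
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: nosql-client             # illustrative pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          namespace: dev            # set with: kubectl label namespace dev namespace=dev
    ports:
    - protocol: TCP
      port: 3000
```

Without the label on the namespace, the selector matches nothing, which is exactly the failure we just debugged.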
Okay. So, let's move on now to the security context,
which is harder than this one. So, let's get deep inside that.
So, just a little remember. Yeah. Okay.
So, security context. Basically, what we do here is to define parameters
per pod, per container at runtime as well.
And they are like focused on the pod, for example.
These are like the basic recommendations that actually
these recommendations are not that followed. I mean, usually you will find that
almost every pod is running as root. So, this is the one: to not run as root.
It recommends to run as a user, like an anonymous user without privileges.
And the idea is to not allow a user to escalate its privileges, like using sudo within the container.
For example, this another to have the file system read only.
Remember what we did in the, that we downloaded an application, the Mongo client,
and used it inside. So, if you had the read only file system, you will log that.
Well, this is the seccomp profile, which is kind of like the same, like the equivalent from Linux, but for containers.
And the capabilities that we kind of discussed when we did the, when we jumped to the host system
and deployed new containers, these are, those are the capabilities that we want to disable.
So, this, and the another one, which we tried the other day, is, I mean, when our pod doesn't need to communicate,
I mean, to not enable, to not have the service account available for default when you are not communicating with APIs, for example.
So, that means that that will be like something.
To enable seccomp, this is, it's like an annotation, in which you define different profiles.
There is, I mean, there is different audits. Actually, let me show you a document.
A document here.
We are going to work with it.
So, this is the CIS Benchmarks for Kubernetes.
There is a new version from this year, but for some reason is not available to download yet.
So, this is like two years ago, but there is a new one that should be available to download.
So, these are like all the recommendations, and those are related to the seccomp profile that is here.
The idea is to run your containers as an unprivileged user, not as root.
We're going to do that. What happened?
I mean, there is a cost, and sometimes it's a huge cost when you apply this kind of policies or security policies.
Because, I mean, it's what we discussed yesterday about SELinux, that it gives you a lot of headaches trying to make this stuff work.
So, it's easier to allow the root to run the container and all the other stuff.
So, let me just second this to check if you can hear my stuff.
Okay, yeah.
And let's see.
So, let's create a folder of your directory.
Fixed context.
So, we can put stuff there.
And we are not going to change the context. We're going to paint the cure.
Let's do a check.
All namespaces.
We do have a dashboard here.
Okay.
So, if you look if we need it.
So, it's like the same cluster.
And it's okay.
And what else we need to prefer?
Okay.
Okay.
So, let's begin.
What we are going to do is to execute the, to execute, to create some general files that are already there.
But we are going to work with a few.
And okay, yeah.
I found what I was looking for.
So, give me a second.
So, this is, okay.
Okay.
So, the first thing that we are going to do is to create a context for our deployment.
And this will be the, a context not about the namespace, the sec context.
So, kubectl
create namespace sec-context.
Okay.
And in order to have that for default, let me see.
Okay.
So, we are going to set, to work for default in this context.
So, we don't have that to remember that we need to add the namespace every time we do this stuff.
So, in order to do that, we are going to run this command, which is kubectl config set-context,
namespace sec-context.
So, what happened?
Yeah.
Give me just a second.
I'm going to copy this for you.
Oh, what happened?
Oh.
Ah, yeah, yeah, yeah.
I know what is.
Okay.
So, let me copy this for you.
This command, what this command does is to set our namespace for default.
So, we are going to work always in that namespace and we won't have the issues of forgetting.
So, we can run a get all and it shows us the sec-context namespace, right?
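What that command changes is the current context entry in your kubeconfig; roughly this fragment (the cluster, user, and context names here are placeholders):

```yaml
# Fragment of ~/.kube/config after running
# kubectl config set-context --current --namespace=sec-context:
# the namespace field becomes the default for every kubectl command
# issued while this context is active.
contexts:
- name: my-cluster            # placeholder context name
  context:
    cluster: my-cluster       # placeholder cluster name
    user: my-user             # placeholder user name
    namespace: sec-context
```

So kubectl fills in `-n sec-context` for us, and we can't forget it.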
Okay.
So, the first thing that we are going to do is create, well, to see what is run as roots.
We have done it a lot, but let's just do it.
We are going to run as root.
Kubectl get pods.
So, that is our pod.
So, let's kubectl exec the pod.
And dash dash id.
So, it shows that this is the id that we got.
And let's, I'm going to split this terminal.
So, we can go to the repo.
To labs, to sec-context.
And here we will find the security context.
We will find some YAML files that we may want to work with.
And we can open visual studio codes here.
So, we can take a look at those YAML files.
So, the first one is, we have this nginx running as root.
So, now we are going to deploy a new one, but that runs as non-root.
That means that the security context run as non-root is enabled.
And let me open here the terminal.
So, to compare, ouch.
Because we have get deployments.
So, it's changing.
So, get deployment nginx.
Here, output nginx dash yaml.
Oh, what happened?
Sorry.
Output yaml, nginx.yaml.
And we are going to open this.
Just to check that in the spec, there is no security context here.
Or if it is, it doesn't say anything.
Yeah, in the spec.
So, the spec that we have here includes the container image.
And the name of the container, yeah, of the deployment was the container.
And the security context.
And here is the name, nginx, in the terminal.
So, let's create that.
One, and zero, and deployment.
So, let's see.
Well, I'm going to move it in terminals because it's more, there is more space here.
And it will be easier to see.
So, kubectl get pods.
Actually, I have this here.
So, it fails.
It fails.
And it fails because the nginx image is made to run as root.
As root. Let's try to get some logs.
Logs, pod.
CreateContainerConfigError.
We can describe to see more information.
Remember that when stuff like this happens, describe has a lot.
So, the pod.
Let's see if it says something else.
Error: container has runAsNonRoot
and image will run as root.
So, it's clear what the issue is.
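The deployment that triggers this can be sketched like this (a sketch, not the exact lab file; names are illustrative):

```yaml
# Sketch: a deployment that enforces runAsNonRoot. With the stock nginx
# image, which starts as root, the kubelet refuses to start the container
# with "container has runAsNonRoot and image will run as root".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nonroot          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-nonroot
  template:
    metadata:
      labels:
        app: nginx-nonroot
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.3
        securityContext:
          runAsNonRoot: true   # the policy being demonstrated
```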
And for this case, let's explore a little bit what is happening.
So, in Docker Hub, Docker Hub nginx.
So, Docker Hub nginx.
If I'm right, we can see the Docker file for this.
Let's find the, which one?
19.3.
Okay, this is the image.
This is the image that we are using.
Ah, it's not.
Because, ah, okay, it's this one.
This is the problem.
So, these are the layers that this image has.
So, let's see if we can see the proper Docker file here.
This is like a closer.
No, actually this is the Docker file.
So, yeah, I mean, if we had the nginx running as a non-root user,
there would be some change mode, change user, new user, whatever, here, in order to do that.
So, it's not there.
So, let's use an image that runs as non-root.
So, we need to come here.
Oh, wait, let me do it.
We need runAsNonRoot and runAsUser.
Okay, so now what we are going to do is this deployment.
Let's see this other deployment.
This deployment allows us to, or it uses an image that is able to run and run without being root, right?
So, this is the image.
It's called nginx-unprivileged.
Actually, I'm not sure where we can find that image.
Okay, I think it's from Bitnami.
We can take a look.
Google.
Yeah, okay.
This is the repo, and this should be the Dockerfile.
Let's see if we can find it.
Let me see.
Oh, it's here.
This is from two years ago.
And, okay.
That's the crux of the issue.
So, this is the user 101.
This is the command that creates the user to manage to run this service.
And let's see the template from here because I think that something is missing there.
Yeah, so for example, the argument, the OOID, and see that it creates the group, the user,
and it assigns for this user, it assigns the directory for the engine cache.
It doesn't allow logging.
What else is there?
Okay, here is the sound.
These are the, what else do we have here?
So, this is like the, are you familiar with Dockerfile?
Okay, okay.
So, this, I guess that this probably has like two stages to build the image.
And these are the, okay, these are the change owners and change mode.
Okay, this is the part required to run nginx as unprivileged.
So, this is like a work between the developer and the administrator in order to decipher that because, I mean, if it's an internal application,
I mean, if there is no, I mean, if within the organization there is no that policy for the developers to create applications that run as unprivileged users,
it will be like, wow, how should we do that?
And that's a lot of work there.
But, okay, that's the idea.
And that's it.
So, we just changed the image and the security context is the same.
So, we are going to apply that and see our image there.
Apply.
This is the 0.2.
It's there.
And sorry, I'm going to move and see the parts that we have.
So, this, it's running.
Gladly.
So, kubectl.
And we are going to exec.
Get pods.
Oh, get pods, no, sorry.
Exec.
Exec -it.
We don't need the -it, actually.
Just exec pod,
the name of the pod,
dash dash id,
which is the 101 that we saw
in the Dockerfile.
Awesome.
So, it's running.
So, that's one.
Let's do the -it.
To just...
-it is interactive terminal, which is needed to have a prompt.
So, you can tell the difference.
The dollar sign.
And the, okay, there is, well, there is not much going on here.
But this is like the big part.
See how limited this image is.
I mean, this is what I wanted to show you.
So, you don't have history here.
In this one.
So, I did a ps.
The last one, which is the default process.
Sorry.
And all the processes belong to nginx.
So, that's good.
Okay.
Let's move on.
Where is it?
Okay.
Okay.
So, we have that image running.
We have the nginx running.
And it uses a different ID.
So, that's cool.
Now, the next thing that we may want to try is to use a user and a group.
Which are like.
I mean, the 101 is a system user, you can say.
I mean, it's within the range of IDs of Linux.
So, for security reasons, it's better to use a higher ID.
So, it doesn't conflict with other IDs.
And it doesn't have privileges as well.
So, we are going to do that.
Let me.
Yeah, it's true.
However, actually, let me tell you.
I mean.
I mean, there may be reasons that you want to use for the user.
Because, for example, the host system may have a 101 user for any reason.
I mean.
So, you may want to try to use a different user than the one that the image is configured.
Because, in this example, this is like a public image.
And everyone knows that it uses the 101.
So, if we use a different one.
Let's do again the ID thing.
I lost the.
Okay, yeah.
I was confused because all the names are quite similar.
So, this is the image we just deployed.
And it should have the 100,000 ID.
And this is the one for the sec-context scenario.
It's kind of the same.
Because, in this case, we want to run as a specific user.
So, which one is it?
It's this one, right?
And obviously, it's failing.
It's failing.
So, let's see the log.
Let's see if we are able to see any message from it.
So, there it goes.
It actually starts.
But, as there is no permission on this directory for a user other than root...
It's fine.
Okay, so the security context that we have seen so far are these ones.
The runAsNonRoot, and runAsUser and runAsGroup.
Right?
Those are the ones.
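Put together, those fields can be sketched like this (pod name, image, and the exact IDs are illustrative assumptions; the lab used a high ID so it doesn't collide with system users):

```yaml
# Sketch: forcing a specific high, non-privileged UID/GID. This may fail
# at startup if the image's directories (e.g. its cache dir) are not
# writable by that user, which is the error we just saw.
apiVersion: v1
kind: Pod
metadata:
  name: run-as-user-demo                      # illustrative name
spec:
  containers:
  - name: nginx
    image: nginxinc/nginx-unprivileged:1.19   # assumed image
    securityContext:
      runAsNonRoot: true
      runAsUser: 100000                       # outside the Linux system-ID range
      runAsGroup: 100000
```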
So, now we are going to see this one, the allow privilege escalation.
So, this is the deployment.
We are going to do this deployment manually.
So, we can do it from here.
So, there is this image, which is not that common.
But, I mean, it's not common that an image has the sudo binary.
But, there is one that we can use.
We are going to call this Docker sudo.
Let me type the command and I'll copy it and send it to you.
Hopefully, I did type the name correctly.
So, let's see if it created the application.
Yeah, it's running.
Cool.
And let's see the logs.
There are no logs.
Okay.
Okay.
Yes, this image has sudo installed.
That's kind of it.
So, let's see what happened.
Oh, let me go to circle.
We created the deployment.
It's already there.
Well, let's see.
Rollout status.
Docker is deploying.
Docker sudo.
Yeah.
I mean, sometimes, as this pod didn't have any logs,
you can check with this rollout status if a deployment has been shipped already or if you have to roll back or whatever.
So, let's execute this pod.
Docker sudo dash dash id.
So, actually, here in the Docker file, we can see that it uses Docker.
The user Docker.
So, it's not like root.
But this user Docker belongs to the sudo group, to the sudoers group.
So, let's execute this command sudo id and see that it became root.
Right?
So, yeah.
So, let's see.
sudo id.
Awesome.
So, now, let's apply the privilege escalation security context for policy.
So, we are going to apply the deployment, which is the same image.
It's the same image.
But we are removing the escalation privilege from here.
And actually, there is a situation in which you will need this.
And when an image has an escalation vulnerability in its libraries or in its binaries. I mean, it has happened that in nginx or the kernel a privilege escalation vulnerability was detected.
There have been a lot of those.
And with this, you prevent that this escalation to happen.
So, that's why it's important, even if there is no sudo binary in the server.
That's another situation.
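The relevant part of that 05 deployment can be sketched like this (pod name and image are placeholders; the image stands in for the sudo-enabled one we built):

```yaml
# Sketch: allowPrivilegeEscalation: false sets the no_new_privs flag on
# the process, so even though sudo exists in the image and the user is in
# the sudoers group, the UID change to root is blocked.
apiVersion: v1
kind: Pod
metadata:
  name: docker-sudo-noescalation   # illustrative name
spec:
  containers:
  - name: app
    image: docker-sudo             # placeholder for the sudo-enabled image
    securityContext:
      allowPrivilegeEscalation: false
```

It also blocks setuid binaries, which is why it matters even when sudo isn't installed.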
So, this is the deployment 05.
And let's see the pods.
So, this is the pod.
They allow no privilege escalation.
So, let's do the same with kubectl exec pod.
The network?
Okay.
Sometimes it gets stuck, right?
Sorry.
So, did it work?
Yeah.
Awesome.
So, that's super cool.
And I have this here.
Okay.
So, that was the no privilege escalation.
And let's do this next one.
So, the service links.
So, let me show you this.
So, the idea here is to provide less information to an attacker, right?
So, we are going to create a service.
Let's create it.
Create service cluster IP.
Before that, let's see what services do we have.
There is none.
Let me copy this for you.
And let's run a temporal environment.
A temporal pod.
We are going to use an image called BusyBox, which is like a basic image.
We are going to use some specific version.
I mean, there are so many versions of images that one of the biggest issues on doing this kind of, actually, this kind of training is that,
excuse me.
And we are going to access that pod and execute the env command and sort that information.
So, this command shows the, like, I mean, it shows all the services that are exposed here, right?
So, if I create an additional service, and let's do that.
Oh, fine.
Let's run it again.
I didn't send you the image, the command, sorry.
So, there you can see.
Were you able to copy?
So, now we are going to create, to create a second pod.
But in this pod, we are going to enable the security context in order to disable the service links.
In order to avoid that this information gets leaked to the pod.
So, let me try here this command.
Yeah, so, this has this issue with the service links.
So, because this is the JSON, and perhaps it's missing something.
So, let's try to do this.
Let's export this .yaml file.
And we can deploy it from there.
Actually, we already showed it.
Ah, it's in another directory.
Ah, I know.
It's because it's in the spec: containers, and then enableServiceLinks.
Yeah, I know it's at the same level.
I don't know.
Yeah.
So, let's put this in false.
And let's leave this like this.
And this part needs to be removed.
That is because that is like the status of the, of this.
It's a lot of information.
Let's see if it works with this.
And let's check it here.
Okay, it's there.
kubectl apply -f environment-2.
Perfect.
And get pods.
Okay, nice.
It's there.
Nice, it works.
So, okay, it worked for you as well.
You're super fast.
Cool.
You see?
So, yeah, so, I mean, it leaves the API because it's needed, but the other information is not available.
So, if a hacker gains access to this pod, it will be harder for him to find other services.
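The fix we arrived at can be sketched like this (pod and container names are illustrative; the key point is that enableServiceLinks sits at the pod-spec level, as a sibling of containers, not inside the container):

```yaml
# Sketch: with enableServiceLinks: false the kubelet stops injecting
# *_SERVICE_HOST / *_SERVICE_PORT environment variables for every
# service in the namespace, so `env` no longer leaks the service map.
apiVersion: v1
kind: Pod
metadata:
  name: environment-2          # name from the file we applied
spec:
  enableServiceLinks: false    # pod-spec level, sibling of containers
  containers:
  - name: shell
    image: busybox:1.28        # version is an assumption
    command: ["sleep", "3600"]
```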
Okay.
Actually, my body is used to Australian time right now because yesterday I had two lunches.
I mean, I lunch in my Colombian time.
I had lunch in Colombian time, but I had lunch again in Australian time.
So, my belly is asking me for food.
No, no, no.
No, we are about to finish this lab.
So, I guess it's better to just finish it.
Well, we're missing some.
Actually, we're missing several, but at least we can work with.
Okay, so, this part is kind of complex.
I mean, we have, I think that we have the deployment here that we can use.
We do have the this hanging deployment, which was the first one.
Let me check.
Yeah, we created the deployment with this command.
So, we are going to execute a command that shows us if we have these seccomp policies enabled or not.
So, this will be dash dash grep Seccomp /proc/1/status.
So, it shows us that there is no filter within the container.
And now we are going to deploy with seccomp enabled that same image.
It's just, where is it?
Okay, it's here.
So, this annotation is the one that enables the security policy.
The container.
Yeah, it's different than the other one that says security context, blah, blah, blah.
In this case, it's with an annotation at the seccomp API.
So, let's run it.
And see which pods are created.
Okay, this is run-with-seccomp.
That is our pod.
So, we are going to run the same command.
I'm going just to repeat the same line and remove the pod or change the pod name.
So, these are the seccomp filters.
Actually, it's similar to a web application firewall.
The web application firewall scans any request for common payloads and filters them.
So, it's kind of what this does.
So, this is the thing.
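The annotation-based form used here can be sketched like this (pod name and image are illustrative; this is the older alpha annotation, and newer clusters express the same thing as a securityContext field):

```yaml
# Sketch: enabling the runtime's default seccomp profile via the
# pre-1.19 annotation. On current Kubernetes the equivalent is
# spec.securityContext.seccompProfile: {type: RuntimeDefault}.
apiVersion: v1
kind: Pod
metadata:
  name: run-with-seccomp       # illustrative name
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  containers:
  - name: nginx
    image: nginx:1.19.3
```

With this in place, grepping Seccomp in /proc/1/status inside the container shows filtering mode 2 instead of 0.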
Well, we can just hear what.
No, actually, yeah.
And let me check.
Let's see the API here.
Because in this API, we can see all the.
Let's see.
What you can do is to have a set of rules.
If you want to go granular.
I mean, a web application has like the.
That is called the fingerprint.
Or it has another name that I.
It's like the.
Yeah, like the fingerprint of the attack.
And it is growing.
Okay.
These are all the.
All the.
Well, not sure if there is all the.
The architecture.
The name of each one of the.
So, this is like a nightmare, right?
And the action that you want to apply to this.
So, I guess.
This is the.
This is the.
Well, this is the same.
So, this is like, I mean.
Like, like, yeah, it's like a nightmare.
I mean, because you need to.
Because this is like the default set.
Available, so.
Okay.
And.
And let's see.
Okay, let's do this.
Start with this next.
Deployment.
Which is the.
The.
So, in this case.
We are going to drop.
All the capabilities.
And then we will need to.
To add.
The ones.
That are required.
For engines to run.
Yep.
So.
So, so, so let's do that.
I use.
Net socket.
Let's use the net socket from the.
Or use the search.
The chroot capability.
That is that kind of.
So.
So.
Apply.
Here.
So.
See that.
I mean.
I'm not sure if you remember that when we tried to run.
With a different user.
The message here.
At this point.
Permission denied.
Right.
In this case.
The message is operation not permitted.
Right.
Is.
Yeah.
I think if I'm not wrong.
I think this is one.
Yeah.
Permission denied.
Actually.
This is in make there.
And this is after the major.
Change.
So actually.
Actually.
This give us a hint of what is needed.
Because this is a capability:
being able to change the owner of a file.
Right.
So.
So let's.
Let's go for a break.
For the lunch.
And.
And.
And.
Continue.
Right.
Okay.
So.
See you in a while.
I mean, in one hour.
Okay.
Yeah.
Okay.
Okay.
So where were we?
Okay.
So we were like in the capabilities section.
Okay.
So we run.
Okay.
Yeah.
So we ran a pod and we applied the drop-all for the capabilities with the security context.
So it does.
We need to add or enable the ones that are needed by this pod.
What we are going to do is run with Docker the same image, the very same image, with the no-capabilities flag.
So that will be something like docker run.
Well, let's run it like this.
Cap-drop, which is the flag in Docker for removing capabilities.
Oh, and the image, nginx 1.19.
So we are the...
And it's 1.19.3.
So let's see what happens.
So this is going to download the image from the registry.
The same image.
And we have the same error.
Let me copy this for you.
And it shows the same message, the change owner error.
So we will need to.
To stop this container.
Okay, it didn't run.
But.
Probably we will need to remove the previous image.
Let's let's try to run it like this.
If not, we will have another flag.
So we are going to drop all the capabilities and now we are going to add.
The cap change owner capabilities.
Let's see what happens.
So.
It advanced because.
If you take a look at the error message.
It varies.
So we were, I mean, it was able to...
Configuration complete, change owner.
It's there.
In this case, configuration complete.
We don't see the change owner error, so it was successful.
And it failed
with setgid.
So that gives us like
a hint where the next error is.
So this is the capabilities.
SETGID.
So let me look for a list of the capabilities.
Because there are a lot.
So.
Exactly.
It's kind of crazy.
So this is the, these are in capabilities.
So these are the ones.
The list.
CHOWN, which is the one that we just
added.
So the, I mean, we need to.
To.
Correlate.
Which capability will we need.
In order to this to run.
So it's kind of.
Gross.
This is the one that we are looking for, probably.
CAP_SETGID.
So let's see what other are there.
I think that there are more. Probably these are.
The most common ones. So.
Let's try.
So again,
so we have the previous one,
and cap-add
CAP_SETGID.
And let's see.
Let me copy this for you.
Okay. So.
So now we have...
previously it was SETGID.
And now SETUID. So.
I guess that.
We can.
Hardly guess which one is there.
So it's easier to do this in.
In Docker. Because. I mean.
Otherwise we will have to edit the deployment.
But. It will be more.
Like more cumbersome.
Okay. It seems that it went through.
Apparently.
No.
There is something that is happening.
Because it's like kind of stuck there.
But this time it's not showing anything.
So that will be harder.
docker ps.
Let's run
docker logs
188.
Hmm. Let's see.
Well. We needed. We.
Didn't publish any port.
So let's try this. Let's. Stop this.
And try to.
Try to add.
An additional.
I mean try to run this.
Exposing the port.
Let's use a random port.
Lash p.
A random port.
With Lash p. I mean not a sign.
But let's docker to publish.
To use a different port.
Just to see if this is running or not.
So it published the 49153.
So let's see.
Docker host,
49153.
I guess it's working.
Yeah. Because we are able to see.
It's working.
Those are the caps that we need.
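Carried back into the pod spec, what we found by trial in Docker can be sketched like this (pod name is illustrative; note that Kubernetes spells capability names without the CAP_ prefix):

```yaml
# Sketch: drop everything, then add back only what nginx needed to start,
# as discovered by the docker run --cap-drop/--cap-add experiments above.
apiVersion: v1
kind: Pod
metadata:
  name: run-with-certain-caps    # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.19.3
    securityContext:
      capabilities:
        drop: ["ALL"]
        add: ["CHOWN", "SETGID", "SETUID"]   # found one error at a time
```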
So we can.
What we can do here.
Let me copy this.
Let's try this here.
So we can edit.
The deployment.
And run it.
Or we just can.
Use this. In the lab.
There was this NET_BIND_SERVICE.
And we can enable.
It didn't show up here.
In our testing. But perhaps let's try another.
Another example.
Let's try to publish the port.
8080.
I mean the 80.
8080.
And see if it shows up there.
No.
It didn't.
So I guess this.
It may be the version or something.
That doesn't have that.
It doesn't require that capability.
Or if it needed.
It's not showing up in this test.
So I guess that perhaps we will see.
So the way to.
I mean this is exactly the same deployment.
But with one more. You know, we sorted out the capabilities,
and you just add it.
And it is not using the CAP
underscore prefix,
because that's like the default.
So it's just the name.
So let's apply this.
And let's see here.
And.
And kubectl.
Okay it's failing.
I know.
Wrong.
Wait.
Yeah.
It seems that it's failing.
So let's see what happened there.
It's wrong.
Yeah.
I'm lost.
Oh I see it now.
Okay I see it now.
Kubectl.
Yeah it's there.
It's there and well it's not being exposed.
We can.
Let's try to expose this.
This service.
The deployment.
So.
The service.
So kubectl
expose
deploy...
deployment
run-with-certain-caps.
And what else we need here.
I think we need to define the port that we want to expose.
But I don't remember the index.
Let me try.
Let's try.
Now what?
Where are we?
Port,
target-port.
Target-port
and port 8080.
And...
Okay.
Let's see if it allows us to deploy the app.
It's
dash dash name.
And...
kubectl
get service.
Perfect.
It worked.
So.
So let's try here.
And local.
And.
172.
180.
Oh wait.
No, I think I did it the wrong way.
Yeah.
80.
Yeah, but I mean.
This was.
And the service
external IP,
maybe.
Okay.
So.
So yeah.
It is working.
We can.
We can try to edit this.
Kubectl.
We can edit the deployment.
The deployment.
And.
And add.
And remove.
The capability.
This capability.
The NET_BIND_SERVICE capability, to see if it affects us here in
In Kubernetes.
And.
So what I did was to edit the deployment.
We are.
We are cleaning the.
So.
No, it redeployed by itself.
But.
Did you.
Expose the service? Yeah. Okay. Yeah.
So let's see if we have.
We have.
Certain caps. So.
Okay, there is an error.
In Docker that capability wasn't needed, but it seems that here it is needed.
So let's get the logs and try to understand what happened here.
Yeah — so this is good: it didn't happen in Docker, but it happened here in Kubernetes.
So that's why that capability, NET_BIND_SERVICE, matters.
And remember the exercise that we did before was with change owner — no, we used chroot.
So, for example, in this case that capability is unavailable, so there is no way to execute the chroot.
That's what you will be doing here with the certain-caps deployment.
Okay, let's move to the next one, which is the read-only root file system. This one is kind of fun.
Let me see another one from this batch first.
Okay, this is the same scenario, but without privilege. What will happen?
I guess that this one is able to run without any capability, because those capabilities require root as well.
So probably... let's see what happens here.
Okay. So let's see the log.
Yeah, it didn't show any error.
And that is because that container is running with a normal user, not with root. So that helps.
So, for one of these examples that we are going to run, you can achieve the same result using a network policy instead of a security context.
So we need to create a read-only deployment.
So, let's see — let's try with this one.
We have these docker-sudo pods, so we can try to use that.
What we are going to do is just to exec a command in that shell: an update, the one that refreshes the list of packages.
I was using the deployment name — but it's not the deployment, it's the pod that we exec into. So that will be the start.
And... what happened? What is the problem?
Oh — I'm pointing at the wrong Docker pod, not that one. Okay, got it.
So let's see. This is like a standard image: it downloaded the list of packages and was able to write to the cache of the system.
So what we are going to do is to deploy a similar container, but in this case we are going to add the security context for a read-only file system — which is readOnlyRootFilesystem.
In this situation you will need to work with your application to see which directories need to be read-write.
So we are going to do that.
This is the docker-sudo with the security context. So let's apply that.
With kubectl. I mean, the idea is to enable read-only for the whole file system, and then you grant write permission back directory per directory.
So it's read-only by default, and you add back the access only where it's needed.
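A minimal sketch of that setting — the pod name and image are assumptions:

```yaml
# Hypothetical sketch: the whole root file system becomes read-only.
# Write access is then granted back per directory with volume mounts
# (shown in a later lab).
apiVersion: v1
kind: Pod
metadata:
  name: read-only-file-system
spec:
  containers:
  - name: app
    image: ubuntu            # assumed image for illustration
    command: ["sleep", "infinity"]
    securityContext:
      readOnlyRootFilesystem: true
```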
Okay, so we deployed this. And let's see the pods.
And it's here: read-only-file-system.
So we are going to do the same — exec into the read-only-file-system pod.
So... there is the error: you cannot create a temp file to work with.
Well, in this example, this security context prevents the upgrade of the container — or the installation of software in the container.
So, how can you achieve the same result? What do you think will work here, using a network policy, to reach the same result? In this case the result was that you cannot install any software here.
Yeah — but will it be an ingress rule, or an egress rule?
An egress rule — that will be the one that we may apply here, for this specific case, right?
So let's see. Here is the example.
In this case it is a network policy: allow internal traffic only.
It is an egress policy, and it applies to the pods matching the selector — and blocks traffic to any other destination.
So if we apply that...
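A sketch of what such a policy might look like — the name and labels are assumptions, not the exact lab file:

```yaml
# Hypothetical "allow internal traffic only" policy: pods matching the
# selector may only send egress traffic to other pods in the same
# namespace; external destinations (like package repositories) are
# blocked, so an update cannot download anything.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-only
spec:
  podSelector:
    matchLabels:
      app: docker-sudo       # assumed label
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}        # any pod in this namespace; nothing outside
```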
Apply that. Come again? Sorry — oh yeah, we learned our lesson.
So yeah, actually the question is: which pod will be affected by that rule?
So: kubectl describe pod docker-sudo.
Okay, here is the label. So the rule should apply to it.
Let's see — this is the previous command, and it should time out now.
Yeah, it takes super long. There is this timeout command that you can use, placed before the command itself, to cap how long it runs.
Yes — your appreciation is correct, it's huge.
So I will say that the path you need to follow is: first the Docker basics — and there is one course oriented to developers and one to administrators.
Then, on top of that Docker, the Kubernetes basics.
And the next step splits again: one aimed at developers and another at administrators.
For example, the developers — what they do is build images, a lot of images, and deployments and all that stuff. The administrator track for Kubernetes handles how to back up the etcd, how to label nodes in order to run certain workloads.
Then the next step is the networking one. After that, the advanced Kubernetes course, which mixes both the developer and the administration sides, but going deeper.
Then something like Rancher Desktop, or perhaps OpenShift. Then one like this — security.
And then the next step is the CI/CD pipeline, because that's another monster that you can take on.
Okay, it's there — we can take that for granted.
So in this case, making the file system read-only affects the functionality of the update, because it needs to write.
Now let's try to run an application that needs to write in certain file systems.
In this case we're going to use nginx, but we're going to enable the security context for read-only, and see what happens.
So let's see — this is the label; we find it by the name.
We can suspect that it's going to fail. So... it's failing.
So let's see the logs. And the thing here is that we need to debug this. So guess what — which is the best way to debug this?
Okay, it failed because of the read-only file system. Actually, this is nice to see: if we go back to the previous one, the error message was "operation not permitted", and the one before that, the error was about not having enough permissions.
So they're different situations, right?
So in this case.
In order to.
To debug this.
And try to make it work.
Again you will need to run.
The same container in your local environment.
And.
And.
And again.
And we are running.
Here this because this is.
It shows us an ID.
And we can.
Run this command.
Docker disk.
That show us the differences.
From the image.
And the running container.
Like the image is static.
The image is like saving the disk.
And the disk.
Makes.
To execute a disk between that image.
And the running container.
That way we will be able to know.
Which files remain the same.
And which files have changed.
Or have appeared.
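The workflow just described, as a short sketch:

```shell
# Run the same image locally and note the container ID it prints.
docker run -d nginx

# docker diff lists files Added (A), Changed (C), or Deleted (D)
# relative to the static image layers — i.e. everything the running
# container has written, which tells you which directories need to
# stay writable.
docker diff <container-id>
```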
So these are new files.
And the next thing is that we will need to make this directory writable — and this other directory.
Yes. And with that we have the list.
So, how to do this? I mean, there is a feature that is called volume mounts.
With volume mounts, you specify in the spec of the deployment that you are going to mount a certain volume within the image: you give the name, and the path inside the container — inside the deployment — where this volume goes.
And then you define where you want those volumes to point to.
So in this case we are using emptyDir, which is a volume type that creates a temporary directory, assigned from the local file system of the host where this container is running. It assigns a space for this container to write.
Instead of using emptyDir, you can use another available storage provider.
But it depends — if you are serving static files, perhaps you will use a volume where those static files are available in object storage, like S3, or on NFS, a resource in the network, so it's available for your servers.
In this case, emptyDir is enough.
So we are going to apply this.
Yeah.
So we are defining here.
The security context.
Is enabling.
The read only root file system.
And the volume mounts.
Are defining that for this directory.
We are going to this volume.
And for this we are going to this other.
And for this volume we are defining what is cache.
In this case empty beer.
But it can be any other.
Supported storage volume available.
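Pulled together, the manifest being described probably looks something like this — the pod name and the exact mount paths are assumptions taken from the `docker diff` inspection:

```yaml
# Hypothetical sketch: read-only root file system, with emptyDir
# volumes mounted over the few directories nginx needs to write.
apiVersion: v1
kind: Pod
metadata:
  name: empty-dirs-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: run
      mountPath: /run              # pid file and runtime state
    - name: cache
      mountPath: /var/cache/nginx  # proxy/fastcgi temp files
  volumes:
  - name: run
    emptyDir: {}
  - name: cache
    emptyDir: {}
```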
So we deploy this, and see here: kubectl get pods.
And it should be around here... okay, this one: empty-dirs-nginx.
So it's running. It's running, so there is no issue there.
And we can do the following: we can exec into this pod, and ls /run to see what is inside.
So, okay — these are the files. ls -l: they were just created.
And /var/cache/nginx, the same: these were just created when we launched the pod.
Okay, so the last one. The challenge in this lab is to create one deployment that applies every one of these security contexts in just one deployment.
That would take us quite a bit, so we are just going to go ahead and use the one that is already there.
But this would be the best way to practice these security contexts.
Because, let me see... yes. Okay.
This is a side note: to deploy a new version of the image or the application, there are several strategies to avoid downtime.
Recreate: you destroy the current pod and create the new one. So both pods show up at the same time, but one is in a terminating state and the other one is in a creating state.
And the other one is rolling update, in which you have the old version running, you deploy the new version, and the old version is still there while the new version is created; the readiness and liveness probes are run, and if everything is okay, the old version is terminated and the new version is exposed to the public.
So the rolling update is like the best way, because that way you won't ever have downtime. With recreate you may have that, in case of failure.
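The two strategies sit in the Deployment spec; a sketch, with assumed tuning values:

```yaml
# Sketch of the rollout strategy in a Deployment spec. rollingUpdate
# keeps the old ReplicaSet serving until new pods pass their readiness
# probes; Recreate kills everything first (possible downtime).
spec:
  strategy:
    type: RollingUpdate       # or: Recreate
    rollingUpdate:
      maxSurge: 1             # extra pods allowed above the desired count
      maxUnavailable: 0       # never drop below the desired count
```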
Hmm. Okay.
So this deployment applies everything from these examples in just one deployment.
So: the seccomp profile — well, we didn't exercise that one by itself. The service account token is not mounted, so it's not available within the cluster. The service links — we did that.
This is the image: the unprivileged one. Otherwise it would fail.
This one — I wasn't aware that it would work as well.
And the security context — there is everything: privilege escalation disabled, run-as-non-root, run-as-user, and the drop of all capabilities.
This is fun, because they are not allowing any capability at all — but that works because we are using the unprivileged image. If we tried to use the standard nginx image, I guess it would be like a nightmare to make that happen; I mean, it probably won't work.
And the volume mount is /tmp. I guess this image is different from the standard nginx one, because of this — the volume mount is /tmp, and not the other paths.
But that really depends on how the image was built and how it is configured. I guess the nginx configuration in this image is different from the other one.
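Condensed, the combined deployment being walked through probably resembles this — the image name and user ID are assumptions (101 is the default UID of the common unprivileged nginx image):

```yaml
# Hypothetical condensed pod spec combining the lab's settings.
spec:
  automountServiceAccountToken: false   # no API token inside the pod
  enableServiceLinks: false             # no service env-var injection
  containers:
  - name: web
    image: nginxinc/nginx-unprivileged  # assumed unprivileged image
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 101                    # assumed non-root UID
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
    - name: tmp
      mountPath: /tmp                   # the image writes its temp files here
  volumes:
  - name: tmp
    emptyDir: {}
```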
So let's go on. It's somewhere here... there. kubectl apply. There.
Okay, there is an error here, but it seems that it was able to launch anyway.
We can check... let's see /tmp.
Yeah — all the temp files are created here: the PID, the process ID, and the temp directories.
And we can see the nginx.conf. Yeah — the configuration points to /tmp, the temp directory.
And I wonder about /var/log.
Okay — so in this container it's not affected. Well, actually, in a lot of containers the access log, for example, goes to standard output, and the error log to standard error.
And that is what we see when we run kubectl logs — this is what we see here.
And let's try — let's check a few things. For example, id.
And the capabilities and seccomp status from /proc — grep for Seccomp. So there are the seccomp settings.
Okay, and the user — we already checked that.
So that's kind of the practice.
We can check if the system is running.
So I'm going to port-forward from the pod.
And if we hit port 8080, it shows that it's working.
So that was successful.
So the thing is: now do this with an internal application — an application that was developed in house. That will be the challenge, being able to do that.
And well, that was it. I'm not sure if there are any questions here for this security part.
Awesome.
So that is the idea. Actually, these two labs are like the core of the security of Kubernetes — so that's why we tried to focus on that.
But even if that is true, the network policies can be just as important.
So — our lunch break, and we are coming back to execute the role-based access control lab, and the tools part, I guess, to see the scanners.
It's... well, it's kind of similar: pass, fail.
Let's try the netpol cluster — and actually, let's see, that will be interesting. Let's see if it changes.
Oh, it's kind of the same.
And let's try the secure cluster. They are all like the same — even though each cluster is different. Let's see.
Okay, it shows the same.
So this is from outside. If I want to run it from inside, I think that I have the kube-bench manifests.
So — there are different jobs that you can run with kube-bench, one for each component, or a general one.
So we are going just to apply this: the job.
And here is the kube-bench pod.
So this runs and finishes. It runs and finishes, and the results are available in the logs.
So what you do is look at that, and see the difference between running from the outside and running from the inside.
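As a sketch, the in-cluster run looks roughly like this — the manifest path follows the layout published in the kube-bench repository, so verify it against the version you use:

```shell
# Run kube-bench inside the cluster as a Job and read the report
# from the pod logs (PASS/WARN/FAIL per CIS benchmark check).
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# Wait for the job's pod to reach Completed, then read the report.
kubectl get pods
kubectl logs job/kube-bench
```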
So it only failed in one case from the inside — this is the secure cluster.
You can go through the whole log, and it shows you the warnings, the failures, and what you need to do to remediate them.
I'm not seeing the failures here, though. It shows, for example, the seccomp profiles check — but I cannot see the failures, and I'm not sure why.
This is from the outside. Let's run it — run this in a different cluster. Right, there we go.
Okay, it's here and it completed already. So let's get the logs and see what it shows. There — the 2024 run.
So, I mean, you can see the difference: here at least 10 checks failed, and 47 didn't, and it explains each one here.
So — I mean, there are multiple tools; this is one, well known because of its relationship with the CIS benchmark. So it's a good one to work with.
There is another one — some more. This one: kube-hunter.
So, kube-hunter — it's kind of cool.
Okay, now, yeah — it's failing because it's missing an extra argument.
So kube-hunter allows you to scan your network, or your public network, and find the clusters that are available there.
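Typical invocations look like this sketch — the addresses are placeholders:

```shell
# Probe a single remote address for exposed Kubernetes services
# (API server, kubelet, etcd, and so on).
kube-hunter --remote 10.0.0.1

# Or sweep a whole CIDR range.
kube-hunter --cidr 10.0.0.0/24
```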
Well, I don't know the exact difference, to be honest. But trivy is a scanner, and it runs different checks at different levels of the system.
I mean, at the end you may want to execute different testing tools, not only one.
And there are some more here in the repos that I want to show you.
Let's run this with the IP range — and we can use the range that we have published.
So this is able to detect the different servers, and it shows you: okay, this one is open, and you can work from there.
This is pretty good, because this is the kind of thing that needs to be closed with a firewall, for example — I mean, not inside Kubernetes, but outside of Kubernetes.
So this one is super cool.
There is another one: kubeaudit.
So, kubeaudit — actually, we should have used this one when we were working with the security contexts, because this one audits all that stuff. Almost all that stuff.
So you run kubeaudit all, and it uses the kubeconfig context that is in your system.
So let's give it a little bit... okay, it's done already.
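The invocation is a one-liner:

```shell
# Audit every workload reachable through the current kubeconfig
# context against kubeaudit's built-in checks (capabilities, root
# user, read-only root filesystem, seccomp, and so on).
kubeaudit all
```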
So actually, if you take a look, every one of these tools shows you different information — so one alone may not be enough to get a good view of your system.
Exactly.
And this one is super nice because, for example, it shows you: okay, you are using an old version of Kubernetes, and this will be deprecated.
And trivy — we have trivy here.
So trivy allows you to work with Docker images, or Kubernetes, or whatever — the file system, or a repository.
So if we use trivy k8s, we need to give it... I don't remember.
Yes, so: do you want to scan a deployment, or the whole cluster?
So — trivy k8s cluster. One second... and there you have it.
I think that's all the tools that we have available.
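A sketch of the two modes mentioned — note that the `trivy k8s` subcommand flags have changed between trivy versions, so check the version you have installed:

```shell
# Scan a single container image for known vulnerabilities.
trivy image nginx

# Scan the whole cluster reachable via the current kubeconfig
# (flag syntax varies by trivy version).
trivy k8s --report summary cluster
```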
Oh, there is this other one. This is not exactly a tool, but more like a runtime system: it is able to detect intrusions, or executions that are outside of the policy, or violations — if someone executes an escalation, or changes something.
This is really powerful, but it requires that your system is running either on bare metal or in a virtual machine to be able to be installed.
So we cannot give it a try here, because it won't work at all. But it is a good tool, and I think that it will help you to detect stuff that may be happening — I mean, this space is so huge that you won't be able to watch it by yourself.
It's downloaded.
Well, I would say that, for the repo — if you already have a copy on your laptop, here are the credentials, so you can try to clone this repo with the updates.
I'm probably going to push the history of today's session, and check that the presentation is the latest version — and do that again tomorrow.
So you have a copy of the material, and the history of everything that we did here.
But the idea is — I mean, as you were able to see, this is like an approach to this world, and it's kind of big.
The idea is that you have at least the tools to go on your own, be more in this world, and explore.
So if you need anything else, you can ask us — ask for anything that you may want. We will help you to stop those hackers, or to install more clusters, or whatever you need.
And I don't know if there is something that you want me to explain further, or any other question. Otherwise, I'm going to stop there.
Thanks for the attention — you are really fast and very smart. These are super hard commands to just copy and run by yourself, so congratulations on that, and I hope you have learned a little bit.
Okay, I'm going to go. Have a great one. Bye bye.