Visit the Docker and Kubernetes Training Course course recordings page
WEBVTT
-->
Okay, so is it running, Gene?
-->
So can you delete it? Okay, fine. Can you run the command kubectl delete?
-->
Is that correct, Gene?
-->
Gene, can you run kubectl delete?
-->
pods
-->
Space, hyphen hyphen, all (--all)
-->
all
-->
Also delete the jobs: kubectl delete jobs. Yeah, enter
-->
Cron job
-->
Yeah, no space
-->
No, no, yes. Yeah
-->
Just use the up arrow. Yeah
-->
I'm not going
-->
Can you mention the name of it? Okay: delete cronjob hello. Yes, hello. Okay, fine. So now re-run the get pods command
-->
Hyphen hyphen all hyphen namespaces (--all-namespaces)
-->
Okay, all good, fine, you're good right now. Uh, yes, Junbi and Charles?
-->
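The cleanup being dictated above can be summarized as the following kubectl commands (a sketch of the session; the cron job name `hello` is taken from the lab, and `--all`/`--all-namespaces` are how the dictated "hyphen hyphen all" flags are normally written):

```shell
# Delete all jobs, then the cron job created in the previous lab
kubectl delete jobs --all
kubectl delete cronjob hello

# Verify the cleanup: list pods across every namespace
kubectl get pods --all-namespaces
```

These commands only work against a live cluster, so treat them as a reference for the dictation rather than a standalone script.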
How to
-->
Okay, how do you log in to the server, is that the question?
-->
Yeah
-->
Yes
-->
First log in to the master. The master, you
-->
Go to the master. Yes
-->
Is this the ip of the master?
-->
No, no, no one second one second
-->
No, that's not the master. The master is still the other one
-->
233, correct, the master is ending with 233, Charles. One second, I'll write you again the master IP which you were using before
-->
Is uh 52
-->
In chat. Yeah confirm these three
-->
52 is, uh, yeah, and you will log in. This is what you used before, and then maximize
-->
Type exit. Yes, now run that SSH command which you ran before
-->
You already have worker one
-->
Please understand once again once again
-->
What are you trying to do on the master? Yeah, I think you're right. Yeah, launch once more
-->
one once more
-->
is
-->
It is sudo space su space hyphen (sudo su -)
-->
I can see it. We're typing in
-->
sudo space su space hyphen, press enter, and please type in clear. It is sudo space su
-->
space hyphen
-->
Okay
-->
Once again, type this: a, l, i
-->
a
-->
s, that is, alias
-->
Yeah, right: alias space k
-->
k k k
-->
The alphabet k
-->
Let me type the command
-->
Let me type not this
-->
it is
-->
In this
-->
equals kubectl (alias k=kubectl). Yes, any time
-->
Any time. That is, uh, you've done that above command, use your up arrow on your keyboard. Once more, once more
-->
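The two commands being spelled out here are worth seeing together (a sketch; the alias is per shell session, which is why the instructor keeps repeating it):

```shell
# Become root on the node (assumes the lab VMs allow sudo)
sudo su -

# Shorthand for kubectl; it is lost on logout unless added to ~/.bashrc,
# so it must be retyped after every re-login
alias k=kubectl

k get nodes   # same as: kubectl get nodes
```

A common follow-up is to append `alias k=kubectl` to `~/.bashrc` so it survives re-login.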
Yes
-->
Yeah, now let's see this. Okay. Can you do a delete: k delete?
-->
K delete space
-->
Pods. Pods, space. Your spacing is wrong. Yeah, hyphen hyphen, two hyphens
-->
a, l, l: all
-->
Okay enter now also do a k delete k delete
-->
K
-->
K
-->
c, r, o, n, no e. Remove the s after cronjob. Give a space, space, space
-->
And hello: h, e, l, l, o. Enter
-->
Enter. What did you do? Okay, no problem. Let's proceed. Uh, I'm not sure why this is
-->
Let's proceed no issues
-->
Okay others
-->
Not good, okay. So I think it's better now; the CrashLoopBackOff is not going to get shown
-->
Let's see how the next topic works
-->
Any questions? Do you have a question?
-->
Let me go down
-->
So what is this
-->
What
-->
Better, okay. Okay. So what are services? Services, a simple example?
-->
It's again breaking up, the voice is not clear. One second, one second
-->
Is it better now? Okay. Okay. Now yes
-->
Is good
-->
All right. So what what is the service?
-->
Why do you use services a simple example?
-->
You have answered before you in this slide
-->
What's happening in here?
-->
Okay, the answer was the IP changes. Is that the reason? You're right
-->
So whenever you stop and start or restart the container, the pod IP changes. You agree, right?
-->
So we can have a static ip
-->
Instead of reaching the backend pod directly, the user can reach this
-->
Static ip which the ip doesn't change
-->
And that allows you to reach your backend. Clear?
-->
So one is the cluster service, which is for internal services
-->
It means if this wants to communicate with this it reaches the service
-->
And reaches in here
-->
And similarly if this wants to communicate this it reaches to
-->
This and reaches here. Okay
-->
So let me... I think this is lagging because of the problem. Let me just stop it, one second
-->
It's doing a conversion of the video
-->
I think I may do that later. Yeah, so this is a cluster service
-->
In which it takes
-->
And reaches the back end using services
-->
What else did we talk about?
-->
node port
-->
In node port it is still a cluster service
-->
And it also exposes an external port like 30 000
-->
On all the nodes
-->
So any traffic that comes from the public IP of this machine, let's say public IP
-->
colon 30000, the public IP of the master or a worker node
-->
Traffic coming here
-->
And then reaches the cluster service and then reaches in here
-->
Yeah, if you want to reach it using the worker node IP, you can use the public IPs
-->
Same, colon 30000: it comes in here, then comes in here, same in here
-->
So this is normally used in your staging area for testing internally
-->
And once this is tested fine, then we go on to load balancer
-->
In which you can create a cloud load balancer
-->
Or we can also use ingress controller options and then the traffic now for example if 10 users log in
-->
the website
-->
3 would go for this 3 or 4
-->
3 and 3
-->
So it also load balances the workload, so that no single backend gets overloaded
-->
And then it goes to the internet service same happens, but it goes
-->
Much better management of the load is good
-->
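The three service types just described differ only in the `type` field of the manifest. A sketch (names, ports, and labels are illustrative, not from the lab files):

```yaml
# Service type options discussed above:
#   ClusterIP     - internal static IP only (default)
#   NodePort      - ClusterIP plus an external port (e.g. 30000) on every node
#   LoadBalancer  - NodePort plus a cloud load balancer in front
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  type: NodePort        # swap for ClusterIP or LoadBalancer as needed
  selector:
    role: backend       # must match the pod labels
  ports:
    - port: 80          # the service's (cluster) port
      targetPort: 80    # the container's port
      nodePort: 30000   # NodePort only: exposed on every node's public IP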
Now we will do hands-on labs now in the labs. We have the lab under application networking
-->
And the lab point
-->
I'll do this, you observe
-->
And then you want to start
-->
Let me stop these, let me use the existing ones. I wanted to see what the problem
-->
is
-->
I go to this
-->
And I would right click
-->
Now, what is it going to create? So this created a pod
-->
Named backend
-->
It also created a service named backend
-->
Both having the same label which is role equal to backend
-->
So you're mapping your service
-->
And your pod with the same label names
-->
And the expose keyword is what binds the service to the pod
-->
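The pod-plus-service pair described here can be created with a single command. A sketch, assuming an nginx image (the transcript does not name the image):

```shell
# Creates a pod named backend AND a ClusterIP service named backend.
# Both carry the label role=backend; --expose is what binds the
# service to the pod on the given port.
kubectl run backend --image=nginx --port=80 --labels="role=backend" --expose
```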
Yeah, let me show you this. So now if I say get pods
-->
hyphen o wide (-o wide)
-->
You see that the pod got created, the backend, and the pod IP is 10.44.0.2
-->
Now i'll do a get service and this service got created now which is backend
-->
Now I would do a describe
-->
Service and the name of the service now if I do this
-->
This is the static IP of your service, and this is mapping your endpoint. You see Endpoints: 10.44.0.2
-->
So you see that is the ip of your pod
-->
So instead of that, I can directly reach the pod by doing curl 10.44.0.2
-->
But this ip might change
-->
So instead of using this ip, I want to use the ip which is your static ip
-->
So now what i'm going to do is i'm going to describe i'm going to say curl
-->
I'm going to run this command: 10.102
-->
.126
-->
.13. You see, I'm able to reach it
-->
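The inspection sequence just performed, gathered in one place (a sketch; the service name `backend` and the IPs are the ones from this session):

```shell
# Pod IP (e.g. 10.44.0.2) -- changes whenever the pod is recreated
kubectl get pods -o wide

# The service's static ClusterIP (e.g. 10.102.126.13)
kubectl get svc backend

# Endpoints should list the pod IP, proving the service-to-pod mapping
kubectl describe svc backend

# Reach the pod through the stable service IP instead of the pod IP
curl 10.102.126.13
```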
Did you understand
-->
Everyone
-->
So instead of reaching using a pod ip you're able to reach the backend and you can do this on all the nodes
-->
You can do the master once you do it with this observe it and try to reach
-->
The pod on all the nodes using the service. I'll probably put your master and worker node IPs and all that
-->
Yeah
-->
Start doing it now. So first do the master and then check we have to reach the ip on all the nodes
-->
You
-->
Yeah, okay one is right working
-->
What what's happening sure yeah one minute
-->
So
-->
Let's finish this: so, curl 10.102
-->
dot 126, dot 13
-->
So like this you can do on all the nodes fine, so start doing at your end
-->
Yes, Charles?
-->
Okay, so once again spelling is yes
-->
Yeah, yeah, no, no one second
-->
Remove one just have su enter set out
-->
No, why are you using worker one
-->
You configured it yesterday. Can you go to the master machine, please? For kubectl, on every node
-->
You have to write that you have to write it alias
-->
If you don't have it, you have to write it manually on every node there whenever you log off and re login
-->
You have to again write that alias command. Are you getting it? alias k equals kubectl
-->
Okay, can you now type in?
-->
Yeah, you already did a master before okay enter please enter get nodes
-->
K get nodes
-->
Nodes
-->
Enter
-->
So you're supposed to log in to worker three and worker four
-->
You
-->
Are we good
-->
charles
-->
I gave it two ips correct. I gave it two ips
-->
Don't log in to that. Close worker one, please. Close worker one in the background, close it
-->
You get confused again close this click on close
-->
Yeah, click on close window
-->
Fine. So now, no, no, now in this session, log in to worker three
-->
The ip which I gave you today, I mean I give you a copy script i'm not sure I give the ips to you in the morning
-->
Yeah, so 193 is please note down somewhere, please
-->
worker three
-->
Yeah
-->
Are we clear
-->
So in any machine if the k doesn't work, it means that you might have logged off or something
-->
You have to use alias k equals kubectl; if not, type in the exact command, kubectl. Clear?
-->
Fine, now proceed, you're good. Now: alias k equals kubectl, yes
-->
One second, on worker three, one second, one second. We don't use kubectl on the worker nodes, are you getting it?
-->
We use kubectl only on the master, because the interface is your master
-->
That's fine. You've done it. That's fine
-->
No need to worry about it, but we don't use kubectl on your workers
-->
We use it on the master. Clear?
-->
So for any work we're doing right now, the k would be on your master. Clear?
-->
Thank you others any questions uh junbi are you good? Okay, are you understanding what's happening?
-->
now let's
-->
Proceed uh charles you're doing the lab. Okay. Are you doing the lab now?
-->
Okay, next one, the continuation of the same. This is your pod
-->
But with the pod we cannot scale up, so we preferably want to use a deployment
-->
So before we go ahead
-->
You can also go ahead
-->
and delete
-->
the
-->
Delete your pod. Delete the service first
-->
Delete your service
-->
Okay, delete svc, the backend
-->
It doesn't affect the existing pod it still runs, but the ip is gone
-->
So now when you try to access it the same way you did before
-->
It will not work, because the cluster service is deleted
-->
Okay
-->
Yeah
-->
So after you do this now
-->
You're good
-->
So let's go ahead
-->
On the master machine you can also delete
-->
The pods
-->
So next we are now going to do
-->
operations using a deployment now
-->
I'm going to create deployment
-->
And we're going to map the deployment to the service
-->
Okay, let me do this
-->
I want you to observe, ask questions, and then do it, so that you understand the concept
-->
So this is my deployment
-->
of one copy
-->
And uh, this is your application
-->
I'm going to create this i'm going to map this to my service so copy this and I go to my server
-->
I would say class
-->
Name of my deployment give any name
-->
insert mode
-->
And I would paste this
-->
Paste
-->
Okay, so it's pasted
-->
So now I'm going to run this and say apply
-->
hyphen f and
-->
the file name
-->
Will it get created?
-->
So say get all. You can also do k get all; it shows everything that is there. So it's created your deployment
-->
it's created your
-->
replica set, it has created the pod. It's still creating the pod
-->
The pod has got created
-->
So now I'm going to map this pod, which is 10.44.0.2
-->
Sorry, 0.1. Okay, first create this
-->
Run the deployment
-->
And observe does it create your deployment replica set and the pod?
-->
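The deployment being pasted here likely looks something like this (a sketch; the transcript does not show the file, so the names, image, and labels are assumptions — one replica of a web server, labelled so a service can select it later):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 1               # "one copy"
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx          # the service will select on this label
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

Applied with `k apply -f deploy.yaml`, then verified with `k get all`, as dictated below.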
Yeah, Charles, what are you doing? Have you saved the file?
-->
Now you want to say kube... k
-->
Apply
-->
K apply no no no
-->
kubectl
-->
It is kubectl, or k. Use k, you already used the word k before. apply, hyphen f, the file name dot yaml
-->
Hyphen f and the file name, space, file name. Okay
-->
I want to understand, Charles, is your internet slow? Because I see others are faster. Is the internet slow today?
-->
I'm not sure I'm able to see other terminals getting loaded faster. Yours is very slow. I'm not sure fine
-->
Yeah plus dot yaml dot yaml
-->
dot yaml
-->
Enter yeah, now do a k
-->
Type in k space get space all
-->
All
-->
Enter
-->
Yeah, you see now
-->
You see that the pod has got created deployment is running and all you got created, okay
-->
Now i'm going to next step
-->
The next step is I want to map this deployment
-->
To a service i'm going to create a service
-->
You see here. This is a kind
-->
service
-->
Name of the service
-->
And the type is ClusterIP, because I want it for internal use
-->
And this port is for kube-proxy
-->
This is your port on your container
-->
The protocol is TCP
-->
And your service and the pod needs to get mapped and hence you are using labels and selectors
-->
So this is going to map your service to the pod
-->
Which you created using your prompt
-->
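The service manifest being walked through maps to the deployment purely through labels and selectors. A sketch (the name `nginx-svc` appears later in the session; the `app: nginx` label is an assumption and must match whatever labels the deployment's pod template uses):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP           # internal static IP only
  selector:
    app: nginx              # same label as the deployment's pods
  ports:
    - protocol: TCP
      port: 80              # port served by kube-proxy on the service IP
      targetPort: 80        # port on the container
```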
Let's go ahead before we do this we want to delete the previous one we did so now i'll say plus
-->
svc dot yaml
-->
And i will paste this
-->
I'll bound this down
-->
k apply hyphen f and file name you can use the tab
-->
Now we'll check svc
-->
Now I do a describe
-->
Svc and the name is nginx hyphen svc
-->
So now it says this is a service name
-->
In which namespace? default. This is the label in the selector, the type is ClusterIP, single stack IPv4
-->
and this is your
-->
cluster service
-->
So now let's just say
-->
curl 10.109
-->
dot 186, or whatever IP you have. Go ahead and try this
-->
Do the same on all the nodes
-->
Yes next task for you all create
-->
The service and try to reach the service on all the nodes
-->
On master, worker one, and worker two
-->
good
-->
Then we good
-->
On all the nodes. I see Junbi is good, CK also is good. Okay, so it's working
-->
so now
-->
What we now do is we would now
-->
scale up
-->
The question for you all so now if I scale up my deployment
-->
It'll create new pods. Do I need to add them manually to the service?
-->
If you create the scale up and deployment do I need to add them manually
-->
To the service what do you think so for example scale up and deployment
-->
Do I need to add them to service again or would they get added automatically?
-->
Yes. Why? It is because they're using the same label names. Getting it?
-->
Since they use the same label they get mapped. So now scale up
-->
And check in the service now and you'll see that
-->
The new pods which came with these ips even they're getting mapped automatically
-->
So now first time if we hit it the traffic it'll go to this pod
-->
The second time someone hits the service static ip will go to second pod
-->
If someone hits again the same ip will go to third pod
-->
So it is also acting as your internal load balancer. Yep
-->
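The scale-up exercise can be sketched as follows (deployment and service names assumed from this session's files):

```shell
# Scale the deployment up; the new pods carry the same labels,
# so the service picks them up automatically -- no manual mapping
kubectl scale deployment nginx-deploy --replicas=3

# Endpoints now lists one IP per pod; requests to the service IP
# are spread across them round-robin style
kubectl describe svc nginx-svc | grep Endpoints
```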
To next step, please
-->
scale up
-->
And try to reach
-->
You have to reach you cannot see where it's going. You can you want to check the logs of it?
-->
But you can able to see some information here
-->
Yeah
-->
Okay
-->
Are we good any questions? Yeah. Yes
-->
This is internal for example, if you have an application and database
-->
The application database is trying to communicate internally not outside. So for so
-->
But now this is this could be an application
-->
And this could be your database for example here
-->
When the application wants to communicate with this, it will hit this, and it reaches here
-->
Yes, so now if you want to do it now we can again do it
-->
You have to create one more deployment and then you can map it
-->
So you have to create one more service if you want to we can create one more service
-->
The different application try to do it
-->
But this happening internally here. Okay now
-->
Now we'll look into node port method. So now let before that let's delete the existing ones
-->
So delete your deployment and delete your service
-->
Either mention the deployment name, or delete deploy
-->
or deployment
-->
Hyphen hyphen all
-->
And you can do a get all
-->
You'll have only the default service; everything else is gone
-->
Okay
-->
Okay, good fine now let's do the node port now so the node port
-->
The lab 2 yeah, we will deploy again the same one
-->
same deployment
-->
And then we are going to map this to a node port in this now
-->
You see here the service
-->
Type is node port
-->
And you also mention one more node port number
-->
So this will create the cluster service first
-->
It will also map
-->
the node port on top. Once you do this
-->
It gives us cluster service
-->
And it also maps a node port
-->
So traffic comes first from the internet, it comes to port 80 on this, and then reaches your cluster service and then your container
-->
And you can access
-->
Public IP colon the node port number on all the boxes
-->
In the browser
-->
And you will be able to see this page. Yeah
-->
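The lab-2 NodePort service can be sketched like this (the port 31009 matches the number dictated later in the session; the name and selector are assumptions matching the earlier deployment sketch):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80              # cluster service port (still works internally)
      targetPort: 80        # container port
      nodePort: 31009       # exposed on EVERY node's public IP
```

After applying it, `curl http://<any-node-public-ip>:31009` works from anywhere, because the service belongs to the whole cluster, not to one node.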
start
-->
You
-->
We can check it out. You can do a curl, for example do a curl to the cluster service IP
-->
You'll be able to reach it internally, but it also creates a node port on top
-->
Yeah, so now you can do anywhere you can do any any IP wherever you are you can play around so for example
-->
First IP is in this example. So now what I do is
-->
I would copy that IP and
-->
HTTP colon
-->
31009 do the same for all the IPs. So we see that
-->
The service is not part of one node, it is across the cluster. This is what we understand
-->
You can also copy this
-->
And run it on any machine any IP you can do so you can master machine from master
-->
You can try to use a public IP of your worker one or worker two
-->
You can also do anywhere anything any IP
-->
You can also use a public IP
-->
Curl and IP
-->
That's
-->
Wait, one second, Charles
-->
It is curl, not this IP. So do a
-->
k
-->
Get service first
-->
Type in k get svc now
-->
You have to use curl the public IP
-->
Of the master not the private IP public IP not here
-->
The one the IP which is logged in
-->
The IP using which is logged into the machine the master IP
-->
Service IP IP
-->
Public IP any public IP we can try either worker worker three or worker four any IP. No, no one second
-->
One second, hold on. Yeah, press the up arrow, press the up arrow
-->
colon
-->
colon
-->
No, no, no same command at the end of the command
-->
Can you do the same command now, please go back to same command
-->
Use the up arrow
-->
Not this yet. Wait. Colon, press colon
-->
At the end, press colon, shift and colon. Then three, one
-->
zero, zero, nine: the port number, you see the number
-->
the number nine. Okay, press enter. Okay
-->
So what did we do?
-->
Is the IP right? Let me confirm
-->
Charles, are you sure the IP is right? One second. Where is your deployment? Did you delete the deployment? Okay, do
-->
then
-->
You deleted the deployment, correct. So write the deployment file now again
-->
Do a k get deploy
-->
Once again
-->
K get get get k get deploy
-->
k
-->
space get
-->
space, deploy, enter. Yes, so create the deployment first: vi, you see, the deployment file
-->
In your document
-->
Create that, and then the command will work. Clear? Getting it? Run the deployment and then do a curl
-->
The same command will work. Okay clear. Okay
-->
Others, are we clear, everyone? You can curl to any IP. For example, from the master you can put in the public IP of your worker one
-->
It will work
-->
From your worker two you can try to use the public IP of your master, colon 31009; it will work
-->
The service is part of the cluster; it works anywhere
-->
Yes, so what's it now let me say that how it is coming from other node. How is it going? Is there a question?
-->
Yeah, yeah, it's because your cluster the service
-->
Doesn't belong to one node. It is part of a complete cluster getting it
-->
Your service doesn't belong to one node
-->
So this is common across all the nodes of the cluster, so it reaches the master
-->
worker one, worker two, because the service is not part of one node. Are you getting it?
-->
It's for the complete cluster
-->
And hence you'll be able to reach on all the nodes answer question. Yeah. Yeah, so are you able to reach on all the nodes?
-->
Uh, is it uh, who is speaking now? Is this ck? Okay. Okay fine
-->
So this is how we talk about
-->
Now, this is used only for testing. Why, tell me? Can we give a public IP with a port to your customers?
-->
We cannot, getting it?
-->
We do this only for testing, and then on top of this we create a load balancer
-->
So now so how do we do for load balancer?
-->
As a demo we cannot do it because, uh, you need to do lots of work in case you want to have a load balancer
-->
Of amazon to do it
-->
into barium
-->
But I'll just tell you the process; we have to do lots of work
-->
But once you're done
-->
the syntax is this
-->
So instead of the node port type
-->
you update this to LoadBalancer, and this will create a load balancer
-->
Of amazon could be like classic load balancer or you can create application load balancer
-->
If you really want to do this with kubeadm or other methods, then there's another method called the kops method
-->
So you have to deploy your cluster using kops
-->
If you deploy using kops you can get in the load balancers and other ones you can observe it
-->
But this is very very specific to amazon
-->
And this is advanced
-->
Yeah, so you have to use kops method or or eks method you can use it
-->
Yeah
-->
So this is about, uh, services. We talked about what a cluster service is
-->
we talked about node port, and we also talked about the load balancer here
-->
We'll again come back to load balancing using ingress controller in the last topic. Okay, charles. Are you good? Did you create?
-->
Uh the deployment can you do a get k get deploy
-->
K get deploy, okay do ls
-->
Do ls, please. Where is your yaml file of your deployment? Can you do a cat?
-->
cat of that file
-->
cat node dot yaml yes now one second come down
-->
This is service with the deployment file deployment. Can you go to document?
-->
Okay, I'll give it... this is wrong, let me, one second
-->
No, one second. In the chat I'm going to paste a YAML file; copy that to a new file and run it
-->
So in the chat i've pasted the yaml file
-->
Create a new yaml vi
-->
And copy that so vi give some name so you can say deploy dot yaml
-->
Deploy dot yaml
-->
Enter
-->
Yes, I
-->
Copy the YAML which is in the chat. Yeah, paste it. Yes, escape, save this file. Save
-->
Apply, hyphen f
-->
And this file name deploy just type type in your tab you can use tab
-->
Spelling is wrong. No, no, no use the tab key. You have tab tab key on your keyboard tab. Yes enter
-->
Okay, now do a k get service k get svc
-->
Enter so now what you do is
-->
Do a curl
-->
No, no curl c url
-->
Http
-->
Curl
-->
p colon
-->
Double slash and go to the public ip of your vm any vm public ip test it. Yeah
-->
Don't one second. Yeah paste it and then yeah paste it colon 31009. Are we clear?
-->
So you can do a curl of any ip or the other machines also in here or you can also copy that
-->
Yeah, do the other one: do http colon
-->
double slash, any other IP, of your worker one, colon 31009. Are we good?
-->
You can also copy the HTTP URL
-->
Copy the HTTP URL to the browser
-->
to your system's browser
-->
And you can also see the web page. Yeah, copy the HTTP URL, colon
-->
Are we good? So not only this, you will be able to do this on all the public IPs
-->
worker one public IP, worker two public IP, all of them. Are you getting the concept?
-->
Okay, thank you, thank you. Now the next topic is storage. So do you remember this lab which we did, emptyDir?
-->
Do you recollect emptyDir, in which you created a pod and then you went into /var/lib/kubelet/
-->
pods, and once you deleted the pod you lost the data, the memory. Are we clear? Okay
-->
Do you want to do this lab once more? Yeah, do it now if you're doubtful
-->
The concept not clear do it now
-->
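The emptyDir lab pod can be sketched like this (pod name, image, and mount path are assumptions; the transcript only mentions a `test-`something pod and a `cache` volume). The volume lives on whichever node the pod lands on, under `/var/lib/kubelet/pods/<pod-uid>/volumes/...`, and is removed together with the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: cache
          mountPath: /cache     # files written here appear on the node
  volumes:
    - name: cache
      emptyDir: {}              # temporary: deleted with the pod
```

Find the hosting node with `kubectl get pods -o wide`, create a file under the mount on either side, and confirm it appears on the other, as in the walkthrough below.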
So before we start same go ahead and delete your previous resources
-->
So delete the svc, all
-->
And delete the deployment, all
-->
Okay
-->
Do this yourself what I talked about it
-->
So
-->
I want you to do the lab yourself and do this next lab in your storage storage do the lab one
-->
Yeah
-->
Yes, are you able to do it the lab
-->
Yes, Charles, any questions on pods? Yeah, press enter and remove the additional... yeah
-->
Enter. One second, one second: do ls -l. Yeah
-->
Space okay, you have to wait. Uh charles ls. Okay type in ls
-->
Space hyphen l and enter yeah now go to the latest one see the time now
-->
You created a pod
-->
See the 558
-->
Not that one the 558
-->
Yes go inside this
-->
Cd
-->
Type in cd
-->
And just type in d, just type in d and tab. Don't paste it: tab, tab, tab on the keyboard
-->
Okay, okay, and then press enter. Yeah, uh typing clear. I can't see a screen typing clear
-->
Please yeah now do a ls enter cd space volumes
-->
cd space volumes, enter. That's wrong, fine. I think you have gone to a different one. So do a cd dot dot
-->
Did you create that pod, Charles?
-->
Could you tell me what exactly you did before this step? Okay, do a cd dot dot. One second. So now
-->
Can you run a command? Uh, let me give a command: type in kubectl. What is your pod name?
-->
What the pod name it got created?
-->
Do a kubectl get pods
-->
Do a kubectl get pods. kubectl
-->
get pods. Press enter
-->
Okay
-->
Okay, let me give a command you want to paste the command the chat. Okay, i'm pasting it just run that command
-->
Okay, can you copy and paste the command?
-->
Which again a chat enter
-->
Yes, so you're supposed to go into this folder clear
-->
which is
-->
Okay
-->
so now
-->
Do ls -l, ls -l. Now go to this one
-->
Where is this one? One second, one second. Okay, do ls. Uh, I'm not sure. backend
-->
Did you create a pod once again once again? Give me a minute
-->
I don't see this pod
-->
Where is this pod? Okay. Okay. Okay. Once again
-->
Once again, you're making a mistake
-->
Charles
-->
You're supposed to check this on the worker node where it got created
-->
Not on the master here. So can you now do a kubectl get pods hyphen o wide?
-->
Again, run the command: kubectl, k get pods
-->
k
-->
You have a short form, correct? You have k, which you created before
-->
k get pods hyphen o
-->
Hyphen o space wide
-->
Yes
-->
Now which one is going to go to worker four in worker four you're supposed to go to this and search the folder
-->
Are you clear? So in worker four do the same what you did?
-->
And you have the name of the folder clear. Okay others are you clear? So this is a temporary directory
-->
Okay, let's wait
-->
Yes, gene your question
-->
Yes, yeah do ls go to volumes ls
-->
No, this is wrong one the wrong one. Can you now so can you go to the master first?
-->
In the master I give it in the chat one command
-->
So can you paste that command of getting the pod id pod id your pod id is wrong. You're getting confused
-->
So run the command in the chat; I gave a command in the chat. Run the command to get the pod ID
-->
Yeah enter they're supposed to go into this folder in your worker two
-->
E4. Yeah. Yeah. So go to cd dot dot cd dot dot
-->
Once more cd dot dot is now
-->
Now do a cd and say just type e e
-->
Just type in e and tab. Yeah, then the... so e4, e4, c
-->
if type
-->
Okay now
-->
Are you clear?
-->
So when you have confusion about which pod it is, you have to get the pod ID and go inside it
-->
The same now: volumes, first one. Type in e, type in e and tab
-->
Yeah, go inside it cd cache
-->
Do ls
-->
Yeah, now try to create a
-->
Now no files are there correct. So try to create a file
-->
Touch example dot txt
-->
Yeah, now do ls now go inside your container which is mapped and check whether you see that file
-->
In your master, log in to the container. So which path are you supposed to check? You're supposed to check the path of
-->
slash h a 1 you are you so
-->
Ca ch e
-->
Ca ch e
-->
No, no ca
-->
Okay
-->
See yeah, that's it
-->
enter
-->
ls
-->
Now try to create a file in here
-->
Create a file: touch, some file name. Yeah, some other name
-->
dot txt. Now press enter. Now go to the worker node, do ls: you see this now
-->
Are we getting it?
-->
Yeah, yes. Yes, you did you did it the wrong one before
-->
So now now delete the pod and see what happens
-->
In your master come out of this you're in the container now, yes
-->
Delete the pod: delete pod and the pod name, test hyphen something, whatever it is. Yeah, yeah, yeah
-->
Yes, now go to your worker one
-->
and uh
-->
Yeah, that's the node it took here. So on worker two now do ls: you don't see the data. Clear?
-->
The data is lost. So emptyDir is for temporary storage
-->
but if you delete the pod, you will lose the storage. Yeah
-->
Because, you see, when you do get pods hyphen o wide
-->
this is the output of get pods hyphen o wide; it says which worker node. Above that, above that
-->
Above that above that above up up. Yes
-->
Yes. Yes that output is yes, you see yes
-->
No
-->
This is decided by Kubernetes
-->
Thank you. Okay, others, are we good? Okay. So I want to understand: I'm not including one module. Do you also work on scheduling?
-->
Would you schedule the pods yourself, or is it done by someone else? So the question is, we have different, uh,
-->
policies for scheduling
-->
Is it required or not? Do you want to learn about how to schedule different policies? Is that a use case?
-->
So for example you you can say that I want a pod to get created on a specific node
-->
Or on some other node you can write in some policies of scheduling
-->
is that
-->
You require
-->
to be learned, or no? Okay. Can I use a replica set with emptyDir, is that the question?
-->
Yes. Yeah. Yeah. Yes. No, no
-->
It it does the load balancing
-->
It'll create on both the nodes. You can check it now. Let's get let me give a command
-->
Okay, so like like run a command give a small command observe it. Okay, so
-->
You can
-->
run
-->
uh deployment
-->
deployment to
-->
replicas equals 4
-->
And the image equal to httpd
-->
Okay
-->
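The command being dictated likely looks like this (a sketch; the deployment name is an assumption, the image and replica count come from the dictation):

```shell
# Create a deployment with 4 replicas of httpd; by default the
# scheduler spreads the pods across the available worker nodes
kubectl create deployment web --image=httpd --replicas=4

# -o wide shows which node each pod landed on
kubectl get pods -o wide
```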
Now i'm going to get command
-->
Run this command on the master can run this command in the master. I'll show you right away
-->
Whose screen am I seeing? Is it the screen of, uh, Charles? Okay
-->
Okay. Okay. So whose question is now? Uh
-->
Is it doing ck? So secret i've given a command. Can you run the command in the master?
-->
Type it clear first
-->
Please type here. I can't see a screen. Uh down so type it clear and then down the command. Yeah
-->
Okay, now you see here this is going to create four copies of image. Okay enter it now
-->
Now do a k get all
-->
Hyphen o wide. Pods, hyphen o wide. Hyphen
-->
o
-->
wide. Yeah, you see here, it is created on both worker one and worker two, two copies each. Are we good?
-->
It's okay. So do you want to go only on one node? Is that a use case? You're good. Okay. Yeah
-->
Thank you. This is how it works by default
-->
So to understand, my question... your question is: can I mention a specific worker node to create a pod or deployment, correct?
-->
Yeah, yeah, we can
-->
So we can so as a reason talking about do you want to loan those?
-->
So let me give an example of the syntax
-->
So let me
-->
So we have
-->
When you go
-->
We have node name
-->
Method this is syntax
-->
You just need to replace the node name with your worker one. Okay, let me run this
-->
So question was can I create a specific node so I can do it. Yes
-->
node name
-->
dot yaml
-->
And I paste that
-->
So now this is going to get created on which node
-->
Yeah, I modify the node
-->
I'll say worker
-->
So clear
-->
I may modify the name of it. This is the name of the pod, and it is going to be created on worker two
-->
And now I can say to run this
-->
If I say get pods hyphen o wide, it runs on worker two. Clear?
-->
Yeah, this is syntax we can modify the worker to worker two
-->
You cannot create it on the master, because it's tainted. I'll talk about that later if required
-->
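The nodeName syntax shown here can be sketched as follows (pod name and node name are assumptions — substitute your actual worker's name as the instructor says). The scheduler is bypassed and the pod is placed directly on the named node; it cannot land on the master, which is tainted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodename-pod
spec:
  nodeName: worker2         # pin the pod to this specific node
  containers:
    - name: nginx
      image: nginx
```

Verify with `kubectl get pods -o wide` that the pod runs on the named node.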
Okay. Yeah, any other questions? Yes, Charles. Are you good? Now? Let me talk about the other types of storage
-->
The reason we're talking about storage
-->
outside is because if you delete the pod, you lose the data
-->
And hence you might want to have in the containers or the pod data to be outside
-->
So one option is this is temporary you can have in the storage outside
-->
emptyDir is a temporary one
-->
You can define a specific path; you can say that I want to create it in slash tmp
-->
And this is the NFS one, NFS. For example, let's say you want to have this folder
-->
which is outside Kubernetes
-->
It is not in Kubernetes, it is on the outside, and this folder could be shared by both the nodes
-->
one and two
-->
And if I have a pod
-->
And if I have a pod in here both these can share in this folder
-->
So NFS, network file system, is for external sharing
-->
Or we can use a cloud-based method also
-->
So we talked about empty directory
-->
Which we did right now
-->
It gets created when a pod is assigned to a specific node
-->
And when you remove the pod, you lose the data; it is temporary
-->
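The emptyDir volume just described can be sketched like this; the pod and mount names are illustrative, not from the lab files:

```yaml
# Minimal emptyDir sketch: the volume lives only as long as the pod does.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo      # assumed name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: scratch
    emptyDir: {}           # created with the pod, deleted with the pod
```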
What is hostPath? In a hostPath, you can mention either DirectoryOrCreate,
-->
an option in which it gets created on the fly,
-->
or you can mention a specific existing directory
-->
Or it can create a file (FileOrCreate),
-->
or use an existing one (File)
-->
Socket,
-->
character device: all these are supported
-->
So example
-->
This is a pod
-->
I want it to get created at this path; hostPath, you see,
-->
under slash mnt slash html
-->
So this path must be existing
-->
So it will go ahead and create
-->
And this will
-->
Get mapped
-->
So whatever I make changes under this
-->
Path will get updated on the host on the specific worker nodes. Yeah
-->
Let's do this now
-->
Let's do the next one host path lab
-->
Which is your lab 2
-->
So example in this
-->
I'm using
-->
a pod
-->
This is my pod name
-->
And this is the type directory or create
-->
So if this path is not there on the node where the pod gets created, the path will get created automatically
-->
And this path is getting mapped into the container, because here's the name of the volume
-->
I'm using the same name of the volume in my container
-->
So whatever I do in here it gets mapped to this location
-->
Even if you delete the pod, you still have the data. Is it clear? I can also do the same, for example, if you have a file
-->
Or directory I can have two mount points so I can mount a file also
-->
Or directory also
-->
And I'll have two mount points if I make any change in this and this will work. Yes do the lab now everyone
-->
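The two mount points just mentioned, one directory and one file, can be sketched like this. Paths and names are illustrative, not the exact lab file:

```yaml
# Sketch of a pod mounting both a directory and a single file from the host.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo      # assumed name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: site-dir
      mountPath: /usr/share/nginx/html
    - name: one-file
      mountPath: /etc/app/config.txt
  volumes:
  - name: site-dir
    hostPath:
      path: /mnt/html
      type: DirectoryOrCreate   # created on the node if missing
  - name: one-file
    hostPath:
      path: /mnt/config.txt
      type: FileOrCreate        # a single file, created if missing
```

Files written under either mount survive deleting the pod, because they live on the node's filesystem.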
To do it let's get started
-->
So before we start to avoid confusion delete any existing ones
-->
From this part copy this and paste it
-->
So
-->
First we run this
-->
Apply hyphen f hostpath. Now this got created; see on which node it got created
-->
So now I have to go to
-->
And see if you have the folder
-->
Create it
-->
You see a folder is there. Try to create a file, so I can say host dot txt
-->
And now check if able to access it
-->
So
-->
See that file try to create a file in here
-->
And now you see that file in here
-->
Then delete the pod and see the file still existing; it will still be there because of the hostPath method
-->
So play around, then come out of the container and delete the pods
-->
And you will still be able to see the content on the host
-->
So
-->
Yes, please play around; tell us if you need help. Now do a k get pods hyphen o wide
-->
Do a k get pods
-->
Hyphen o wide
-->
Enter
-->
You went out so type in alias
-->
Type in alias. Yeah, set the k alias again, k equals kubectl, then run the command again
-->
Okay
-->
You know
-->
Once again k can you do cat cat?
-->
cat, uh, cat space hostpath
-->
Yeah tab tab
-->
No, no, no after host tab
-->
No space
-->
After host press the tab button
-->
On your keyboard tab, you know what the tab
-->
press it
-->
Enter. Okay. Now
-->
Apply
-->
Please apply
-->
Space
-->
Hyphen f, the file name, tab
-->
Yes
-->
Now get the pods hyphen o wide
-->
Get pods
-->
Space hyphen o wide
-->
And on which node you got created worker three so go to worker three cd
-->
cd
-->
cd forward slash
-->
var forward slash local
-->
Now var
-->
var r forward slash
-->
You can use a tab press the tab forward slash d d before
-->
uh denver denmark dd
-->
D for denmark
-->
Yeah
-->
Now d for denmark d and tab tab see data coming up. Yeah, press enter center
-->
Okay, create a file touch t o u c h touch space
-->
Host just give a name any name any name of the file dot txt
-->
H o s t yeah, something dot txt enter now go to the master login log into the pod
-->
kubectl, k
-->
K exec, space, exec. Log in to the pod
-->
Log into your pod which you created
-->
exec hyphen it
-->
Can't you understand please?
-->
You know the commands your practice
-->
Exec space hyphen it and the pod name and the shell nothing your internet is low. Uh, you want to check at your end?
-->
It's very slow at your end. I give it a command in the chat can run that command
-->
Copy and paste that see you wouldn't copy it. So it's somewhere getting stuck
-->
enter
-->
Okay charge, I want to understand is your system slow is the system internet slow or only the server is slow, okay
-->
Fine now go to cd
-->
Uh, let me give you a path one second the chat will give you okay access this path. No, no, no one second
-->
Sorry, sorry, it's the spelling mistake one. Sorry
-->
Uh, so can you use the up arrow? Use the up arrow, press the up arrow
-->
Yes, now just go to the shared word, use the backspace. Yeah, so there's a spelling mistake, you see, the share spelling. Yeah, s h a r e
-->
a
-->
Enter and press ls. Do you see the file? What do you create?
-->
Yes
-->
Now try to create a file in here do a touch
-->
Give a file name in here touch any other file name not get some some other file name not the same
-->
dot txt enter
-->
Now go to the worker node do ls is a file
-->
here
-->
Now delete the pod go to the master
-->
And do k
-->
k space delete
-->
Space pods type in pods
-->
Space hyphen-hyphen all, yes. Now, why is that going off? I think it's getting stuck or something. Okay, kubectl delete. No, no, no, c t
-->
l
-->
space delete
-->
Space
-->
Pods. Okay. Okay. Sorry, sorry, sorry. You are inside the container. Yes, type in exit
-->
Sorry, sorry, you're in the container. You cannot do it there. Type in exit. Yes
-->
Now delete the pods here
-->
Delete
-->
Space pods hyphen-hyphen all two hyphens
-->
Two hyphens. So now go back to the host now
-->
The worker node ls you see the files clear. Yeah, this is the lab clear
-->
Okay
-->
Yes team. So this is all about how you could create a host path. Are we good?
-->
Okay, do you want the second one yourself with the file?
-->
The second task in which you might mounting both a file and
-->
The folder of the lab yourself the task two same lab task two
-->
Okay, are we good you must be able to see the file getting mounted automatically
-->
Genuine ck gene
-->
Any questions? Okay. Good. That's are we good?
-->
No, no, no. So what is the example? It is the html example; your file path is wrong
-->
So it asked you let me know what you're trying to do
-->
So you created a share
-->
Once again, you created it on the master, but it's not created on
-->
worker
-->
two, correct? No, it is an old one. You've not created it
-->
You've not created it
-->
Are you sure you created it? What is the name of the yaml file you gave?
-->
Can you show the yaml file of this in the master? Run the cat, do a cat of the yaml file. No, no, no
-->
Don't do it like this; use the up arrow
-->
Please use the up arrow
-->
Yeah, which one was it
-->
One second go up again once more up open this file. No, no go down press down arrow
-->
Okay, can you do the lab again do create a new one
-->
Can you create new one?
-->
Hostpath? No, no. Yeah, new dot yaml. Yeah
-->
Before
-->
Let's talk about the next one that task two not this
-->
You did this
-->
So escape, colon
-->
You only completed this, correct? Why do it again? So press escape, then we'll do this: escape, escape, colon q
-->
Q, no, no
-->
fine, escape
-->
Colon q. Okay. Now do a new one, vim
-->
type in vim
-->
Please new type in new dot yaml. Okay enter
-->
Now when you go to the lab document when you go down you see one more yaml file is that
-->
Yes
-->
See on which node it got created
-->
See on which node the pod got created get get get is missing
-->
Okay, go to worker four. Yes, now which path is it supposed to go to?
-->
Which one is supposed to go check your yaml file so it's
-->
var slash local slash data, correct. Go to the location: var, local, a a. cd
-->
forward slash var slash local
-->
forward slash a a a
-->
after
-->
It is not local. Yeah, it is a a not d type in a
-->
Delete d type in a and tap tap press the tab key. Yes, this is the path now
-->
Press enter do ls create create a file do the same thing. You see the file
-->
Yes
-->
Okay, I think good others. Are we good? We'll continue or you need a short break. Okay
-->
I think the time right now for you is
-->
10 46 is it right?
-->
So break till 11 a.m
-->
Hi i'm back
-->
Yes, charles. Okay, who's talking now?
-->
Is charles and whose question is it? Okay charles. Can you hear me now?
-->
Okay, what's your question? I don't understand you. Yeah, okay. What's what's your question now?
-->
What do you get? Can you show me? Okay. It means that
-->
After you create you cannot modify the field
-->
Yes, yes
-->
It'll be a new pod with a different name of the pod
-->
Or you have to manually go inside and modify it. You cannot modify that parameter because the pod has got created
-->
and is mounted
-->
Okay others any questions. Okay, let's proceed now
-->
Now continuing the same storage
-->
We have the next one which is called a storage class or storage profile
-->
Let me talk. What is this? So what is storage class or?
-->
What is called as storage?
-->
profile
-->
sorry class no profile so now
-->
You work on uh, so we work on different environments. So you work on dev in enterprises you work on test
-->
You work on production
-->
You also would be
-->
Requiring for archiving archiving long-term storage
-->
And enterprises we use ssd
-->
Ssd hdd
-->
And magnetic so which one is faster among this ssd which one is faster which disc is faster among these
-->
Is ssd faster hd faster this one is faster this is faster this is the first one
-->
This is the second
-->
and third
-->
Which is costlier?
-->
This is costly this
-->
So now which one would I use for prod?
-->
For live, SSD could be good. Clear? For test workloads and dev
-->
we can use HDD, and for long-term archiving we can use magnetic
-->
Is the diagram making sense good? So
-->
In your kubernetes, we have to create a storage class or profile for each of these
-->
So what does the backend storage team does so they will create a storage class
-->
kind
-->
Storage class and they mentioned the type
-->
Like ssd
-->
And they say size could be 100 gb
-->
So they create something like this for ssd they create something like this for hdd
-->
So say that size is 100 gb
-->
So these are called storage classes or profiles. So we again have one more kind. This is what they do
-->
After this is done what they do is the storage team
-->
They break it into small portions. So they convert this to something like persistent volumes
-->
so persistent
-->
volume one
-->
will be 25 gb. So they create like this:
-->
four parts, four components,
-->
25, 25, 25, 25, which makes 100
-->
Are we clear what a persistent volume is? So how many persistent volumes do we have right now?
-->
As a diagram four, so this is what they do similarly for this also they create
-->
multiple persistent volumes
-->
One two three and four
-->
So persistent volumes are partitioning or splitting up your disk
-->
Into different logical volumes. So this is also called as logical volumes. Okay, so this is done by
-->
The storage team this is this is all done by storage team. We're working on Kubernetes
-->
They create a storage class for different different type of storage and they create the logical volumes
-->
Yeah, now what is next now? Let's consider that you belongs to a devops team
-->
You are a devops team
-->
And you want to use that storage
-->
So we have a storage create in the backend and you want to use that
-->
so your
-->
sme
-->
Or could be your team lead
-->
Of your team
-->
They would create something called as persistent
-->
Volume
-->
claim
-->
They create a claim for the team and then they say that
-->
Give us 50 gb
-->
So they have to create a kind
-->
PersistentVolumeClaim, and they say that I want 50 gb
-->
So this 50 gb is within the team now you are an analyst a developer
-->
And when you create a pod a kind a pod you would refer this
-->
So this will be, like, pvc1 for example; you would refer to that PVC name inside your pod
-->
So this pod requires like 1 gb it is taken from in here
-->
So like this we can create multiple pods or deployments and the storage comes from this storage
-->
So now your team lead can understand how much you are using it
-->
So you're following a process to ask for the storage
-->
similarly
-->
The backend team the storage team they can see who which team asked for the storage
-->
So this is a good process to manage storage
-->
And accountable so tomorrow if someone needs to know which team used the storage
-->
How much the cost all this could be done by using this process you understand the process everyone. Yeah, I repeat
-->
So storage team would create what what all things they do they create the storage profile
-->
And then
-->
They create the PV
-->
The devops team
-->
Would create a PVC, and the analyst would start using that in the pods or deployments
-->
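The static flow just described, the storage team's PV and then the team's PVC, can be sketched like this. Names and sizes follow the whiteboard example (25 GB slices of a 100 GB disk) and are illustrative:

```yaml
# Hedged sketch of the static flow: a PV carved out by the storage team...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 25Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                # backing store is illustrative; could be NFS, cloud, etc.
    path: /mnt/data
---
# ...and the claim the DevOps team creates against it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1               # the name pods will reference
spec:
  volumeName: pv1          # bind explicitly to the PV above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi
```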
This is what a storage class is
-->
Yeah, or the profile we're talking about. The storage class provides a way for administrators to describe the classes of storage they offer
-->
And this is based on different type of
-->
service levels, speed,
-->
disk profiles
-->
So example
-->
Type gp2: in Amazon, gp2 is a type of SSD. So in Amazon we use gp2
-->
We use io1
-->
Or we can use st1
-->
Or we can use sc1. These are different types of what we have. So these two use hdd
-->
And these two use ssd
-->
Yeah, i'll talk about these later as of now just think about what we'll talk about all the things that it is
-->
We'll come back to it later
-->
So in a PV, they create something like this
-->
And after this is done
-->
Yeah, question. Okay, after this is done, you see, this is your PV manifest
-->
Name
-->
This could be local. Local means it is going to take the storage of your Kubernetes
-->
cluster, and the path. And if you want the storage to be accessed by all the nodes...
-->
hostPath provides you one only on the same node
-->
So we can also use an nfs. So nfs means external storage
-->
Which is on this server?
-->
It could be used by multiple ports running on different nodes as we mentioned
-->
And a claim
-->
You mentioned the claim how much you want in the backend?
-->
And this will be used by the team
-->
That is your PVC. For doing the PVC, you can use either a static method or a dynamic method. So in the static method
-->
All this must be done
-->
So static method all this must be done
-->
But this sometimes could be slow because the team
-->
needs to create a PV manually
-->
and then the PVC, all these things. This is slow
-->
so
-->
This is where this is the method is called as static
-->
Now in the dynamic method we could do it quickly
-->
wherein
-->
The type of the storage class which for dynamic must be something called
-->
Immediate
-->
So when you have any storage class with these words, like Immediate, or it could be WaitForFirst
-->
Consumer, it'll have some parameters like this
-->
So when you see something like this, it means that this is a dynamic storage class
-->
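A storage class carrying that dynamic marker can be sketched like this. The class name is made up, and the AWS EBS provisioner is just one example of a provider-specific value:

```yaml
# Sketch of a dynamic storage class; provisioner and parameters vary by provider.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                         # assumed name
provisioner: kubernetes.io/aws-ebs       # provider-specific provisioner
parameters:
  type: gp2                              # SSD-backed class, as in the Amazon example
reclaimPolicy: Retain                    # keep the volume when the claim is deleted
volumeBindingMode: WaitForFirstConsumer  # the "dynamic" marker discussed above
```

`volumeBindingMode` is the field where `Immediate` or `WaitForFirstConsumer` appears.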
Yeah, so what exactly we mean by that so dynamic storage class if your storage profile is dynamic
-->
You can directly go ahead and create something like that. Let me show a small example. You need not create a PV
-->
You need not create a PVC
-->
So directly you would go ahead and mention in your pod
-->
or deployment
-->
directly a template
-->
like this
-->
So this will automatically create a PVC in the back end
-->
It will create a PV in the back end
-->
automatically. We need not go ahead and create a PV, PVC, all these things manually
-->
So it creates your PVC
-->
and creates your PV, all these things, automatically in the back end
-->
That is called as dynamic
-->
So dynamic: you need not create a PVC and PV; just refer the template in your pod or in your deployment
-->
And that will get created automatically
-->
That is called as dynamic
-->
Why do you want dynamic?
-->
Because of the challenges of the manual
-->
And the same in here; I referred to that also. So there must be a storage class
-->
that needs to support dynamic
-->
And then you can refer that template in your pod or or your deployment and that will take in the resources
-->
So for example if it is in cloud
-->
Or very fast then you could practice to use dynamic again when using this we can define some modes
-->
So you can say access it only by one node
-->
Multiple nodes. And what happens, so, for example, your SME created a PVC, and you delete the pod
-->
For example, you had a pod created and later you delete all the pods. So the PVC is something you still have
-->
so either you can give this claim back to
-->
the
-->
The storage team or you can use it for other project
-->
So retain means
-->
If you have the claim still you can use it for a different project
-->
You need not again ask the storage team give me that storage
-->
Delete means it will delete the PVC, the storage, from your team
-->
It will go back to the storage team and the storage team can use it for different other purpose
-->
So if you don't mention the type, it's Retain by default; Delete is something which you need to explicitly mention
-->
in your type
-->
So in here the default reclaim policy is Retain. So
-->
Very brief. I hope you understood now the storage class we talked about,
-->
also called storage profiles
-->
How do you create it? It looks something like this. If you see a word Immediate or WaitForFirstConsumer
-->
it means it is
-->
Dynamic if you don't see these are static methods
-->
They can mention a reclaim policy for the PVC, to retain or delete it
-->
and you can also mention the type and you can also mention the
-->
permissions the dynamic
-->
Or it could be your static
-->
This is your PV
-->
Any questions, so I want you to see some live examples, let's see this
-->
See this link how does it look so you can see this link and observe all these information
-->
Then go down
-->
How do you create all these examples: claim, persistent volume, storage class. Now look for, search for, storage class
-->
the right side left side
-->
Whatever you want to search it you'll get some documentation that you need to mention provisioner
-->
Parameters and the reclaim policy. Storage class, here is one more example
-->
Okay, is this static or dynamic this word class dynamic or static testing?
-->
Please answer
-->
Is it static or dynamic? Mention it in the chat also, please. Who wants to try this?
-->
Charles gene is a static or dynamic dynamic. How do you say that? Yeah, you're right
-->
It is this one. You see this
-->
WaitForFirstConsumer or Immediate: if you see it, it is dynamic. Are you clear? Yes
-->
That's wonderful. So this is an example about retain it means that
-->
If you
-->
It'll go back to... it will be still with your team. So this is how we write a storage class example
-->
So you can mention could be a cloud
-->
Or could be a local local means going to use your storage of your cluster
-->
If you use nfs, for example
-->
We have to refer to the NFS share; it's a different type, so we create again a separate class for that
-->
So for each and every type we create one, so that
-->
Example you work on vmware
-->
And you want to get your Kubernetes there, then you use your vSphere
-->
You create a separate storage class
-->
For the vSphere, which will work for VMware. Yeah, so this would be static, because you don't see the words
-->
Immediate
-->
or WaitForFirstConsumer; you don't see them. This is static. Like this you can create one for every storage
-->
All these are classes; this one is dynamic
-->
Okay, are we good? Okay. So now let's do uh, and so on
-->
access
-->
Model lab three so we're going to do a small example of a local
-->
See, in kubeadm
-->
you don't have any storage class. You have to install plugins and other add-ons. We're going to use a local one right now
-->
So we're going to create a 10 gigabyte PV, and we'll see the information by doing describe and get. Then we create a PVC
-->
For the size, whatever you want. It's static. There's a small challenge
-->
For example, you had 10 gigabytes, but you asked for one; it'll give you the complete storage
-->
That challenge is there in static:
-->
it gives the complete one. At least we'll try to see how you can do it
-->
Then we will create a pod, and in the pod we refer to this PVC name. So refer to the claim name as
-->
myclaim1, or whatever I've given in here. I think we've given here myclaim1
-->
Sorry, this is your one, okay
-->
I create
-->
And it will use the storage
-->
from your PVC
-->
And this will get mapped to the back end in your host path, which you get the first one the first one the lab
-->
In here it will map it to work. This is on the host
-->
And this is getting mapped onto the container
-->
in this location
-->
Let's say go ahead and create a pod and see which node got created
-->
And you still see the data persisted
-->
So your PV is mapped using the hostPath method. Okay
-->
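The pod side of this mapping can be sketched as follows; the pod name and the claim name `myclaim1` follow the lab's naming, but treat them as placeholders to match against your own files:

```yaml
# Sketch of the lab pod: it consumes the claim by name and mounts it into nginx.
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc            # assumed name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # container-side path from the lab
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim1  # must match the PVC created earlier
```

`kubectl describe pod pod-pvc` should then show the claim under the Volumes section.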
Let's start before we get started delete the previous one so that you don't get confused
-->
Go to the master
-->
Delete the pod
-->
And then start the lab
-->
So first create a PV
-->
Do it get describe observe and then the next step
-->
So copy this now
-->
And then vi pv, any name you can give, vi pv dot yaml
-->
And right click paste
-->
Paste
-->
Here
-->
And save this, and now apply: k apply hyphen f pv dot yaml
-->
Okay
-->
Is it done what is not okay, let me check your screen and type it clear type clear
-->
Yeah
-->
Okay, which yaml file is this
-->
Can you show the yaml file of this?
-->
I think because of the syntax one second the small error
-->
I can delete this so syntax is wrong. I think some confusion one second
-->
Let me give you the updated one bigger syntax give me a minute
-->
Can we replace that yaml with this replace the yaml with this small?
-->
Change and now try to use this yaml file delete or create a new one
-->
and confirm
-->
Again elements like this
-->
Okay
-->
Getting error
-->
Give a different name for the PV; you already had one
-->
Okay, modify. I think you already had the previous one, so give pv2
-->
What do you have that
-->
Already had one before okay, I think you can use the existing one also I didn't want them all before does it work
-->
Yeah, good the one which I gave you now it is working
-->
Is dean are you able to create
-->
Now one second, uh, can you tell me what you're trying to do now? Yeah. Okay for which for which one?
-->
Yeah, you can what he can use it try to try to apply define fine. Okay done
-->
Do, yeah, the next command: do a get pv. Okay, type in clear right here
-->
K get pv
-->
Do you want to apply it? Apply it again, enter. Yeah
-->
Do k get pv. Yeah, good. Okay, for the next step now, the next step, any question you have
-->
Yeah, the next step now what you do is
-->
Go ahead
-->
and
-->
create a PVC
-->
Okay
-->
Fine
-->
So we have a pvc dot yaml
-->
Insert mode, and paste that yaml file. We mentioned the volume name; I'm using pv2
-->
So I'll modify that at my end; you can use pv1 itself
-->
And
-->
I paste it in here. You can modify the volume, for example make it pv2 or pv1. Play around
-->
and create
-->
a PVC
-->
Yes, is it bound? You've got Bound
-->
So you see a PVC now, it's bound. You can again do get pv; you can see which claim is using it
-->
Do a get pvc
-->
After you apply the PVC, do a get pvc, also do a get pv, and you can see. So now the storage team can see
-->
Which claim is using it?
-->
Are we clear
-->
Okay, good. So now do a get pv. Okay, good. So now let's create a pod and we'll use the claim in the pod
-->
Next step is go down create a pod and you're referring the claim name create a pod
-->
Dot yaml insert mode
-->
And you paste it
-->
Now you want to save this
-->
Apply hyphen f pod hyphen pvc dot yaml. So the pod got created
-->
Yeah
-->
After the pod got created do a describe pod and see whether do you see the details of the volume
-->
Okay, you can do one more step
-->
Do a describe
-->
Pod
-->
And give the pod name
-->
Just to confirm it is properly created with the volume
-->
And when you see this in describe pod you see this
-->
This is what we're talking about. You see it is using your
-->
PV claim; you see this, everyone?
-->
Confirmed
-->
After I create the PVs, this is... so it means that this pod is using this PV
-->
That is very confirmed
-->
Yeah, so now go ahead and check the mapping so the pod what you created the container
-->
It would have got mapped with which location?
-->
For this one the starting one which you created tmp data 01
-->
So see on which node it got created
-->
Go to that location for this this one and see the mapping to the container. So now what you do is
-->
Yours got created on worker two; for me too. So I'll go to worker two
-->
And I go to cd slash tmp
-->
The folder
-->
So this is what is getting mapped so i'll now create a file
-->
See on which node got created and go to the location where the tmp data 01
-->
Which you see under the path in the first yaml file try to create a file
-->
And check if we're able to map it to the container
-->
Which is under slash usr slash share slash nginx slash html
-->
Yep now go to master
-->
So
-->
So that's fine now create a file and say container dot txt
-->
And now you go to the worker node you do ls you see the path is coming in
-->
Are we good
-->
everyone
-->
Junvi ck charles
-->
Okay, okay same delete the pod and see it will be there because we're talking about host path
-->
So this is a simple example of how we can use
-->
static volumes
-->
And the storage it is using is within your cluster you say content still there, okay
-->
so this is
-->
all about
-->
your storage, and we talked about PV, PVC and these. Once you're done, delete the pod
-->
and the PVC
-->
And delete the pb
-->
Next one, so as of now do you work on microservices
-->
Do you work on microservices or containers in your products are you working on these
-->
In your projects, okay. Have you heard about this 12 factor?
-->
I've heard about this
-->
Let me explain the 12 factor app is the framework
-->
So when you work with containers and kubernetes or any developers for example
-->
Or application we use some frameworks
-->
So 12 factor app is a framework which you want to use when you work with microservices
-->
Let's understand what they say first one they say is use declarative formats
-->
It means you always write it like code, like a yaml. Do not just use the imperative method, the command line
-->
So that it is reusable
-->
Second one have a clean contract
-->
It means that whatever you create applications
-->
It must be runnable on any operating system in the future third one
-->
Whatever you create right now on premises it must also run on the cloud tomorrow
-->
So you have to carefully plan that if you're doing some work on on premises
-->
The same what you do must also be runnable on your cloud
-->
Fourth one
-->
When you work on different environments, there has been very minimal changes. For example, if i'm using a yaml in my dev environment
-->
If I want to create a test environment I must not make much changes in my configurations
-->
I'm talking the template talking about using templates
-->
And anytime you must be able to work scale up your workloads
-->
Yeah, so now this is what we are doing it right now. We are doing it declarative
-->
We are following clean contract
-->
We are doing all these in your kubernetes
-->
And apart from these they also say that you want to code base. It means starting in version control like github
-->
dependencies configurations
-->
Now let's go to config
-->
And see what is this
-->
So under config what they say
-->
Do not mention the configurations in the same file where you have the code
-->
So what do they mean by this?
-->
So for example tomorrow you want to create a mysql
-->
Give an example. Let's say you want to create a mysql pod
-->
You want to create a mysql
-->
pod
-->
So for mysql you might want to have in the information of
-->
username
-->
Password
-->
And could be some urls
-->
of
-->
Database you do not want to put it in the main type called as a pod
-->
Don't put this information this file
-->
You want to put this information of username for example in a kind called as config map
-->
So you put this
-->
username and URLs in this, and then once you log into the pod you can get the information
-->
So we're separating
-->
Your configurations usernames passwords from the main file
-->
That's what i'm talking about here
-->
So now saying this will let us look into config maps. So config map is what is going to provide you to separate
-->
Your configuration from the main pod file
-->
And secrets is what you're going to separate your passwords and private keys other tech other information very sensitive
-->
You don't want to put in the main code. You're going to put your secrets
-->
So
-->
So what i'm saying here is you create a basic pod just pod with very minimal information
-->
And all the configurations would be in a kind called as config map and when you log into the pod
-->
The pod you get the information of these
-->
either mounted as env variables
-->
Or a volume
-->
And you can start doing your work
-->
Yeah
-->
So let's see this
-->
I'll give a simple example. I want you to let me know this is clear. So this is my config map
-->
In my config map
-->
I'm saying a database name on the url and I create it or apply the details
-->
I describe it
-->
Then I want to use this in my pod
-->
So I would say time pod
-->
name, and say envFrom
-->
So whatever data was there in the configuration map, it will get mapped to the env. I apply it
-->
And then I will log into the pod and type in env
-->
You can see the information that was there in the config map. Clear? Did you understand this? No?
-->
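The ConfigMap-to-env flow just walked through can be sketched like this; the map name and keys are illustrative, not the exact lab values:

```yaml
# Sketch of a ConfigMap holding non-sensitive settings...
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config         # assumed name
data:
  DB_NAME: mydb
  DB_URL: mysql://db.example.local:3306
---
# ...and a pod that imports every key as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config   # each key becomes an env variable in the container
```

After `kubectl exec -it config-demo -- env` you should see DB_NAME and DB_URL in the output.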
Yeah
-->
So this is this is one method there is one challenge in this method the challenge in here is
-->
Tomorrow I can go and update the configurations
-->
I can do apply
-->
But these changes and then you can again do apply for this part
-->
The changes don't reflect in the env, because the env is something like hard-bound; they get mounted
-->
It's quite difficult to unmount it
-->
And env doesn't let you save huge data; it just gives you some lines
-->
There are limits on env; you cannot mount very big files
-->
So for that purpose what you can do is you want to mount as a volume
-->
So what you now do is, instead of using the env,
-->
you can say that I want to mount the config map as a volume. This is a volume; this is the volume name
-->
This is the volume name, and you would refer to it; the contents of this would get mapped, mounted
-->
And these contents would be mapped and contained in slash etc slash config
-->
so now
-->
You apply it
-->
You go ahead
-->
And make some changes and you'll be able to see the changes have updating
-->
And in the container, you go inside and you see the contents, what was in the config map
-->
So this is how we can separate the config the config maps are meant for non-sensitive data
-->
So non-sensitive data means like usernames could be public keys
-->
Could be some urls
-->
For those type of things we can use config maps
-->
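The volume-mount variant just described can be sketched as follows, reusing the same hypothetical `app-config` map; later edits to the map are reflected in the mounted files after a short delay, unlike the env approach:

```yaml
# Sketch of mounting a ConfigMap as a volume under /etc/config.
apiVersion: v1
kind: Pod
metadata:
  name: config-volume-demo   # assumed name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config
  volumes:
  - name: config-vol
    configMap:
      name: app-config       # each key appears as a file inside /etc/config
```

Inside the container, `ls /etc/config` shows one file per key, and `cat` shows the values.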
If any sensitive information is there we have to do very similarly we use secrets
-->
Yeah, so let's start the lab now. So config maps and secret the first lab we're going to do now
-->
Yeah now apply it
-->
You
-->
And you see it has two
-->
If you can't see this, you could do a describe. Do a describe of
-->
Config map and then the name of the config map