Certified Kubernetes Administrator - Exam Preparation


…so that we have a unified view of all participant machines; if you're facing any challenges, it will be easy for us to troubleshoot together. That's the whole idea, so if I'm not seeing any busy hands here, I will simply proceed to the next set of topics instead of asking everyone; it's going to help us a lot this way. Yeah. You will see multiple tabs here: Home, Desktop, and Training Room. Click on that tab and then click on Join Training Room.
Correct. Okay, first of all: yes, it is possible, provided, for example, my container already has bash installed in it, which means the base image libraries you have chosen include bash; if not, you will at least have some kind of basic shell. So you can get into it, and yes, you can modify any files, configuration files included. But it is not recommended for production. In production it should happen through a ConfigMap: you first modify the ConfigMap and then restart the pod, so that the restart picks up the latest configuration. For some urgent-fix purposes you can do it, but by all means, in Kubernetes it's not recommended, because everything is completely dynamic with respect to pods: at any time Kubernetes can kill the pod whose configuration you just modified and recreate it, create a replacement pod. So the best practice is: modify the ConfigMap, restart your deployment and thereby the pod, and it will pick up the latest configuration. But nothing will stop you from getting a shell inside and modifying it, provided you have root permissions, or at least file modification permissions for those files. Yep, welcome.
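For reference, a hedged sketch of both approaches (the pod, ConfigMap, and deployment names are illustrative):

    # get a shell inside a running container (works only if the image ships a shell)
    kubectl exec -it my-pod -- /bin/bash

    # recommended flow instead: change the ConfigMap, then restart the workload
    kubectl edit configmap my-app-config
    kubectl rollout restart deployment my-app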
Four more minutes to complete this activity.

kubectl config get-contexts. One main constraint with respect to the examination is time: you will have really limited time to solve all the questions they pose, so we need to be really fast when it comes to executing the commands and so on. Keep that in mind. For that, these basic commands need to be part of our muscle memory; only then will we easily know which command to run and where to look. The problems themselves will be fairly easy during the examination, but whether you can solve them within the specified time is where most students fail: the time constraint. So during our hands-on time I will also put you under a time constraint, so that we learn to work a bit faster. Last two minutes, please, to complete this activity.

Okay, that's a good question. That's exactly what we are going to do: write the pod specification using a declarative YAML file, wherein you can put specifications for more containers. Imperatively, I mean using the CLI, it's a bit complex; we are going to achieve it with a YAML file. Yeah, we are going to discuss that now. Right, yeah, no problem.
All right, you can stop here, because we've already used the time allocated for that hands-on. So let's continue on the pod topic; the pod topic is not over, and we are yet to discuss many things within it. What did we do? We created a pod using the run command, and we learned the lifecycle commands, right? Now, if I go back to the command where I created the pod using the run command: in general, we are posting this to the API server, the Kubernetes API server. kubectl generates a YAML document and posts it to the REST API; for this command, an equivalent YAML file is generated. So you can actually write your own YAML file, putting together all the specifications. Here we are specifying the image, and maybe there are some other flags, like the name, that you can specify; there are many configurations you can set, but you are limited in what you can provide via the CLI, whereas if you write a YAML file you can configure many more properties. Once you are done with your YAML file, you can apply the configuration to the cluster on your own. For that, you can start writing the YAML file from scratch; that is one way. But if you don't want to start from scratch and would rather build on top of some base, yes, you can do that too. All you need to do is run the same command and include the dry-run flag: I want to dry-run this on the client and output the YAML. You can also output JSON; that doesn't make a difference, the content is the same. What it means is that you are requesting the kubectl client to generate the YAML equivalent of this command and the options you provided. kubectl won't submit the request to the server; this runs on the client side, and the YAML is printed for you, so you can copy it into your own file, add additional configurations, and then apply it yourself. For example, I can write this output to a file, my-first-pod.yaml.
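For reference, a sketch of that command (the pod name and image are illustrative):

    # generate the YAML for a pod on the client side, without creating it
    kubectl run my-first-pod --image=nginx --dry-run=client -o yaml > my-first-pod.yaml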
Okay, so that file already has these contents; you can simply open it and make whatever modifications you want. This is also the place where you can add more containers: containers is an array. I hope you are all familiar with YAML formatting, right? Two spaces of indentation per level; under a label we indent two spaces and then give its specification. In the same way, if it is an array, each entry starts with a hyphen: this is the first entry of the array, and you can define one more, and one more, with each entry being the specification for an individual container. So once you have the YAML file ready, you can make some updates if you want, and after the update you simply apply it to the cluster. In the YAML file you can see apiVersion at the top, and the kind is specified as Pod, because you are going to create a Pod resource; then some metadata about the pod, the name and the labels; then the spec section, and under containers the specification for your own container, plus some defaults that were included.
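For reference, a minimal sketch of what that generated file might look like (the name and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-first-pod
      labels:
        run: my-first-pod
    spec:
      containers:
      - name: my-first-pod       # each '-' entry is one container's specification
        image: nginx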
With this file, you can apply it by using the create command and specifying the name of the file, my-first-pod.yaml; that's it. If I press Enter, this is going to create the pod. Because this is the first time we are creating the resource, we use the create command. If the resource already exists and you are modifying some property, you can use the apply command; there are some properties you can modify where apply will take effect, but there are many properties where, if you modify them, you need to delete and recreate the resource. Apply will also create the resource: if no pod with that name is running, it will do the create as well. Okay, let me delete that pod; I think I already have a pod with that same name... no, it's the name of the file, and the same pod you created earlier imperatively. Okay, so that is defining and creating a pod from a YAML file.
That is number one. Number two: now if you go to the CKA Kubernetes working files, we are going to demonstrate a couple of concepts with the files in this directory. Let me open a shell from here. Come on. Okay, a lot better now. For this demo's sake we are going to use one application. This YAML file is similar to the one we just created for our nginx application. As you can see, the kind is Pod and the name of the pod is card, and this time it is based on another image. It has the container specification: the name is card, the port inside the container is 8080, and the protocol is TCP. The name we gave the port, http, doesn't matter; even if you keep it, it's going to work. It's a minimal pod configuration, like the one we created for nginx. If we want to apply it, we know how to do that: apply the file. If you want to check whether it is created and running: get pods, and the pod is running. And if you want to access it, you know how to do that too: kubectl port-forward, the name of the pod, any port number on the host machine, and then the port within the container; go to the browser.
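For reference, a sketch of those commands (the file name is illustrative; the pod name and port follow the demo):

    kubectl apply -f card-pod.yaml        # file name is illustrative
    kubectl get pods
    kubectl port-forward card 8080:8080   # host-port:container-port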
Okay, yes, it's a simple React web application that we will use for demo purposes. Fine, we've deployed this application. Now, on top of this specification, we are going to add more configurations, more specifications.
The first one we are going to add is related to health checks. What does a health check mean? Let's say I have a container, and this container is running my application process. A lot of requests are coming in; your application is processing them and sending success responses back to the callers. Now here comes one request, and while processing it, something goes wrong in your application code and the application process exits. Because it is the only process running inside the container, the parent process, once it exits the container has nothing to execute, so the container also moves to the Exited state: the process moves to exited, the container moves to Exited. For scenarios like this, the platform, the kubelet component, or Kubernetes in general, will react, because the container status is Exited: Kubernetes will try to restart the pod, or sometimes it will create a replacement pod. Basically, the platform takes an action when the container has exited.

But take the same setup: it was working until a request came in, and while processing this request your application went into some kind of stalled mode, or some kind of deadlock situation, where the process has not exited. Because the process is still running, the container status is shown as Running, and Kubernetes of course won't take any action, because it's already in the Running state. But every request that arrives after this one gets some kind of error response; the user is seeing errors, because this service is no longer returning responses; all the requests are timing out, because one of the previous requests put the application into a stalled mode. We really want Kubernetes to take an action in this scenario, but Kubernetes doesn't care about it, because it looks only at the container status, which is Running. The reason it shows Running is that the parent process is running; but in reality the application is unhealthy, not in a position to serve requests. We want Kubernetes to take an action if a scenario like this happens, and that's where the concept of a health check comes into the picture.

So instead of relying only on the container status, we are going to take the health check close to the business code. Which means: if this is your application, exposing many business endpoints, say a product endpoint and a search endpoint, then in addition to those, your developer will write code to expose an endpoint called /health, and calling this endpoint will simply return some sort of success code; nothing more than that. If a request is made to it, the response is a 200. As simple as that. So what Lewis will do now, while submitting the pod YAML file, is include a section called livenessProbe, where he specifies this information: Kubernetes, this is the endpoint; call it every 30 seconds and expect a successful response; if you get a failure response three consecutive times, that is the failure threshold, then consider the application unhealthy and consider the liveness probe failed. Kubernetes is already calling your health endpoint every 30 seconds, so if a scenario like this happens, the next calls get error responses, the threshold is met, and at that point Kubernetes understands that this particular container has become unhealthy. If the liveness probe fails, the action Kubernetes takes is restarting the pod, assuming a restart gives a fresh start of the application and everything should be working fine afterwards. So this is the exact scenario we want to address with the health check, and we configure it as part of the pod specification with the livenessProbe, where you specify all these inputs; once you specify it, Kubernetes is going to perform that probe.
So, if I open the file where we have the liveness probe configured... okay, I think it will be better if I increase the font size. Okay. The only new addition compared to the previous YAML file is this section, the livenessProbe. It's for an HTTP endpoint, /healthy, and periodSeconds is 10, which means every 10 seconds Kubernetes will call this endpoint and expect a success response. The failureThreshold is 3, which means if Kubernetes receives a failure response from this endpoint three consecutive times, that's when it declares, or assumes, that this pod is unhealthy. We also have a timeoutSeconds configuration, which means that if the call to the endpoint times out, taking more than one second, Kubernetes also counts that as a failure. initialDelaySeconds means that when the container spins up, Kubernetes waits five seconds before starting the health check; after those five seconds, it calls the endpoint every ten seconds. Right, so those are the configurations.
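For reference, a sketch of that probe section as described (the path and port follow the demo):

    livenessProbe:
      httpGet:
        path: /healthy
        port: 8080
      initialDelaySeconds: 5   # wait before the first probe after startup
      periodSeconds: 10        # probe every 10 seconds
      timeoutSeconds: 1        # a call slower than 1s counts as a failure
      failureThreshold: 3      # three consecutive failures mark it unhealthy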
Now you can see this in action. I'm going to delete the pod I created earlier, because the pod I'm going to create now, from the card pod health file, has the same name. kubectl get pods: you can see it. Now I'm going to do the port-forward to access the application; let me refresh this. Okay, we're already here. Just to demonstrate, we are going to simulate the scenario where the liveness probe fails, and see how Kubernetes acts. Kubernetes is already calling this application on the /healthy endpoint every ten seconds, and every time it calls, the application prints the call in a table. You can see it has already been called, and the application sent a 200 response back to Kubernetes. Now we are going to intentionally send failure responses: fail for the next three calls, because three is the threshold we set. If I click on three, we send a 500; the next call again returns an error code; and the third time, the threshold is hit and the container is restarted. As you can see, the pod has already restarted; it's waiting for the initial delay of five seconds, and after that you will see the calls from Kubernetes again, continuing every ten seconds.

Here we simulated the behavior where the call to the health endpoint failed. If this happened for real, if all the business endpoints became inaccessible, then Kubernetes would not be able to access the health endpoint either. Which means we are taking the liveness check near to your application code, instead of relying only on the container status or the process status; we take it near to the app code. That's the whole logic. If I go back here, you can even see this information in the describe command. If I do kubectl get pods, you can see the pod has already restarted one time because of its liveness probe; if you want the details, look at the Events section. I think this is the card pod, or the my-pod one. Here you can see it: liveness probe failed with status code 500, a warning of Unhealthy, and then the pod was restarted.
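For reference, the inspection commands used here (the pod name follows the demo):

    kubectl get pods            # the RESTARTS column shows the restart count
    kubectl describe pod card   # the Events section shows the Unhealthy warnings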
Okay, so the liveness probe failed and the container was restarted. The learning from here: what a liveness probe is, and why we need it, this scenario explains; how to configure it, you have already seen in a YAML file; and if the liveness probe fails, the action is a restart. That's it; nothing complex, just a plain restart of the container. Now, about this liveness probe: as you can see, our application exposed a /health API endpoint, isn't it? What if your application is not an API but some kind of worker process that doesn't expose any API? For those cases we have different options under the liveness probe. We have the HTTP probe, the one we just saw, where we expose an endpoint; we also have the TCP probe; and we also have the exec probe. TCP means you probe a specific port number: as long as the port is reachable, the container is considered healthy; if the port is not reachable, it hits the unhealthy scenario. Exec means you want to execute some command, and that command execution should return a success or failure code: conventionally 0 is the success code and any non-zero value is a failure code. The command is executed periodically, say every 10 seconds: as long as you get the success code, it's healthy; a failure code means unhealthy. You go for these kinds of probes where exposing an HTTP REST API endpoint is not possible.

You can see here this example uses an exec probe. What this pod does: at startup it creates a file, /tmp/healthy, then sleeps for 30 seconds, then removes the file and sleeps for a long period. The probe runs cat /tmp/healthy every five seconds: if the file cannot be found, cat returns a failure code, that's the failure scenario, and the pod is restarted; as long as the file is found, nothing happens, meaning the application is healthy. A simple command. This depends purely on your application: your application team should know when to declare the application unhealthy and when to declare it healthy, and based on that they will write the exec command here. That is the exec probe; the HTTP probe is what we just saw; and the last one is the TCP probe, where you just specify the port number. The kubelet will attempt to open a socket to your container on the specified port: if it can establish a connection, it considers the container healthy; if not, unhealthy. Three types: based on the application, you can configure an HTTP probe, an exec probe, or a TCP probe.
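For reference, sketches of those two variants (the exec example follows the well-known /tmp/healthy pattern described above; the port is illustrative):

    # exec probe: healthy only while /tmp/healthy exists
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5

    # TCP probe: healthy as long as a socket can be opened on the port
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10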
So that is the liveness probe, and if it fails, the action is restarting the container. Similar to the liveness probe, we have one more, called the readiness probe, and it serves a different purpose. What purpose? Let's say I'm deploying nginx with five replicas, which means five pods will be running, and you have another service that wants to call my nginx application. You have five instances, so each request needs to be load-balanced to one of the five, so that one pod processes it and sends the response back to the caller, isn't it? For that, we create a Service resource in Kubernetes, which we will discuss in a moment; that is the component that load-balances across these five. In the readiness probe you specify a configuration similar to the liveness probe; maybe you expose an endpoint called /ready and specify that it should be called every 10 seconds. Based on that, if a container passes the readiness probe, it is set to the Ready state; if it fails, it is set to the NotReady state. When a request comes in, the Service considers only the replicas that are in the Ready state, say one, two, three, and four, and load-balances only across those four; the fifth is simply excluded because it failed the readiness probe and is NotReady, so it is not considered for load balancing. Maybe a moment later, on the next calls, it passes the readiness probe; then, when another request comes in, that replica is considered again, because it has transitioned back to Ready.

You can have some logic here to determine when your application is ready. Maybe it's still processing previous requests, or it doesn't have a sufficient thread pool to take on new ones, so you may want to fail the readiness probe so that no new requests arrive; you process the ones already in flight, and once they complete, the readiness probe succeeds again and the pod receives more requests. So: when a liveness probe fails, that is a different case, and the pod is restarted; when a readiness probe fails, the pod simply isn't considered for load balancing, that's it; once it transitions back to Ready, it is considered again. That's what you can see under the readiness probe; I can show it, yeah, this one. The same options apply for readiness: you can specify an exec or HTTP probe; here it is configured similarly to the liveness probe, and the only difference is that instead of livenessProbe you use readinessProbe. In our example, you can see the readiness probe uses the same configuration, just calling another endpoint; that endpoint may have logic to tell whether the application is in a position to accept new requests or is still busy processing the existing ones. The good thing is that for both liveness and readiness probes there are frameworks and libraries that take care of exposing these endpoints, with the logic to say ready or not ready, or your developer can take more control and write their own logic by implementing the endpoint.
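For reference, a sketch of a readiness probe as described (the /ready path is illustrative):

    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 10      # checked every 10 seconds
      failureThreshold: 3    # a failing pod is removed from load balancing, not restarted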
But by all means, we are now clear on these two probes, their purpose, and what happens if they fail. Any questions so far on this? Any question, or is it clear? Okay, good.
So, before trying the hands-on for this part: in the simple pod specification we have now learned how to add the two probes, readiness and liveness. Now, to the same pod specification we are going to add a few more configurations, which are very, very important. One: let's say we have Lewis, and he is a beginner, a trainee. He wrote an application and deployed it into the Kubernetes cluster, which has three nodes, n1, n2, and n3, and his application is running somewhere here on node 1. As administrator, Hermos is watching this application, and he finds out that it has a problem: an hour ago it was using 0.5 CPU and 256 MB of memory; now it is using 1 CPU and 512 MB of memory, and over time it has started using more and more resources, even though the application is sitting idle, or receiving and processing only a few requests. It keeps consuming more resources and never releases them back to the pool once it's done processing, which means this application has some serious memory leak issues. Over time this one pod may eat all the resources, leaving all the other containers, the other pods, starving for resources. Clearly, as an administrator, I don't want that scenario; if something like this starts happening, I want Kubernetes to take some action. For that, you need to educate Kubernetes: while submitting the pod specification, you need to provide some information. What kind of information? Resource limits. You set limits for CPU and for memory. As the developer of this application, you must know these values, because you must thoroughly performance-test and load-test your application, benchmark it, and come up with values for the CPU and memory limits: the maximum CPU and memory your application will consume. Let's say my application won't consume more than 1.5 CPU and no more than 512 MB of memory; that is the limit you put in the pod specification. Once you apply it, Kubernetes has this information and keeps an eye on your container. If the container starts consuming more than the specified limit, Kubernetes decides something is going wrong with it and takes an action: the action can be a restart, or sometimes it deletes the container and creates a replacement. So you never get into the scenario of one container eating all the resources and leaving the others starving. That is the resource limit, the maximum.
So that was the case with Lewis, the beginner: he did something wrong, got this advice from Hermos, the cluster administrator, and has now deployed the application with this input. All good; Hermos is happy, Lewis is happy. Now we have another case. This time we have Jasper, and he is an expert: he wrote some kind of machine learning algorithm, and he wants to run the application in the cluster with node 1, node 2, and node 3. He submits the request, and the container, the pod, is scheduled to run on node 1, but the pod is not starting up; it never shows Running status, it's in some error status. Upon analysis, Jasper finds out that for this container to start up and work, it requires a minimum of 0.5 CPU and 256 MB of memory, but this node doesn't have those resources: say it has only 100 MB of memory free, or only 0.2 CPU. So this particular application has a minimum resource requirement, on both CPU and memory, for it to work. How will Kubernetes know that? There is no way Kubernetes can know unless you provide that information. As a developer you should know it, and you provide it to Kubernetes so that the scheduler considers it while scheduling: the scheduler looks into your pod specification, understands the minimum requirement, and won't consider n1, because n1 doesn't have those resources; maybe it considers n2 or n3, whichever has enough resources to run the container. That is the minimum resource requirement your application needs in order to work, and you specify it under resource requests: 0.5 CPU, 256 MB of memory. Resource requests and resource limits: now you know why we need those two, and here you can see them both.
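For reference, a sketch of that resources section using the values from these two stories:

    resources:
      requests:
        cpu: "500m"       # 0.5 CPU: the minimum the scheduler must find on a node
        memory: "256Mi"
      limits:
        cpu: "1500m"      # 1.5 CPU: the maximum before Kubernetes intervenes
        memory: "512Mi"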
As I mentioned, coming up with these values is a time-consuming activity: developers need to perform load testing and performance testing to benchmark them. But if you are running the applications on the cloud, we have something called VPA, the Vertical Pod Autoscaler. What it means: if you configure a VPA for your application pod, the VPA observes the CPU and memory utilization of your application container, it observes the historical values, and it recommends the requests and limits for CPU and memory. This VPA component recommends what to put here, so instead of you performing a manual benchmarking activity, it observes your application over a period of time and, based on that, says: hey, set this limit to this value, and this request to that value. It also provides a capability to set the values automatically: it can recommend, and if needed you update them yourself, or, if you configure it a bit further, the VPA automatically sets the requests and limits for both CPU and memory, from time to time. Both options are possible with VPA. Why do I need this? Because it's very important for you to right-size your application: it should have enough resources to work, and it shouldn't be eating more resources than it requires. We need to right-size the applications in the cluster; that is what matters most. For every specification submitted to the cluster, you can demand, or mandate, that the application teams include this information; if not, you can simply reject the request at the admission controller level, rejecting anything that arrives without resource requests and limits. And you can even automate the sizing: if you are in the cloud, you can make use of VPA, where it recommends the values, or you configure it so that it sets them automatically. How does it do that? It has access to your application's metrics, and it works from the historical values.
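For reference, a minimal sketch of a VPA object, assuming the VPA components are installed in the cluster (names are illustrative):

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: my-app-vpa
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      updatePolicy:
        updateMode: "Off"   # "Off": recommend only; "Auto": apply values automatically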
I hope it is clear; please let me know if you have any questions before we go for a lunch break. I'm going to stop here: if you have any questions, I will address them; if not, we can go for lunch. After the lunch break I will give you some time to try this, no worries. Is it clear? Any questions? Perfect: vertical pod autoscaling, all right. The time is now 12:20, so let's take a 45-minute lunch break and be back by 1:05 p.m. Thank you, thank you for listening.
Welcome back, welcome back. Please raise your hands in Teams if you are back at your desk; just a quick attendance check. Okay, perfect, good, we got everyone back. I hope you had a great lunch. Now it's time to continue our discussion, and I'm going to increase the pace of the course a little bit. Fine. What we completed before the lunch break: we now know how to create a pod using a declarative YAML file, creating a pod from YAML, and we also learned a couple of probes with different ways to implement them. (I think I'm using the wrong pen; it should be this one.) Under pods, we learned how to create one from YAML; under probes, we covered the different types, the HTTP probe, the exec probe, and the TCP probe, and the two different probes we discussed, liveness and readiness: if the liveness probe fails, the action is a restart; if the readiness probe fails, the action is to not consider that specific pod for load balancing. Then, from the resources perspective, we learned the importance of both limits and requests, which apply to both CPU and memory, and in line with that we also learned about vertical pod autoscaling, to automatically set values for them or to get a recommendation for the values to set on the limits and requests. Correct? Those are all the things we discussed just before our lunch break. There are a couple of other configurations; let's talk about those, and then I will give some time for the hands-on so that you can try it all at once in one single example.
Right. So we know we create containers from images; the image acts as a template, which means when you create a container, it gets its own file system; every container has its own file system. If you have container two, it has its own file system, isn't it? Now let's say we have Velocity, and she's trying to spin up a customer database. She chooses a Postgres image, she runs a container, a pod, based on that image, and it's running a database. There are many applications writing data to, and reading data from, this database, so this Postgres process stores all the data in its file system: say there is a directory called data within the container, and in that specific folder, data, you have millions of customer records, information about the customers. This is working fine. Then what happens: here we have Hermos, and what he does is simply delete this container; let's say he deleted it mistakenly. The moment he deletes the container, there is no undo: if you delete a container, the file system associated with it is also deleted, which means you lose all those million customer records that were in the data folder, because you deleted the container. This applies to anything; it doesn't need to be a database. Say you have a backend API that generates some kind of XML file in a directory for every request it processes; if it stores all that data in this XML directory, then if someone deletes the container, you lose everything in that directory.

Clearly this is not a scenario we want to face. Basically, the requirement is: I have something inside the container that I want to persist outside of the container lifecycle; even if the container is deleted, I want the data in these directories to be safe. That is the use case for volumes; that's the problem the volume concept tries to address. The idea is simple. You have Velocity's Postgres container, and all the data is written to the data directory; this is the directory you want to persist. What I can do is choose any location, remote or local. Say this Postgres is running on a host machine, host one: I can choose a path from the host machine, say a path called lax; or there is a network file share where I have a directory; or I have a GCE persistent disk, some storage from a cloud; or I have an Azure disk. It can be anywhere. You can mount these folders, these directories, as volumes: you can create a volume on a host machine path, or on an Azure disk, or on a GCE persistent disk, or on a network file share. Once the volume is created, you can mount it to a specific location inside the container. Say here I mount this host machine path to the data directory within the container, which means whatever the process writes to the data directory is actually stored in the lax directory on the host machine. So if this container is deleted, the data is still safe in the lax directory, and I can spin up a new container with the same volume mapping, so the new container sees all the data left behind by the previous one. This way we persist something outside of the container lifecycle. It can be one directory, or multiple directories you want to persist, this one to an NFS and so on; it doesn't matter: based on your requirement, you store it in some remote data center or in cloud storage. Okay.
So now we know the use case for volumes and the problem it addresses, correct? How to specify that volume in your pod specification, that is our next question. In the pod you have multiple containers, and at the container level you specify all of this: the liveness probe, the resource requests and limits, and even the volume mounts. Using volumes is a two-step process. Step one: define the volume. By define, I mean you give the volume a name, and then you specify the target destination, the location where the volume should be created: a host path, or a cloud provider's storage; you specify that provider type. Once the volume is defined, step two: mount the volume inside the container. How? Step number one, defining the volume: at the same indentation as containers, you can see we define the volume, the name of the volume, and where it will be created, on a network file share; this is the server name and this is the path on that NFS. You can provide more configuration, like how much space you want to reserve and so on, but keeping it simple, this is how it goes: on a network file share you create a volume named core-data, and the remaining properties are defaults that get applied. So step one is defining the volume, and step two is mounting that volume onto a container path, a directory. You can see it under the container specification: under containers we have volumeMounts, and there you specify the name of the volume and the mountPath. This is the path inside the container: the data directory gets mounted with this volume, which means whatever the application writes to the data directory goes all the way to the network file share's export directory. Defining the volume is part one, and using, or mounting, that volume onto a path inside the container, via volumeMounts, is step number two.
Later, on day two, we are going to see how to refactor this logic to move to persistent volumes and persistent volume claims: what is the problem, or the challenge, with this way of defining the volume compared to using a PV and a PVC? But pretty much this does the job, using volumes and volume mounts. If you look at a production-grade pod specification, these are the things you will always see: the image specification, the application-specific settings, then the resource requests and limits, liveness probe, readiness probe, volume mounts; we are yet to see secrets, and you will also see many specifications related to injecting configuration. That's it. Now, this example is for NFS, and I don't think I have a server with that name; you can replace it with a host path. Say you want to mount a path from the host machine: there is a type called hostPath, and for that type you specify the path on the host machine. Let me open this file. Yep, as you can see, it uses the type hostPath, the path on the host machine, and then mounts that volume onto the data directory. There are many supported storage provider types, which we will discuss again when we talk about persistent volumes, but now we are clear on the workflow: what it takes to create a volume and then mount it inside a folder. So with this, we now know, in addition to the resource requests and limits, what it takes to configure volumes: defining the volume and then mounting it inside a container, those two steps. Please feel free to stop me if you have any questions; a sketch of both steps is below.
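For reference, a minimal sketch combining both steps, using the hostPath variant from the demo (names and paths are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: postgres-pod
    spec:
      containers:
      - name: postgres
        image: postgres
        volumeMounts:
        - name: core-data        # step two: mount the volume by name
          mountPath: /data       # the path inside the container
      volumes:
      - name: core-data          # step one: define the volume
        hostPath:
          path: /lax             # the path on the host machine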
Now I think we are getting more clarity on all the things we can do at the pod level itself, isn't it? If you still remember, I mentioned the multi-container pod; there are some use cases for multi-container pods, so let me talk about them a bit here. The first scenario is init containers. These serve a specific purpose. For example, this is your main container, the container that runs your business application, and alongside it we run one or more init containers. The purpose of an init container, as the name suggests, is to execute some initialization logic, or in other terms, to set the stage for the main container to run. Based on your application, you may have some complex initialization logic, like preparing a volume or checking some dependencies; there can be a lot of initialization steps you want to run. Instead of having that initialization logic inside the main container, you can put it in the init containers, and you can have one or more of them. For the regular containers we define the containers section and list the first container, right? Similarly, for the init containers we have a separate attribute at the same indentation as containers: you define initContainers, and you list the first init container, the second, the third, and so on. It's not that you put them in the containers list; that list is meant for a different pattern, the sidecar, which we will discuss shortly.

Init containers have their own lifecycle: if you have init containers defined, they execute in the order you defined them, one by one. First this container executes; it runs to completion and must exit with a success code, maybe after running some set of commands; then the second init container must execute successfully, then the third. Each one runs some short-lived set of commands, and only when all the init containers have completed successfully does the main container start, because the stage is ready; now the hero can enter. If one of the init containers fails, the entire pod is restarted, and it runs again from the first init container, then the second, then the third. That's the behavior of init containers. Let me include these links in the Etherpad so it's easy for you to refer to them. Init containers always run to completion; each init container must complete successfully before the next one starts. And init containers don't support liveness or readiness probes and so on, because they are not long-running processes; they exist just to set the stage for the main one.

The way to define them, as you can see here: like containers, we have an initContainers section. This application has two init containers, and you can see the first checks whether myservice is already available in the cluster, so it waits; it's checking whether one of the dependent services is up and running. This one checks for the database, this one checks for a service; if they are available, both will succeed, and then the main container starts. Maybe the main container, as part of its own initialization, wants to communicate with myservice and mydb, so if those two are not up when it spins up, it may fail; in this case they use the init containers to make sure those two services are available first. This happens only during startup: once the init containers have executed and the main container has started, they won't execute again; maybe on the next restart they will. Which means, after the main container is up and running, if the myservice container goes down, the init container is not responsible for that. I think you got the idea, right? This is just for the initialization logic.
Okay, I also mentioned the sidecar pattern; that is covered in the documentation as well, but that's fine, I can explain it from here. So init containers are one case for a multi-container pod. The other case: in the containers array, you define the first container and then a second container. Here there is no ordering, no "this executes and then that"; both run in parallel. This one is a long-running process, and this one is also a long-running process: one is the specification for the main container, and the other is the specification for a sidecar, which is there to provide some helper functionality.

One of those functionalities: you have a main container and, let's say, a logging backend, and your application generates logs in an ABC format, but your logging backend expects the logs in an XYZ format. What I can do is run a sidecar: maybe the main application writes its logs to a volume, and the sidecar reads the logs from the volume, transforms them from the ABC format to the XYZ format, and even forwards them to the logging backend. In this case the sidecar does two operations: transforming the logs from ABC to XYZ, and forwarding them to the logging backend. That is one helper functionality, and what this is, is an adapter pattern; this is an adapter sidecar, because the target system expects one format and the source generates a different one, so the sidecar acts as a kind of adapter between the two. That is one case where we use a sidecar.

There is another case: the same main container, and here I have an external web service from a third-party vendor. Dealing with an external web service means there is a separate authentication mechanism, retry patterns for when the service is unavailable, circuit breaker patterns, and some kind of fallback mechanism to implement; there are a lot of things the main container would need to handle to call this service and work with it. Instead of putting all that logic in the main container, how about creating a sidecar and offloading all those responsibilities to it? As the main container, I simply call it on localhost, like any local call, and the sidecar takes care of dealing with the external service; all the authentication, retry, and circuit breaker logic resides there. In this case the sidecar acts as a kind of proxy for the external web service, and this pattern we call the ambassador pattern. It also provides helper functionality, but the cases are different for these two scenarios.

And always keep in mind: if you are going to schedule a pod, say Velocity schedules a pod with three containers, one main and two sidecars, and in the Kubernetes cluster this pod is scheduled to run on node number two, then all the containers will run on that same node; that is a characteristic of the pod. All these containers can reach one another via IPC, inter-process communication, which means this one can call that one on localhost and hit its port number, and vice versa. All the containers in that pod always land on the same node; it's not that c1 runs here, c2 runs there, and c3 runs somewhere else. That is not the case: all the containers within the pod specification always land on the same node. Just keep that in mind.
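For reference, a minimal sketch of a two-container pod in the adapter style (image names and the shared volume are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-adapter
    spec:
      containers:
      - name: main-app              # writes logs in its own format to the shared volume
        image: my-app:1.0           # illustrative image
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
      - name: log-adapter           # sidecar: reads, transforms, and forwards the logs
        image: my-log-adapter:1.0   # illustrative image
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
      volumes:
      - name: logs
        emptyDir: {}                # scratch volume shared by both containers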
All right, with these discussions the pod topic comes to an end. We deep-dived into the pod: we learned the pod lifecycle commands, the different things that go into the pod specification, and the use cases for multi-container pods. With this, I am going to give you a quick pause: you can directly try the 1-6 full pod YAML file, which has all these concepts in it, or you can spend some time going through the specifications and the documentation. I'm going to give you ten to fifteen minutes here, and after that we will move to the next resources, the ReplicaSet and the Deployment. Is this clear, guys? Are you good with the pace? Is it all clear? I need a few confirmations, please. Okay, thank you. Please go ahead.

Recorded on 2022-11-21
