United Arab Emirates - Certified Kubernetes Administrator (CKA) - exam preparation
I think it was a mistake in that command, François: you don't need to specify "pod". It is kubectl port-forward my-nginx; you don't need to give the pod keyword there. It was a mistake in my document, apologies for that. I just noticed it and I will correct it.

All right, I can see most of you have already completed it. Maybe I will give an additional three minutes; I can still see a few busy hands.

All right, time is up for this activity. If there are no questions, let's continue with the same pod commands. Any questions? Perfect. So what we did is create a pod imperatively, which means with a CLI command, the run command. But here is what actually happened: when you executed kubectl run, the kubectl client created an equivalent YAML file for your command and submitted that YAML file to the cluster, to the minikube cluster. So what we are going to do now is write one YAML file from scratch, let's say, and then submit that YAML file to the cluster with the help of kubectl, where we have more control to specify a lot of the specification.

Okay, so instead of creating something from scratch, we can actually seek the help of the kubectl CLI: hey, can you give me the YAML file that you used for this specific command? Let's take the same command. kubectl run my-pod --image nginx: if I press Enter, the request will be submitted to the server and the pod is created immediately, correct? Instead of that, I can add a couple of flags here: --dry-run=client, and I want the output in YAML format, so -o yaml. These are the new flags that we are adding: dry-run as client, and output as YAML. If I press Enter, kubectl won't submit anything to the server side; it just prints the YAML file here in our terminal. And if I do kubectl get pods, you can see that the above
command didn't create any pods, because the command doesn't reach the server. It was just a dry run on the client side, where we got the YAML file.

Okay, so you can have this as a base and make whatever modifications you want. For example, I am going to create a folder here, Documents, sample, and I'm going to write this content to a file. I can open the file in an editor, any editor of your preference, and make the modifications I need. Let's say I'm going to make some modifications to this YAML file. Let me increase the font size a bit. Okay, better compared to the previous one.

So here there are some properties that I can remove to keep it simple, this status and dnsPolicy, and come up with a minimal YAML specification for a pod. As you can see, there is the apiVersion, which tells which controller can handle this resource, and the kind, here Pod, which means you are going to create a pod; if you were going to create a deployment, the kind would be Deployment. So here you specify the resource that you want to create, then the metadata for the resource, which means the labels it should put on it and the name of the pod. Then for the pod you need to define a spec. Under the spec you have containers, which is plural, so you give an array of container specifications, but we have only one container here; let's say I'm going to call this container my-container.

Okay, so this is a very simple, basic pod specification. You can write it from scratch, or, for most commands, you can use the --dry-run=client flag, output it as YAML, and use that as a base to create your own YAML for your applications.

Okay, so once you have the YAML file ready, all I need to do to apply it is kubectl create -f followed by the name of the
file. That's it. You can use either apply or create; both do the same job. If this is the first time you are creating the resource, you can just use create. If the resource already exists and you are doing an update, use apply; and if I use apply and the resource doesn't exist, apply will also do the create. Okay, so you can use either create or apply.

So let's say I'm doing kubectl create, and this has already created a pod with the specification. If I describe it, you will see the container with the name we gave. Maybe I can view the logs: kubectl logs my-pod, and you can see that. You can also specify the container name, but because there is just one container, it printed the logs of that one. Okay, you can also delete with the file: kubectl delete -f, which means that, based on the specification in the YAML, it will check whether the current infrastructure has a matching object; if that object is found, it will be deleted.

Okay, so what we just learned is creating a pod resource from a declarative specification, and how we created the YAML file. The indentation in the YAML file is important: two spaces. Below labels we have key-value pairs, which should come after two spaces; under the spec, two spaces, then containers; and if it is an array of values, a hyphen, and then the item should come there. That is important. As long as you follow that, your YAML file will work perfectly fine. If there is some problem with the YAML file, then while doing the create or apply it will throw an error saying there is a problem with the syntax, and so on.

All right, so that was creating a pod specification declaratively. In this case the pod specification is very, very simple and straightforward. Now we are going to discuss what a typical production-grade pod specification looks like, and all the additional things you can find in any pod specification. Okay, so let me go through that one by one.
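Before moving on, the whole declarative workflow from this section can be summarized with the following commands against a running cluster (my-pod and the file name are the illustrative names used here; adapt them to your own):

```shell
# Generate a manifest without creating anything on the cluster
kubectl run my-pod --image=nginx --dry-run=client -o yaml > my-pod.yaml

# Edit my-pod.yaml as needed, then create the pod from it
kubectl create -f my-pod.yaml      # first-time creation
# kubectl apply -f my-pod.yaml     # also works, and updates existing objects

kubectl get pods                   # verify it is running
kubectl describe pod my-pod        # inspect details and events
kubectl logs my-pod                # view logs; -c <name> selects a container
kubectl delete -f my-pod.yaml      # delete the object the file describes
```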
The first thing that we are going to see is meant for a specific use case. I am going to talk about the problem scenario and then how the solution is implemented in the Kubernetes world; that is part two of the story. Part one of the story: I have a pod running, a pod with one container, and that is running one application process, maybe your microservice or an nginx process. It's already processing a lot of records that are coming in.

Now, here comes a request, and while processing this request, number 97 let's say, something goes wrong with your application process. It crashes and the process exits. Because the parent process was the only process running, the container has nothing left to run, so the container status also moves to the Exited state. If a container's status is Exited, the platform will take an action; by platform here I am referring to Kubernetes. Kubernetes will take an action to restart that container, or in some cases it may create a replacement for it. When Kubernetes takes that action, it is observing the status of the pod: if the status is Exited, it restarts it, and the status moves to Exited when the process exits. Okay, a vanilla approach, straightforward, nothing complex here.

How about this: I have the same process running, similarly executing many requests, and here comes one request. While processing that request, your application goes into some kind of stalled mode, or some kind of deadlock situation, which means every request that comes after that gets a failure response; none of them succeed after this request. All the users are getting errors, but in this scenario the process
is not exited; it's still in the Running state. But it's not in a position to serve any request, because it's in some kind of faulted mode. Because it has not exited, the container status, and so the pod status, will still show as up, and the platform won't take any action, because from the platform's view the container status is fine. But from the user's perspective, the application is not accessible at all. This is the problem case where we really want Kubernetes to identify these scenarios too and take an action, like restarting it or creating a replacement. Mostly it will do restarts, and a restart gives a fresh start of the application, which should work perfectly fine.

So instead of Kubernetes relying only on the pod status to determine the health of the application, we are going to take that one level deeper: instead of checking the pod status, we are going to do the health check at the application level. This application exposes some business endpoints, right? If it is a product-search service, it exposes some business endpoints. In addition to those, your developer will write a small piece of code to expose a health endpoint. The logic in that endpoint can be as simple as: if someone calls this endpoint, send a success code, 200 OK. As simple as that. This piece of logic resides close to your business code.

Once your developer exposes this endpoint, then while submitting the YAML file, in the pod specification like the one we just wrote, he will include one section called livenessProbe, where he specifies the endpoint and the interval. If he submits this information, Kubernetes will call the endpoint at every defined interval, which means every 30 seconds
Kubernetes is going to invoke this endpoint locally. Maybe the first time it calls, your application returns 200 back to Kubernetes, so Kubernetes takes no action. After 30 seconds, again your service returns 200: healthy, healthy, healthy. Then let's say there comes a time, say on the tenth call, after 300 seconds or so, when your service returns a failure response. Kubernetes calls the health endpoint (yes, is that a question? okay, no), and just as the users are getting errors, Kubernetes will also get an error response. Then again after 30 seconds, another error; after another 30 seconds, again an error.

In the configuration, the developer will also specify the failure threshold: how many subsequent failed responses should be considered a failure, to declare the service unhealthy. Let's say the threshold he specified is 3. That is when Kubernetes takes an action: based on the specification provided, Kubernetes senses that the service is unhealthy, and it will immediately restart it. The action here is restart: if your application fails the liveness probe, that pod will be restarted by Kubernetes. That restart gives a fresh start of the application, where you won't have this scenario; it will work perfectly fine. Okay, so that is the liveness probe, and in almost any application you will see a liveness probe configured for the pods.

Let's open the second sample. As with nginx, let's say this is your application image; this is a sample web application that we use for demo purposes. It can be any image: an application image that you build yourself, or nginx, anything. And as part of that application logic, you must expose an endpoint. We provide that under the livenessProbe section: a liveness
probe with the health endpoint. It can be an HTTP probe; if your application doesn't expose any APIs, it can be a TCP probe; or it can execute some command, an exec probe. So there are three probe types. You can call an HTTP endpoint on the application; you can do a simple TCP probe; or, if you have some shell command whose result determines whether the application is healthy or unhealthy, you can put that command in an exec probe. In this example we are using an HTTP probe.

Then we specify initialDelaySeconds, which means: when the application starts, don't do the health check for the first five seconds, because it may still be initializing. If the call to the health endpoint doesn't return a response within the timeout, consider it a timeout; a timeout also counts as a failure. The interval: every ten seconds this endpoint will be called by Kubernetes, and three subsequent failures mean the application is unhealthy; you can consider the liveness probe failed if you get three subsequent failure responses on this endpoint.

Okay, so let's see this in action. Before that, the sample application we are going to use is this card application. If I open card-pod you can see the specification, similar to the one I just showed; it's just that the image is different, and the container port is 8080 where nginx was using 80. That's the only difference. Let's apply this configuration and view the card application that we are going to work with for this demo.

Okay, I think it's a different folder, let me check. Downloads or Documents... okay, it's in Documents, in the ck-hands-on folder, right. Come on, open in a terminal. kubectl apply -f the card-pod YAML, and check whether the pod is already running. Still creating... still creating...
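The probe settings just described fit into the pod spec like this. This is a sketch, not the exact demo manifest: the image name and the /health path are assumptions, while the five-second delay, ten-second interval, port 8080, and threshold of three are the values mentioned in the session:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: card-pod
spec:
  containers:
  - name: card
    image: example/card-app:latest   # placeholder; use the demo app's image
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /health                # assumed health endpoint path
        port: 8080
      initialDelaySeconds: 5         # skip checks while initializing
      periodSeconds: 10              # probe every ten seconds
      timeoutSeconds: 1              # no response in time counts as a failure
      failureThreshold: 3            # three consecutive failures => restart
```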
You can put a watch flag on it and watch. Okay, it's running now. It's a simple React web application; let's see how it looks: kubectl port-forward, the name of the pod, then any port number on the host machine and the port within the container. I am going to access this application at localhost:8082. Come on... there is a delay for a split second for me... come on, I'm trying to access this... localhost:8082... yep, we are able to. Okay, I think it was serving from the cache. Yeah, there it is. So this is the simple application, and it has many tabs that we will use for the demos.

Okay, now let me go back here and stop the port forwarding. Let me refresh this once. Okay, I stopped the port forwarding, and I'm going to delete this pod so that I can create the second file, the one with the liveness probe configured, because that pod is going to have the same name. So I delete it, then kubectl apply; in this YAML file we have the liveness probe configured. I do the same set of steps: the pod is created, I check whether it is running, it is running, I do the port forwarding and open the browser. Refresh... got it, right?

Since this is a demo application, every time Kubernetes invokes the health endpoint as part of the liveness probe, it gets printed here. As you can see in this table, Kubernetes is calling this application and the application is returning 200 back to Kubernetes.

Okay, so now we are going to intentionally fail the liveness probe to see how Kubernetes reacts; basically it's going to restart, but let's watch it happen. Here we are going to fail the next three calls, because three is the threshold. So for the next three calls this application is going to send 500, some error response, to Kubernetes, and Kubernetes will immediately take an action, because this
application failed the liveness probe. The action in that case is restarting the pod. As you can see, it has already restarted; the port forwarding and everything stopped behind the scenes. If I go back there and do kubectl get pods, you can see the restart count is already updated to one, thirty-three seconds ago. You can describe this pod to view more details; in the events section here it says: liveness probe failed, reason Unhealthy, HTTP probe failed with the 500 status code that we set, container failed liveness probe and will be restarted, and then the container restarted.

Okay. So if your application is unhealthy, Kubernetes can do the restart, but for that you must have the liveness probe. Otherwise, if a scenario like this happens, Kubernetes will be of no help, because Kubernetes is not even aware of the fact that its users are getting errors. You need to declare the application unhealthy, and for that we expose a health-check endpoint: if the business endpoints are failing, the health-check endpoint will also fail, and that notifies Kubernetes, because it is constantly polling that endpoint. That way the pod gets restarted.

Any questions on the liveness probe before I give the stage to you to try? Is the liveness probe clear? Yes? No? Perfect, perfect. These commands are also in the PDF, or you can try the set of commands that I used; I will put them in the Etherpad for your reference. Okay: launch the application, go to the liveness-probe tab, and then you can simulate that error scenario and see how Kubernetes behaves.

All right, so we have around seven to ten minutes before our lunch break. Let's take these ten minutes to try this hands-on on the liveness probe, as well as the earlier hands-on, creating a pod specification declaratively. Let me
include this command. Try that and the liveness probe, and then you can go for a lunch break and be back by one p.m. Is that okay for everyone? You can also go for lunch first and do the exercise afterwards; I'm leaving it up to you whether to do a working lunch to complete these commands. Any questions before that? All right. Yep, I'm going to drop from this call; I will be back by 12:55. Have a great lunch, and see you all back by one p.m. Thank you, thank you for listening.

Hello everyone... yeah, hello, hello. Is it obvious on my face? Let me turn off the camera... never, just kidding. Okay... I'm good, it's just that my stomach is not good; everything else is working fine. Okay, I just saw all your chats, including the chat that Circon sent. Would you like to summarize that, if you don't mind? Yeah, yeah, please.

Uh-huh... uh-huh... okay, we have only one endpoint, right, by the endpoint... uh-huh... okay... mm-hmm... mm-hmm... okay.

I don't think we can have that kind of data in the New Relic backend, do you think? Number of transactions received... okay. Successfully processed... correct, correct, for that we don't need it. Total amount of transactions received: number and amount. Total amount of transactions successfully processed. Number of transactions is the count; total amount of transactions is the amount. Number.
Yeah, but we can check with Circon on this, because there are many amounts, right? The charged amount, or the delivery fees there. Is it going to be a combination of charged and delivered? Those things we need to clarify. Total amount of transactions received... amount of transaction... what do you mean by amount here? Transactions successfully processed... I feel 0.12 and 0.34 are the same, but we had better check with Circon. Is he available? Maybe we can drop him a message asking what this amount of transactions means. Okay, I will ping him right away.

Okay, and then the fourth one: success rate equals the number of transactions successfully processed divided by the number of transactions received. Correct. Yes, we don't need it here.

Error distribution: this comes from the error, so on the error event. Maybe here we need the JSON; we need to capture the type of errors. Let's see, some of the errors: something unavailable, or some invalid input; there may be different kinds of errors that we see. Correct, correct. I understand that a few SAP endpoints return a standard error format, but the others return some random error messages, right? So first we need to group by error code, the status code, and then, if possible, also include the error message in the JSON; that will make it easy to identify the problem. Like the errors we sometimes used to get: if SAP is down we get some errors, sometimes an error for not found. Yeah, so in the error distribution we need to have a JSON, but for the others we don't need one. And we need to check whether the amount means the field they are referring to or not. Just give me a minute... go ahead, go ahead with the total amount of transactions.
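The success-rate formula mentioned above could be expressed as an NRQL query. This is a sketch under assumptions: the custom event name Transaction and the status attribute are hypothetical, since the real event and attribute names were still being decided in this conversation:

```sql
// percentage() divides the events matching the WHERE clause
// by all events selected by the FROM clause
SELECT percentage(count(*), WHERE status = 'processed')
FROM Transaction SINCE 1 day ago
```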
Yeah, received... I think for that we can. Received is right: we are already sending some APM data to New Relic, so received we can easily get. But for successfully processed we need to send a custom event, because successfully processed means the SAP call should succeed, and then on the success path we need to capture that, correct. So for the received part, yes, we can do that directly; but successfully processed we can only do with a custom event.

So what you are saying is: you want the event type to be the journal-entry event, and then the attributes will have, let's see, a status, say queued, and for the different statuses it will have different values. And I can have one more attribute if I want; if I want to include some inputs as JSON, I can add another key and put the JSON object there. Okay, okay.

So at the endpoint level we can have the grouping. Performance-wise, one event for all states, or four events? That is what we need to determine, right? If it is going to be just one event for all states, and we are going to query the same event for all the different visualizations, the same event for all the different dashboards and visualizations, will that have any impact on performance, compared with having different events, or having one event capturing all the attribute values? Okay, I'm just reading some blogs from New Relic. Yeah, go ahead.

Yes, it's on the task... success... yeah, it's in the task description, and we think it's about the success response from SAP; that is how we understood it. Mm-hmm, mm-hmm. Okay, but what is the difference between one and three? Number of transactions received, total amount of transactions received: both are the same, right? Okay... processed... okay, okay. Yeah, yes. Some clarification is needed.
But I really have doubts about this amount aggregation on our integration side, because this is something the vertical can provide directly; we are just passing the data through, and while passing it we are capturing and showing the amount. I am not sure whether by amount Circon meant this or meant the quantity. I already asked him a question in the chat, but let's see. Okay. Okay.