8 videos 📅 2022-10-26 09:00:00 Africa/Blantyre


United Arab Emirates - Certified Kubernetes Administrator (CKA) - exam preparation

WEBVTT

00:02:03.020 --> 00:02:08.960
I think it was a mistake in that command, Francoise. You don't need to specify the pod.

00:02:10.320 --> 00:02:15.600
kubectl port-forward my-nginx. You don't need to give the pod there.

00:02:16.220 --> 00:02:22.480
It was a mistake in my document there. Apologies for that, I just noticed it and I will correct it.

00:03:25.720 --> 00:03:28.520
all right

00:03:28.520 --> 00:03:33.880
I can see most of you have already completed it. Maybe I will give an additional three more

00:03:33.880 --> 00:03:38.060
minutes. I can see a few busy hands still.

00:04:13.120 --> 00:04:21.460
All right, time is up for this activity. If there are no questions, let's discuss further

00:04:21.460 --> 00:04:28.300
on the same pod commands. Any questions? Perfect. So what we did is we created a pod

00:04:28.300 --> 00:04:34.400
imperatively, which means we created it with a CLI command, the run command. But now what we are going

00:04:34.400 --> 00:04:42.620
to do is this: when you executed kubectl run, the kubectl client created an equivalent

00:04:42.620 --> 00:04:49.280
YAML file for your command and submitted that YAML file to the cluster, to the minikube

00:04:49.280 --> 00:04:56.200
cluster. So what we are going to do now is write one YAML file from scratch, let's

00:04:56.200 --> 00:05:02.480
say, and then we are going to submit this YAML file to the cluster with the help of kubectl,

00:05:03.600 --> 00:05:10.240
where we have more control for specifying a lot of settings. Okay, so instead of

00:05:10.240 --> 00:05:18.180
creating something from scratch, we can actually seek the help of the kubectl CLI: hey, can

00:05:18.180 --> 00:05:26.260
you give me the YAML file that you would use for this specific command, let's say the same command,

00:05:26.900 --> 00:05:31.540
kubectl create pod. Where is it?

00:05:33.080 --> 00:05:40.700
This one: kubectl run my-pod --image nginx. So if I press enter, the request will be

00:05:40.700 --> 00:05:46.180
submitted to the server and the pod is created immediately, correct? So instead of that,

00:05:46.180 --> 00:05:52.340
I can actually add a couple of flags here:

00:05:54.880 --> 00:06:05.320
--dry-run, dry-run equals client, and I want the output in YAML format. So these are the new flags

00:06:05.320 --> 00:06:11.140
that we are adding: dry-run as client, and output it as YAML. So if I press enter,

00:06:12.900 --> 00:06:19.940
kubectl won't submit anything to the server side. It just prints us the YAML file here in our

00:06:19.940 --> 00:06:25.420
terminal. So if I do kubectl get pods, see that the above command didn't create any pods,

00:06:26.560 --> 00:06:32.080
because the command doesn't reach the server. It was just a dry run on the client

00:06:32.080 --> 00:06:35.980
side, where we got the YAML file.

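The dry-run sequence described here can be sketched as follows. The pod name my-pod and the image nginx follow the session; this assumes a running cluster with kubectl configured against it.

```shell
# --dry-run=client keeps the request on the client side;
# -o yaml prints the manifest kubectl would have submitted.
kubectl run my-pod --image nginx --dry-run=client -o yaml

# Nothing reached the server, so no pod was created:
kubectl get pods
```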
00:06:38.480 --> 00:06:41.200
Okay, so you can have this as a base.

00:06:46.420 --> 00:06:51.520
You can make whatever modifications you want to. For example, I am going to

00:06:53.460 --> 00:06:54.880
create a folder here

00:06:57.980 --> 00:06:58.540
documents

00:07:04.700 --> 00:07:06.100
sample

00:07:29.720 --> 00:07:36.080
So let's say here I'm going to write the content. This is pretty much the content.

00:07:36.080 --> 00:08:02.080
I'm going to write this to a file, and I can open the file in an editor, any editor of your preference, and make whatever modifications I need to. Let's say I'm going to do some modifications on this YAML file. Let me increase the font size a bit.

00:08:09.860 --> 00:08:27.080
Okay, better compared to the previous one. So here there are some properties that I can remove to keep it simple, this status, dnsPolicy. Let me come up with a minimal

00:08:28.420 --> 00:08:45.520
YAML specification. That's it, a pod. As you can see, the apiVersion, where this controller can handle this resource, and the kind, this is Pod, which means you are going to create a pod. If you are going to create a deployment, kind will be Deployment.

00:08:46.140 --> 00:08:55.920
So here you are specifying the resource that you want to create, and then the metadata for the resource, which means the labels it should put on it, the name of the pod,

00:08:55.920 --> 00:09:15.140
and then for the pod you need to define a specification. Under the spec you have containers, which is plural, so you are going to give an array of container specifications, but we have only one specification here. Let's say I'm going to call this container my-container.

00:09:17.220 --> 00:09:37.060
Okay, so this is a very simple, basic pod specification. You can write it from scratch, or, for most of the commands, you can use the dry-run client flag, output it as YAML, and then use that as a base to create your own YAML for your applications.

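Written out, the minimal manifest discussed here looks roughly like this. The names my-pod and my-container follow the session; the label key is an assumption for illustration.

```yaml
apiVersion: v1          # the API version that handles this resource
kind: Pod               # the resource type; a deployment would use kind: Deployment
metadata:
  name: my-pod
  labels:
    app: my-pod         # key-value pair, two spaces under labels
spec:
  containers:           # plural: an array of container specifications
    - name: my-container
      image: nginx
```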
00:09:38.920 --> 00:10:07.040
Okay, so once you have the YAML file ready, then all I need to do to apply it is kubectl create -f, name of the file. That's it. You can use either apply or create; both do the same job. If this is the first time you are creating the resource, then you can just create. If a resource already exists and you are doing an update, you can use the

00:10:07.040 --> 00:10:35.420
apply command. If I use apply and the resource doesn't exist, apply will also do a create. Okay, so you can use either, create or apply. So let's say I'm doing kubectl create, and this already created a pod with the specification. If I describe it, you will see the container with the name my-container. Maybe I can view the logs: kubectl logs my-pod. You can see that

00:10:35.420 --> 00:10:39.760
you can also specify the container name.

00:10:42.020 --> 00:10:53.060
Because it is just one container, it printed its logs. Okay, you can also delete with the file: kubectl delete -f, name of the file,

00:10:55.640 --> 00:11:06.100
which means that, based on the specification that is there in the YAML, it will check whether you have a matching object in the current infrastructure. If that object is found, it will be deleted.

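The create/inspect/delete cycle just demonstrated, as a sketch. The file and pod names follow the session; a running cluster is assumed.

```shell
kubectl create -f my-pod.yaml    # 'kubectl apply -f my-pod.yaml' works the same here
kubectl describe pod my-pod      # shows the container name, status, and events
kubectl logs my-pod              # container name can be omitted with a single container
kubectl delete -f my-pod.yaml    # deletes the object matching the spec in the file
```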
00:11:11.980 --> 00:11:35.380
Okay, so what we just learned: creating a pod resource from a declarative specification, and how we created the YAML file. The indentation in the YAML file is important, two spaces, and below labels we have a key-value pair that should come after two spaces.

00:11:36.100 --> 00:11:57.140
Under the spec, two spaces, containers. If it is an array of values, a hyphen should come there, so that is important. As long as you are following that, your YAML file will work perfectly fine. If there is some problem with the YAML file, then while doing the create or apply it will

00:11:57.140 --> 00:12:03.300
throw some errors saying there is some problem with the syntax, and so on.

00:12:03.300 --> 00:12:09.540
All right, so that was creating a pod specification declaratively.

00:12:11.240 --> 00:12:29.300
So here, in this case, this pod specification is very simple and straightforward. Now we are going to discuss what a typical production-grade pod specification will look like, all the additional things that you can find in any pod specification.

00:12:31.540 --> 00:12:38.600
Okay, so let me go through them one by one. The first thing that we are going to see

00:12:38.600 --> 00:12:53.260
is meant for a specific use case. I am going to talk about the problem scenario and then how the solution is implemented in the Kubernetes world; that is part two of the story. Part one of the story is: if I

00:12:53.260 --> 00:13:00.060
have a container running, I have a pod running, a pod with one container,

00:13:01.120 --> 00:13:13.980
and that is running one application process. Maybe your microservice or an nginx process is running, one process. It's already processing a lot of requests that are coming in.

00:13:14.980 --> 00:13:24.440
What happens? Here comes a request. While processing this request, number 97 let's say, something went wrong with your application process.

00:13:24.980 --> 00:13:36.620
It crashed and then the process exited. Because the parent process was the only process running, let's say, when it exited, the container status will also

00:13:36.620 --> 00:13:52.620
move to the exited state, as the process exits and then the container has nothing to run. So the container status will also move to the exited state. If a container's status is exited, then the platform will take an action.

00:13:54.120 --> 00:14:03.920
By platform here I am referring to Kubernetes. Kubernetes will take an action to restart that container, or in some cases it may create a replacement for it.

00:14:03.920 --> 00:14:18.860
When Kubernetes is taking that action, it's observing the status of the pod. If the status is exited, then it restarts it; when it moves to exited, if the process exits.

00:14:21.060 --> 00:14:35.860
Okay, vanilla approach, straightforward, nothing complex here. How about this: I have the same process running, similarly it's executing many requests, and here comes one request. While processing that request,

00:14:37.100 --> 00:14:49.160
your application went into some kind of stalled mode, or it went into some kind of deadlock situation, which means whatever requests come after that are all getting failure responses.

00:14:50.220 --> 00:15:00.940
None of the accesses succeed after this request. All the users are getting errors. But in this scenario the process has not exited; it's still in the running state,

00:15:00.940 --> 00:15:13.940
but it's not in a position to serve any request, because it's in some kind of faulted mode, let's say. Because it has not exited, the container status or pod status will be showing up,

00:15:15.120 --> 00:15:20.600
and the platform won't take any action, because from the platform's view the container status is up.

00:15:20.600 --> 00:15:45.980
But from the users' perspective the application is not at all accessible. So this is the problem case, where we really want Kubernetes to identify these scenarios as well and take an action, like restarting it or creating a replacement. Mostly it will do restarts only. A restart is going to be a fresh start of the application, and that should work perfectly fine.

00:15:46.980 --> 00:16:00.940
So, instead of Kubernetes relying only on the pod status to determine the health of the application,

00:16:02.200 --> 00:16:13.360
we are going to take that one level deeper. Instead of checking the pod status, we are going to do that health check at the application level,

00:16:13.360 --> 00:16:32.360
which means: this application is exposing some business endpoints, right? Maybe, if it is a product search service, it's exposing some business endpoints. In addition to those endpoints, your developer will write a piece of code to expose a health endpoint,

00:16:32.360 --> 00:16:58.820
and the logic in the endpoint can be as simple as: if someone calls this endpoint, send the success code, 200 OK, as simple as that. So this piece of logic resides close to your business code. Once your developer exposes this endpoint, then while submitting the YAML file, what he will do is,

00:16:58.820 --> 00:17:20.960
in the pod specification YAML file, the one we just did right there, he will include one section called livenessProbe, and then he will specify the endpoint and the interval. If he submits this information to Kubernetes, what Kubernetes will do is

00:17:20.960 --> 00:17:32.820
call this endpoint at every defined interval, which means every 30 seconds Kubernetes is going to invoke this endpoint locally.

00:17:33.780 --> 00:17:44.240
Maybe the first time it calls, your application returns 200 back to Kubernetes, so Kubernetes won't take any action. After 30 seconds,

00:17:44.240 --> 00:18:07.360
again your service returns 200 back to Kubernetes. Service is healthy, healthy, healthy. Let's say there comes a time, say at the 10th call, I mean after 300 seconds or so, where your service returns a failure response. Kubernetes calls the health endpoint; assume this failure scenario has happened

00:18:07.360 --> 00:18:25.440
and the health call comes after that. So, just as users are getting errors (yes, is that a question? Okay, no), just as users are facing errors, Kubernetes will also get an error response,

00:18:25.440 --> 00:18:53.300
something like that. Then again after 30 seconds an error, after 30 seconds again an error. So here in the configuration one will also mention the failure threshold: how many subsequent failed responses to consider before declaring that service as unhealthy. Let's say the threshold he specified is 3.

00:18:55.440 --> 00:19:19.660
That's the time Kubernetes will take an action. So, based on the specification provided, Kubernetes senses that this service is unhealthy, and then it will immediately restart that service. Okay, the action there is a restart: if your application fails the liveness probe, that pod's container will be restarted by Kubernetes.

00:19:19.660 --> 00:19:46.920
That restart is going to be a fresh start of the application, where you won't have this scenario; it will work perfectly fine. Okay, so that is the liveness probe. In any application you will see a liveness probe configured for the pods. Open the first sample, the second one let's say.

00:19:46.920 --> 00:20:04.920
So, like nginx, let's say this is your application image. This is a sample web application that we use for demo purposes. It can be any image, an application image that you built, or nginx, anything. As part of that application logic you must expose an endpoint.

00:20:06.140 --> 00:20:35.040
Okay, so we provide that under the livenessProbe section: liveness probe, health endpoint. It can be an HTTP probe; if your application is not exposing any APIs, it can be a TCP probe; or it can be the execution of some commands, an exec probe. There are three kinds of probes you can do. In this application you can call an HTTP GET on an HTTP endpoint,

00:20:35.060 --> 00:20:55.060
or you can do a simple TCP probe, or it can be an exec probe: you have some shell command that you want to execute, which is going to determine whether the application is healthy or unhealthy. You can put that command here, an exec probe. In this example we are using an HTTP probe,

00:20:55.060 --> 00:21:17.580
and then we are specifying initialDelaySeconds, which means when the application starts, don't do the health check for the first five seconds, because it may be initializing at that time. And if the call to the health endpoint doesn't return any response within one second, consider that as a timeout. A timeout is also considered a failure scenario.

00:21:17.580 --> 00:21:40.060
The interval: every ten seconds this endpoint will be called by Kubernetes, and three subsequent failures means the application is unhealthy. You can consider the liveness probe failed if you get three subsequent failure responses on this endpoint.

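Put together, a liveness probe with the values mentioned (5s initial delay, 1s timeout, 10s interval, threshold 3) might look like this. The /health path is an assumption; use whatever endpoint your application exposes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx               # any application image
      livenessProbe:
        httpGet:                 # HTTP probe; tcpSocket and exec are the alternatives
          path: /health          # assumed path
          port: 80
        initialDelaySeconds: 5   # skip checks while the app initializes
        timeoutSeconds: 1        # no response within 1s counts as a failure
        periodSeconds: 10        # probe every 10 seconds
        failureThreshold: 3      # 3 consecutive failures mark the container unhealthy
```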
00:21:40.060 --> 00:22:10.040
Okay, so let's see this in action. Before that, the sample application that we are going to use is this card application. If I open the card pod, you can see the specification is similar to the one that I just showed. It's just that the image is different, and the container port is 8080 where nginx was using 80; that's the only difference. Okay, let's execute this, let's apply

00:22:10.040 --> 00:22:16.040
this configuration and let's view the card application that we are going to work with for this demo.

00:22:22.180 --> 00:22:22.820
okay

00:22:36.240 --> 00:22:39.380
I think it's a different folder, let me check.

00:22:40.860 --> 00:22:43.040
In Downloads or Documents?

00:22:46.200 --> 00:22:47.340
okay

00:22:48.680 --> 00:22:53.040
It's in Documents, CKA hands-on, within that, right.

00:23:01.860 --> 00:23:04.140
come on

00:23:07.920 --> 00:23:09.520
open in a terminal

00:23:40.000 --> 00:23:41.400
kubectl

00:23:41.400 --> 00:23:44.140
apply -f 1-1

00:23:55.080 --> 00:24:01.780
card pod YAML, and check whether the pod is already running.

00:24:03.920 --> 00:24:05.700
still creating

00:24:06.960 --> 00:24:08.620
still creating

00:24:09.840 --> 00:24:14.260
Add a watch flag and you can watch it.

00:24:14.260 --> 00:24:23.260
Okay, it's running now. It's a simple React web application. Let's see how the application looks: kubectl port-forward,

00:24:28.840 --> 00:24:30.260
name of the pod,

00:24:31.680 --> 00:24:35.500
any port number on the host machine, and then the port within the container.

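The port-forward step, as a sketch. The pod name card-pod and container port 8080 are assumptions based on the demo description.

```shell
# usage: kubectl port-forward <pod> <host-port>:<container-port>
kubectl port-forward card-pod 8082:8080
# then browse to http://localhost:8082; Ctrl+C stops the forwarding
```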
00:24:37.000 --> 00:24:41.460
I am going to access this application at localhost:8082.

00:25:00.900 --> 00:25:02.140
come on

00:25:08.360 --> 00:25:10.260
there is a delay for a split second

00:25:11.520 --> 00:25:12.400
for me

00:25:17.940 --> 00:25:20.260
come on i'm trying to access this

00:25:26.200 --> 00:25:27.480
localhost 8082

00:25:28.820 --> 00:25:30.260
Yep, we are able to access it. Okay.

00:25:31.580 --> 00:25:34.720
i think it is serving from the cache

00:25:35.400 --> 00:25:40.260
Yeah, there it is. So this is the simple application, and

00:25:40.260 --> 00:25:40.780
and

00:25:41.960 --> 00:25:49.260
it has many tabs that we will use for the demo's sake. Okay, but now let me go back here and stop the port forwarding.

00:25:54.080 --> 00:25:56.260
let me try to refresh this once

00:26:03.880 --> 00:26:14.000
Okay, I stopped the port forwarding, and I'm going to delete this pod so that I can create the second file, where I have the liveness probe configured,

00:26:14.640 --> 00:26:24.540
because that pod is also going to have the same name, so I'm deleting it. kubectl apply: in this YAML file we have the liveness probe configured.

00:26:25.200 --> 00:26:27.940
i'm going to do the same set of steps

00:26:27.940 --> 00:26:34.040
Okay, pod created. I'm going to check whether it is running. It is running. Then do the port forwarding

00:26:36.360 --> 00:26:37.540
and open the browser

00:26:39.320 --> 00:26:40.000
browser

00:26:44.340 --> 00:26:46.900
Refresh it, got it, right.

00:26:46.900 --> 00:26:55.120
So here, as this is a demo application, every time Kubernetes invokes the health endpoint as part of its liveness probe,

00:26:55.120 --> 00:27:05.320
it gets printed here. As you can see, it's printed in this table. You can see Kubernetes is calling this application and this application is returning 200 back to Kubernetes.

00:27:06.760 --> 00:27:21.940
Okay, so now we are going to intentionally fail the liveness probe to see how Kubernetes reacts. Basically it's going to restart, but we are going to see that. So here we are going to fail the next three calls, because three is the threshold,

00:27:21.940 --> 00:27:29.280
which means for the next three calls this application is going to send 500, 500, 500, some error response, to Kubernetes,

00:27:30.820 --> 00:27:32.740
and kubernetes will immediately

00:27:33.900 --> 00:27:43.960
take an action, because this application failed the liveness probe. The action in that case will be restarting that pod.

00:27:47.660 --> 00:27:48.940
As you can see, it's already

00:27:51.140 --> 00:27:54.680
restarted. You can see the port forwarding and everything stopped behind the scenes.

00:27:54.680 --> 00:27:57.180
if i go back there

00:27:58.920 --> 00:28:02.800
and if I try to do kubectl get pods,

00:28:05.540 --> 00:28:10.020
you can see the restart count is already updated to one, thirty-three seconds ago.

00:28:10.760 --> 00:28:12.700
You can describe this pod

00:28:19.600 --> 00:28:24.900
to view more details. In the events section here it says:

00:28:24.900 --> 00:28:26.040
Liveness probe failed.

00:28:26.700 --> 00:28:28.340
The reason is Unhealthy.

00:28:29.040 --> 00:28:32.220
HTTP probe failed with statuscode 500.

00:28:32.220 --> 00:28:35.040
Container failed liveness probe and it will be restarted.

00:28:36.060 --> 00:28:39.280
And then the container restarted.

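Checking the restart afterwards, roughly as follows. The pod name is assumed from the demo, and the event text paraphrases what Kubernetes typically logs.

```shell
kubectl get pods              # the RESTARTS column now shows 1
kubectl describe pod card-pod # the Events section shows entries like:
#   Warning  Unhealthy  Liveness probe failed: HTTP probe failed with statuscode: 500
#   Normal   Killing    Container failed liveness probe, will be restarted
```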
00:28:42.420 --> 00:28:43.100
okay

00:28:47.440 --> 00:28:48.120
so

00:28:48.120 --> 00:28:57.540
So, if your application is unhealthy, Kubernetes can do the restart, but for that you must have the liveness probe here.

00:28:57.540 --> 00:29:00.460
Otherwise, if a scenario like this happens,

00:29:01.780 --> 00:29:09.060
then Kubernetes will be of no help here, because Kubernetes isn't even aware of the fact that users are getting errors.

00:29:09.060 --> 00:29:14.060
You need to declare the application as unhealthy, and for that we need to expose a health check endpoint.

00:29:14.060 --> 00:29:18.420
If business endpoints are failing, then the health check endpoint will also fail.

00:29:18.420 --> 00:29:23.760
That will notify Kubernetes about it, because it's constantly polling that endpoint.

00:29:23.760 --> 00:29:26.060
That way, it will be restarted.

00:29:28.420 --> 00:29:32.880
Any questions on the liveness probe before I give the stage to you to try?

00:29:32.880 --> 00:29:35.440
Is this clear, the liveness probe?

00:29:36.060 --> 00:29:37.060
yes no

00:29:38.960 --> 00:29:40.320
perfect perfect

00:29:40.320 --> 00:29:43.060
so these commands are also there in the

00:29:44.640 --> 00:29:45.140
pdf

00:29:45.140 --> 00:29:50.060
or you can try this set of commands that I tried. I will put them in the Etherpad for your reference.

00:29:53.680 --> 00:30:00.060
Okay, launch the application, go to the liveness probe tab, and then you can simulate that error scenario and see how Kubernetes behaves.

00:30:02.480 --> 00:30:03.480
all right

00:30:04.620 --> 00:30:06.080
so we have

00:30:06.080 --> 00:30:10.920
around seven to ten minutes for our lunch break,

00:30:10.920 --> 00:30:14.800
so let's take these ten minutes to try this hands-on on the liveness probe,

00:30:14.800 --> 00:30:21.080
as well as the hands-on on this part, creating a pod specification declaratively.

00:30:22.900 --> 00:30:24.420
let me include this command

00:30:24.420 --> 00:30:29.000
Let's try that, and then the liveness probe, and then you can go for a lunch break

00:30:29.000 --> 00:30:31.080
and then be back by

00:30:31.080 --> 00:30:32.080
one PM.

00:30:33.120 --> 00:30:39.640
Is that okay for all? You can go for lunch and then at the end you can do this exercise; it's up to you. I'm leaving it up to you

00:30:39.640 --> 00:30:42.980
to do a working lunch to complete these commands.

00:30:43.940 --> 00:30:46.660
any questions before that

00:30:50.220 --> 00:30:51.180
all right

00:30:51.180 --> 00:30:52.080
yep

00:30:52.760 --> 00:30:56.380
I'm going to drop from this call. I will be back by 12:55,

00:30:57.200 --> 00:30:58.900
so have a great lunch

00:30:59.420 --> 00:31:03.840
and see you guys back by one PM. Thank you, thank you for listening.

00:31:35.020 --> 00:31:37.900
Hello everyone. Yeah, hello, hello.

00:31:40.320 --> 00:31:44.900
Yeah, so is it obvious on my face? Let me turn off the camera.

00:31:46.100 --> 00:31:47.320
never just kidding

00:31:49.080 --> 00:31:50.280
okay so

00:31:51.100 --> 00:31:52.300
yeah okay

00:31:53.920 --> 00:31:56.760
I'm good, it's just that my stomach is not good.

00:31:56.760 --> 00:31:59.580
okay other parts are working fine

00:32:01.880 --> 00:32:10.760
Okay, so I just saw all your chats, and then what Sergeant said in the chat. Buzzy, can you, would you like to summarize that,

00:32:11.300 --> 00:32:12.760
if you don't mind

00:32:13.740 --> 00:32:14.760
yeah yeah please

00:32:16.300 --> 00:32:17.700
uh-huh

00:32:21.420 --> 00:32:22.780
uh-huh

00:32:43.140 --> 00:32:45.940
uh-huh

00:32:49.980 --> 00:32:52.460
uh-huh

00:32:55.340 --> 00:32:56.740
uh-huh

00:32:56.760 --> 00:32:58.200
uh-huh

00:32:58.760 --> 00:33:00.560
each end we have

00:33:00.560 --> 00:33:05.340
Okay, we have only one endpoint, right? By the endpoint.

00:33:08.180 --> 00:33:09.040
okay

00:33:09.760 --> 00:33:10.760
uh-huh

00:33:19.600 --> 00:33:20.760
uh-huh

00:33:32.500 --> 00:33:34.240
uh-huh

00:33:34.240 --> 00:33:34.760
uh-huh

00:33:36.760 --> 00:33:37.580
okay

00:33:37.580 --> 00:33:41.980
uh-huh

00:33:44.760 --> 00:33:44.860
uh-huh

00:33:44.860 --> 00:33:49.460
uh-huh

00:34:26.760 --> 00:34:28.760
Mmh, mmh, mmh.

00:34:35.060 --> 00:34:36.640
Mmh, okay.

00:34:52.460 --> 00:34:52.960
Mmh, mmh.

00:35:02.700 --> 00:35:04.300
Mmh, mmh.

00:35:04.540 --> 00:35:05.860
Mmh, mmh, mmh.

00:35:50.780 --> 00:35:54.900
Mmh, mmh, mmh.

00:35:54.900 --> 00:35:56.480
Mmh, mmh, mmh.

00:36:01.100 --> 00:36:06.180
Mmh, mmh.

00:36:07.340 --> 00:36:09.860
Mmh, mmh.

00:36:10.580 --> 00:36:12.680
Mmh, mmh, mmh.

00:36:14.560 --> 00:36:20.340
Mmh, mmh.

00:36:29.300 --> 00:36:33.720
Mmh, mmh, mmh.

00:36:38.380 --> 00:36:39.860
Mmh, mmh.

00:36:39.880 --> 00:36:44.420
Mmh, mmh, mmh.

00:36:56.200 --> 00:37:00.940
Mmh, mmh, mmh.

00:37:24.980 --> 00:37:30.220
Mmh, mmh, mmh.

00:37:52.420 --> 00:37:56.600
Mmh, mmh, mmh.

00:37:56.760 --> 00:38:03.480
Mmh, mmh, mmh.

00:38:47.800 --> 00:38:49.840
Mmh, mmh.

00:38:49.840 --> 00:38:49.860
Mmh, mmh.

00:38:49.860 --> 00:38:50.160
Mmh, mmh.

00:38:50.340 --> 00:38:56.340
I don't think we can have those kind of data in the New Relic backend.

00:38:57.920 --> 00:38:59.580
Do you think?

00:39:16.340 --> 00:39:18.480
Number of transactions received.

00:39:22.440 --> 00:39:23.640
Okay.

00:39:26.180 --> 00:39:27.640
Successfully processed.

00:39:33.540 --> 00:39:35.900
Correct. Correct. For that we don't need.

00:39:44.940 --> 00:39:48.720
Total amount of transactions received.

00:39:48.720 --> 00:39:51.160
Number and amount.

00:39:51.300 --> 00:39:54.200
Total amount of transactions successfully processed.

00:39:54.480 --> 00:39:56.740
Number of transactions is the count.

00:39:57.620 --> 00:39:59.360
Total amount of transactions.

00:40:00.500 --> 00:40:01.780
Total amount.

00:40:07.040 --> 00:40:07.560
Number.

00:40:13.400 --> 00:40:20.020
Yeah, but we need to, we can check with Circon on this.

00:40:20.120 --> 00:40:21.780
Because there are many amounts, right?

00:40:23.040 --> 00:40:25.120
The charged amount, or the delivery fees are there.

00:40:25.160 --> 00:40:27.600
Is it going to be a combination of charged or delivered?

00:40:28.280 --> 00:40:30.320
Those things we need to clarify.

00:40:30.320 --> 00:40:33.580
Total amount of transactions received.

00:40:33.580 --> 00:40:35.200
Amount of transaction.

00:40:35.800 --> 00:40:37.320
What do you mean by amount here?

00:40:40.340 --> 00:40:42.600
Transaction successfully processed.

00:40:45.340 --> 00:40:48.620
I feel, I feel 0.12 and 0.34 are same.

00:40:49.740 --> 00:40:52.180
But we can better check with Circon.

00:40:52.240 --> 00:40:53.780
Is he available? Is there?

00:40:53.980 --> 00:40:57.080
Maybe we can drop a message to him.

00:40:57.240 --> 00:41:00.300
Asking what this amount of transaction means.

00:41:00.300 --> 00:41:03.920
Okay, I will ping him right away.

00:41:07.940 --> 00:41:10.460
Okay, and then the fourth one.

00:41:11.000 --> 00:41:14.280
Success rate equals number of transactions successfully processed

00:41:14.640 --> 00:41:17.240
Divided by number of transactions received.

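The success-rate formula being discussed is just processed divided by received. A quick sketch with made-up counts; the numbers 1140 and 1200 are placeholders, not real data.

```shell
received=1200    # hypothetical: number of transactions received
processed=1140   # hypothetical: number successfully processed
# success rate = processed / received, as a percentage
awk -v p="$processed" -v r="$received" 'BEGIN { printf "%.1f%%\n", (p / r) * 100 }'
```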
00:41:27.060 --> 00:41:27.580
Correct.

00:41:29.620 --> 00:41:32.220
Yes, we don't need it here.

00:41:34.220 --> 00:41:35.120
Error distribution.

00:41:35.520 --> 00:41:38.300
Which means this is from the error.

00:41:38.560 --> 00:41:40.120
So on the error event.

00:41:40.760 --> 00:41:42.260
Maybe here we need the JSON.

00:41:42.600 --> 00:41:44.600
We need to capture the type of errors.

00:41:46.020 --> 00:41:46.920
Let's see.

00:41:47.080 --> 00:41:47.220
Some of the errors.

00:41:47.220 --> 00:41:51.860
Something unavailable or some invalid input.

00:41:53.100 --> 00:41:55.220
There may be different kinds of error that we see.

00:41:56.960 --> 00:41:57.260
Correct.

00:41:58.140 --> 00:41:59.140
Correct. Correct.

00:42:01.880 --> 00:42:04.240
I understand that for a few SAP endpoints

00:42:04.300 --> 00:42:06.580
They are returning some standard error format.

00:42:06.820 --> 00:42:10.780
But for others they are returning some random error messages, right?

00:42:12.880 --> 00:42:13.160
So.

00:42:13.260 --> 00:42:15.300
First we need to group it by error code.

00:42:15.900 --> 00:42:16.940
Status code.

00:42:17.200 --> 00:42:20.680
And then if possible in the JSON if we can also include the error message.

00:42:20.680 --> 00:42:21.300
It will be.

00:42:22.560 --> 00:42:24.880
Easy to identify the problem.

00:42:25.600 --> 00:42:28.000
Like we sometimes used to get that.

00:42:29.080 --> 00:42:31.100
If SAP is down we get some errors.

00:42:32.360 --> 00:42:34.320
Sometimes error for not found.

00:42:34.880 --> 00:42:37.600
Yeah, so in the error distribution we need to have a JSON.

00:42:39.160 --> 00:42:40.700
But for the others.

00:42:42.180 --> 00:42:43.500
We don't need a JSON.

00:42:43.500 --> 00:42:48.960
And we need to check whether the amount means the field that they are referring to.

00:42:50.460 --> 00:42:50.720
Or not.

00:42:51.900 --> 00:42:54.000
Just give me a minute.

00:42:54.100 --> 00:42:55.520
Let me go ahead.

00:42:55.720 --> 00:42:57.880
Go ahead for the total amount of transaction.

00:43:08.980 --> 00:43:11.780
Yeah, received I think for the result we can.

00:43:14.260 --> 00:43:15.960
Received S is right.

00:43:16.000 --> 00:43:19.840
We are already sending some APM data to New Relic.

00:43:21.000 --> 00:43:23.280
So "received" we can easily get.

00:43:23.360 --> 00:43:28.600
But for "successfully processed" we need to send a custom event, because successfully processed means

00:43:29.440 --> 00:43:30.900
The SAP call.

00:43:31.700 --> 00:43:36.420
should succeed, and then on the success path we need to capture that, correct?

00:43:36.420 --> 00:43:38.440
So the received part,

00:43:38.560 --> 00:43:39.660
Yes, we can directly do that.

00:43:39.660 --> 00:43:42.840
But successful processing we need to do with a custom event only.

00:43:48.480 --> 00:43:53.220
So what you are saying is you want the event type to be foot page journal entry.

00:43:53.540 --> 00:43:56.420
And then the attributes.

00:43:58.220 --> 00:43:59.420
Array will have.

00:44:01.060 --> 00:44:02.100
Let's see.

00:44:02.980 --> 00:44:03.500
Status.

00:44:04.820 --> 00:44:05.340
Queued.

00:44:05.960 --> 00:44:09.360
And for the different statuses it's going to have different values.

00:44:12.020 --> 00:44:14.880
And I can have one more attribute if I want.

00:44:15.300 --> 00:44:16.420
If I want to include some.

00:44:17.320 --> 00:44:18.560
Inputs like a JSON.

00:44:18.680 --> 00:44:22.460
Then I can have another key, and in the object I can put the JSON.

00:44:23.120 --> 00:44:24.220
OK OK OK.

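A sketch of what such a custom event payload for the New Relic Event API could look like. Everything here is an assumption for illustration: the event type name (the discussed "foot page journal entry" rendered as FootPageJournalEntry), the attribute names, and the sample values. The commented curl line follows the Event API shape and would need a real account ID and insert key.

```shell
# Hypothetical custom event payload; names and values are illustrative only.
cat > events.json <<'EOF'
[
  {
    "eventType": "FootPageJournalEntry",
    "status": "queued",
    "errorCode": 500,
    "errorMessage": "SAP endpoint unavailable",
    "input": "{\"documentId\": \"example\"}"
  }
]
EOF
cat events.json
# To actually send it (requires real credentials):
# curl -X POST "https://insights-collector.newrelic.com/v1/accounts/$ACCOUNT_ID/events" \
#      -H "Api-Key: $INSERT_KEY" -H "Content-Type: application/json" -d @events.json
```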
00:44:24.220 --> 00:44:28.360
So for the endpoint level we can have the grouping.

00:44:35.240 --> 00:44:35.840
Performant.

00:44:45.940 --> 00:44:48.800
One event for four states, or four events?

00:44:49.740 --> 00:44:51.260
That's that's what we need to determine.

00:44:51.540 --> 00:44:51.760
Right.

00:44:51.760 --> 00:44:54.480
So if it is going to be just one event for four states,

00:44:54.720 --> 00:44:59.760
And we are going to query the same event for all different.

00:45:02.300 --> 00:45:03.760
Visualizations let's say.

00:45:06.380 --> 00:45:09.600
Same event for all different dashboards or visualizations.

00:45:10.640 --> 00:45:13.240
Will it have any impact on the performance?

00:45:14.460 --> 00:45:15.760
Even when you OK.

00:45:18.960 --> 00:45:20.760
And having different events,

00:45:22.180 --> 00:45:25.160
or having one event capturing all the attribute values.

00:45:26.360 --> 00:45:29.680
OK, I'm just seeing some blogs from New Relic.

00:45:29.880 --> 00:45:30.540
Yeah go ahead.

00:45:31.980 --> 00:45:33.760
Yes it's on the task success.

00:45:35.380 --> 00:45:37.500
We, yeah, it's in the task description, and

00:45:38.200 --> 00:45:39.780
We think that it's about.

00:45:40.500 --> 00:45:42.000
Success response from SAP.

00:45:42.140 --> 00:45:42.880
We thought it.

00:45:43.880 --> 00:45:45.880
Mm hmm.

00:45:50.880 --> 00:45:52.160
Mm hmm.

00:46:18.460 --> 00:46:21.660
OK, but what is the difference between one and three?

00:46:22.080 --> 00:46:23.360
Number of transactions received.

00:46:23.860 --> 00:46:25.460
Total amount of transaction received.

00:46:25.640 --> 00:46:27.420
Both are the same, right?

00:46:30.140 --> 00:46:31.040
OK process.

00:46:32.600 --> 00:46:33.860
OK OK OK.

00:46:34.580 --> 00:46:35.180
Yeah yes.

00:47:19.860 --> 00:47:21.700
Kind of clarification.

00:47:46.120 --> 00:47:53.700
But I really have doubts about this amount aggregation on our integration side, because this is

00:47:53.700 --> 00:47:58.400
something that the vertical can directly provide. We are just passing the data; while passing

00:47:58.400 --> 00:48:02.000
the data we are capturing and showing the amount. I am not sure whether

00:48:04.520 --> 00:48:10.020
by amount Circon meant this, or the quantity he mentioned as an amount.

00:48:10.020 --> 00:48:13.200
I already asked him a question in the chat, but let's see.

00:48:17.440 --> 00:48:18.040
okay

00:48:28.400 --> 00:48:29.260
okay