We'll check the monitoring and see how the topics are getting the messages, and then we will go ahead and install Grafana and Prometheus to see the same thing in our own system. If you are able to open the link I have pinged in the chat, you should be able to see a dashboard like this.
As we go through, I will check the topics. First I will go and open the username topic and see how the messages are coming in. You see, every 20 seconds we get a user record with the score, the lives, and what the level is. This is one of the topics that listens to the messages coming in from specific users.
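If you want to peek at those records yourself, one way is to tail the topic with the Confluent CLI. This is just a sketch: the topic name username and the record fields shown in the comment are assumptions based on what is on screen, not the exact schema.

    # Consume the topic from the beginning; topic name is an assumption.
    confluent kafka topic consume username --from-beginning
    # A record might look roughly like:
    # {"user": "player1", "score": 120, "lives": 2, "level": 3}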
So let's see in the stream lineage how the whole workflow actually goes through. You can see there are 349 producers. Why are there 349 producers? Basically, every time you play, a Lambda function is triggered asynchronously; that serverless function pushes one message to the topic. These are all the different producers producing data. So ideally what happens in this case is that every time a new producer comes in, it will produce only one message, not multiple, because every play triggers a new event internally, from the application point of view.
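To give a feel for what such a serverless producer does per invocation, here is a sketch of a single-record produce against the Confluent REST Produce API. This is not the actual Lambda code; the REST endpoint, cluster ID, topic name, and credentials are all placeholders.

    # One record per invocation, pushed over the Kafka REST Produce API (v3).
    # Endpoint, cluster ID, topic, and key/secret are placeholders.
    curl -s -u "$API_KEY:$API_SECRET" \
      -H "Content-Type: application/json" \
      -d '{"value": {"type": "JSON", "data": {"user": "player1", "score": 120}}}' \
      "https://<rest-endpoint>/kafka/v3/clusters/<cluster-id>/topics/username/records"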
So then whatever those producers produce goes to the topic username, and we can see the number of partitions here, the bytes in and the bytes out, and also the messages in and messages out of the topic. We can also see the retention time, how long we keep the data; as of now I kept it at 1 hour, and the retention size I have set to infinite. You can set these to whatever values you want, and we also have the cleanup policy, as we set here.
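If you want to set the same retention from the command line instead of the UI, the Confluent CLI supports it. The topic name is an assumption; the values mirror the ones above.

    # 1 hour retention by time, unlimited retention by size.
    confluent kafka topic update username \
      --config "retention.ms=3600000,retention.bytes=-1"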
From here we have two ksqlDB queries that we have added. One is stats per user, which gives us the details of every user, like the highest score; we will take the top from here. The other, summary stats, is basically the summary over everyone who has played till now; that is what the summary stats query produces. From these we have two more topics: one topic is primarily to push the stats per user, and another is to push the summary.
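As a sketch of what such a query can look like, here is a hypothetical per-user aggregate run through the ksql CLI. The stream name, column names, and ksqlDB address are assumptions, not the exact queries from the demo.

    # Hypothetical per-user aggregate; adjust names to your own stream.
    cat > stats_per_user.sql <<'EOF'
    CREATE TABLE STATS_PER_USER AS
      SELECT user_id,
             MAX(score) AS highest_score,
             COUNT(*)   AS games_played
      FROM USER_EVENTS
      GROUP BY user_id
      EMIT CHANGES;
    EOF
    ksql --file stats_per_user.sql http://localhost:8088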
These two are streams and these two are topics. In general, Kafka uses internal topics to manage its offsets and all of that stuff; you can also see and observe those internal topics here. Kafka manages them internally.
So now, as you play along, you can see the throughput coming in and what the consumption is, whether it is getting consumed or not; basically it gives us the production versus the consumption. And as you continually play around, there is storage being occupied. Now, you see there are only a few metrics here; there are more metrics we can get. So what I will do is export these metrics to Prometheus and push them to Grafana. I will do it on my system, and along with me you can also try to do it. Basically, using this Prometheus configuration we can scrape all the metrics from there and put them in our own Grafana dashboard, so that if you want to monitor more details or more information, we can do it on our own; since we are not using any cloud for this, we have our own VM. I will walk through the steps of how we can do it.
OK, before that, I think everybody is able to see the dashboard. Let's see, do we have it? Does anybody have a USB-C adapter, from HDMI to USB-C?
the HDMI to USB-C, can we get it from the HDMI to USB-C, can we get it from the
-->
HDMI to USB-C, can we get it from the HDMI to USB-C, can we get it from the
-->
HDMI to USB-C, can we get it from the HDMI to USB-C, can we get it from the HDMI
-->
to USB-C, can we get it from USB-C, can we get it from USB-C, can we get it from
-->
USB-C, can we get it from USB-C, can we get it from USB-C, can we get it from
-->
USB-C, can we get it from USB-C, can we get it from USB-C, can we get it from
-->
Yeah, this is a sample that illustrates how we can take the real-time data that is coming in, stream it, and use it for different purposes. What we will do is take these metrics coming from the same place; this is going to be our cluster, and we are going to monitor all of these components, say the Schema Registry, and the cluster with all the Kafka nodes. Generally the metrics will also cover the Kafka Connect part and all of that stuff. We will use these metrics to check things in Grafana.
What I will do is install Prometheus and start it. Prometheus is basically an open-source scraping tool: it will scrape the information from any exposed metrics endpoint and store it locally. From there we use Grafana. Grafana is an open-source metrics dashboard tool; you can pull in data from different sources, and the dashboards are very good for managing and monitoring, so from there you can manage the cluster better.
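If you want to set up Grafana on the VM as well, here is one common way on a Debian-based system. This mirrors Grafana's documented apt install and may not be the exact method used in class.

    # Add Grafana's apt repository and install (Debian/Ubuntu).
    sudo mkdir -p /etc/apt/keyrings
    wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor \
      | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
    echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" \
      | sudo tee /etc/apt/sources.list.d/grafana.list
    sudo apt-get update && sudo apt-get install -y grafana
    sudo systemctl start grafana-server   # UI on http://localhost:3000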
I have the file here: if we go to the VM, on my network drive I have the install file. Open it and go along with me one step at a time, and we will be able to install Prometheus and connect the Confluent cluster to our Grafana.
The first step: open a terminal anywhere. Generally I would say keep it somewhere like your student folder; go to the student folder, right-click, and choose Open in Terminal. Any folder will be fine, but I am starting in the student folder. We need to use the terminal for all of these things. So the first step is that I am going to download Prometheus to my local machine. Once it is downloaded, the next step is basically unzipping it into the same local folder. If you do an ls, basically listing the directory, I can see the Prometheus folder created, I mean it got extracted, and I will cd into it.
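The download and extract steps look roughly like this. The release version and URL are assumptions; use the exact ones given in the install file.

    # Download, extract, and enter the Prometheus release directory.
    wget https://github.com/prometheus/prometheus/releases/download/v2.53.0/prometheus-2.53.0.linux-amd64.tar.gz
    tar -xzf prometheus-2.53.0.linux-amd64.tar.gz
    ls
    cd prometheus-2.53.0.linux-amd64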
I am going inside it, and we have different files here. We will modify the prometheus.yml as per the YAML given here; the details are already given in the file. As of now I am using my own username and password to connect to the cloud, so you can try using the same; if you don't have your own, you can use mine from the script.

Once we are in, what we do is move the Prometheus binary to the /usr/local/bin folder, just to make sure it is on the path and can run from anywhere. We will do the steps one by one, so do move it to the bin; let me know if you are stuck at any one of the steps. Then, after moving the Prometheus binary, I am making two directories: one is /etc/prometheus and the other is /var/lib/prometheus, for the config and data and all that stuff. You just need to copy and paste the commands one by one, and we can retry if you miss any one of the steps below. Once we have that, we need to modify the file that goes into the /etc/prometheus folder. What we can do is go to the file system; inside the file system we have /etc, and inside /etc we can see there is a prometheus folder.
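As a sketch, the move and directory steps are the following. The paths follow a common Prometheus install layout; check them against the install file.

    # Put the binary on the PATH and create the config and data directories.
    sudo mv prometheus /usr/local/bin/
    sudo mkdir -p /etc/prometheus /var/lib/prometheus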
Have you all done it till now? Is it still there? OK. Once we have done that, we will go to this area; you see the prometheus.yml. There will be an existing job called prometheus, which is going to scrape the local metrics of Prometheus itself. What we will do is add one more job, for the Confluent cluster. We can copy the job from this file, or what I will do is put all the resources in the chat now; in the file I am giving you just one of the resources, so let me share them so you get all of those, and you can just copy it from the chat. Are you able to copy from the chat here? OK, I think it got copied, but it took time; or else you can just copy from that file. What you need to do is take this job and paste it into the prometheus.yml. We already have one job, which is called prometheus; we will create one more job. Make sure you get the formatting right, or else it will throw an error, since it is YAML. So now I have added one more job, for Confluent Cloud, and I am wondering whether it is going to pull the details.
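The extra scrape job looks roughly like this. The API key, secret, and the lkc cluster ID are placeholders, and this sketch appends to the local prometheus.yml on the assumption that scrape_configs is the last section of the file; otherwise paste the job under scrape_configs by hand.

    # Append a Confluent Cloud scrape job under scrape_configs.
    cat >> prometheus.yml <<'EOF'
      - job_name: confluent-cloud
        scrape_interval: 1m
        scrape_timeout: 1m
        scheme: https
        metrics_path: /v2/metrics/cloud/export
        static_configs:
          - targets: ['api.telemetry.confluent.cloud']
        basic_auth:
          username: <CLOUD_API_KEY>
          password: <CLOUD_API_SECRET>
        params:
          resource.kafka.id: ['lkc-xxxxx']
    EOF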
This is the one I just modified, in the same place where we have the initial Prometheus file. What I am going to do is move this file to the /etc/prometheus folder; that is there in the next step, in line number 37. So I am going to move it there, and once it is moved you can go to the file system, under /etc.
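That move is a single command, assuming the edited file is in the current directory:

    # Install the edited config where Prometheus expects it.
    sudo mv prometheus.yml /etc/prometheus/prometheus.yml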