Now try to delete one of these pods and observe what happens. You get a new pod back immediately. Describe the new pod and look at how it is controlled: when you destroy the pod, you can see it is controlled by the DaemonSet, just as pods in a deployment are controlled by a ReplicaSet. This matters for node agents such as monitoring or logging agents, because those must be up and running on every node at all times; if one goes down, the DaemonSet brings it back automatically. Rolling updates and revision history work on a DaemonSet very much like they do on a deployment.

Now the next task on DaemonSets, task two. My use case is this: I have worker nodes, and some of them use SSD while the others use HDD. I want a DaemonSet, say a different storage agent, that runs only on the SSD nodes. What you do is add a label to the nodes you want, and in the DaemonSet you use a nodeSelector with that label; the DaemonSet then runs only on the nodes that carry the label, as simple as that. The scheduler searches for nodes carrying that label, and the kubelet creates the DaemonSet pod only on those nodes.

So for task two: add the label, then check that it got added with kubectl get nodes --show-labels. You will see the keys and values for the master and each worker, and under worker one the new label appears; you can add the same label to multiple nodes if you need the agent on all of them. Now I want it to run only on worker one, that is my use case. Create the YAML file with the nodeSelector, apply it, do a get ds -o wide and you see it has gone to the SSD node, and get pods -o wide shows it running on worker one only. This is how you pin a specific agent or version to specific nodes using label names. You can use an existing label or create any label you like; the syntax is the same. A minimal sketch of the label and the DaemonSet manifest is below.
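Here is a minimal sketch of task two. The node name, the label key disktype=ssd, and the fluentd image are assumptions for illustration, not the exact values from the lab file.

  kubectl label node worker1 disktype=ssd      # add the label to the SSD node
  kubectl get nodes --show-labels              # confirm the label was added

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: ssd-agent
  spec:
    selector:
      matchLabels:
        app: ssd-agent
    template:
      metadata:
        labels:
          app: ssd-agent
      spec:
        nodeSelector:
          disktype: ssd                        # run only on nodes carrying this label
        containers:
        - name: agent
          image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2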
Now let's delete the DaemonSet. It's gone.

Next scenario: can we run a DaemonSet on the master? This one is bigger but interesting, so access the link and I'll explain. The use case is the one we already talked about: the master carries a taint. If I do a describe on the master node you can see that taint. You can override it for a specific pod by adding a toleration. Think of going to a movie: without a ticket they don't let you in, but with a ticket you are allowed. The toleration is the ticket, and it is written in the pod template. In this example you can see it refers to the control-plane and master keys; there are two toleration entries, one for each key, using operator Exists and the NoSchedule effect. Once you run this, the Fluentd DaemonSet will run even on the master, because you have given it that exception. Please run it, then copy the next command and check whether a pod really did land on the master. It goes into a separate namespace, so check which namespace it is: they are using kube-system. Do a get pods -o wide -n kube-system and you see Fluentd running on the master, worker one and worker two, because we told it to tolerate the taint. That is the exception for this use case; it is not a good idea to simply untaint the master, because the master is meant for management. Don't untaint it completely.

That was the third use case. Now the fourth one, which you can do or just observe: how do you update and roll back a DaemonSet? I have an application running as a DaemonSet and I want to update it to a new version, like what we did for deployments. It supports both strategies: OnDelete, where a pod only gets the new version after you delete it, and RollingUpdate, which replaces pods with the new version without downtime. The default is RollingUpdate. So you create the DaemonSet, for example with version 2.5.2, apply it, and it gets created. To update, you either edit the DaemonSet and change the image version, or use kubectl set image as we did before; either way the pods roll to the new version.

Let's do this from the link I gave you. Go down and start from that step, and delete the existing DaemonSet first. We are talking about the big YAML file, so paste it rather than typing it, and apply it. Check which namespace it belongs to; again it is kube-system. Do a get pods -o wide and look at the image version: it is 2.5.2 on the fluentd-elasticsearch DaemonSet. Now I want to update it to another version, so instead of the edit method let's use set image and see what happens: it updates the DaemonSet to 2.6.0 immediately. A sketch of both pieces is below.
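A minimal sketch of the two pieces discussed above, assuming the fluentd-elasticsearch DaemonSet from the Kubernetes documentation example; the exact names and image in your lab file may differ.

  # tolerations inside the DaemonSet's pod template, so it can run on the master
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

  # update the image to the new version with set image
  kubectl set image daemonset/fluentd-elasticsearch \
    fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 \
    -n kube-system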
It is done. Now run the previous get command again and you see the v2.6.0 image. Get the pods, pick one of the fluentd-elasticsearch pods (you can grep for fluentd), and describe it. In the describe output you can see it is controlled by the DaemonSet, and you can also see the image version on the pod: it is now using 2.6.0.

Similarly, you can record your changes so that you keep a revision history and can revert to a previous revision. So in case you make a change you want to undo, you can roll the DaemonSet back, the same as what we did before for deployments. If you want, do this as the last exercise: make a change, check the revision history, and see whether you can revert to the previous revision; a sketch of the commands is below. Confirm once you are done, and then delete the DaemonSet. Play around and confirm when you're finished. Any questions? Okay, everyone is good.
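The rollout subcommands work on DaemonSets just as they do on deployments; a minimal sketch, assuming the same fluentd-elasticsearch DaemonSet in kube-system:

  kubectl rollout history daemonset/fluentd-elasticsearch -n kube-system   # list recorded revisions
  kubectl rollout undo daemonset/fluentd-elasticsearch -n kube-system      # roll back to the previous revision
  kubectl rollout status daemonset/fluentd-elasticsearch -n kube-system    # watch the rollout finish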
Now the next topic is StatefulSets. What do we mean by stateless, and what do we mean by stateful? For example, are deployments stateless or stateful? To summarize: in a deployment, when you delete the deployment or delete a pod, a new pod is created, but you lose the content of the old pod; the new pod does not come up with the data that was in the previous one. So deployments are stateless. Stateful means that if the pod goes down and comes back, it still has the same data it had before. Stateful preserves the data; stateless does not.

A simple example: you are playing an online game, say a football game, and the power or the internet goes down. When you come back, you are at the same screen where you were; that is stateful. If instead you have to restart from the beginning, that is stateless. A StatefulSet preserves exactly that kind of state: it preserves the data, it preserves the hostname of each pod even if the pod goes down and comes back, and it preserves the ordering of creation and termination. These properties are very important for databases and big data, where the data must be strictly preserved.

Here is an example of a StatefulSet. I have a database cluster. My use case is that one pod is the primary, so the primary has to be created first and only after that the secondaries; it has to follow the order 0, 1, 2. The secondaries are read replicas: any write operation goes to the primary, and any read goes to the read replicas. The configuration is preserved using ConfigMaps, and the DNS names of the pods are stored as well. This is what the lab shows; open it and we will walk through it as a demo, because we cannot fully run it here: it requires kops and a dynamic storage class, so we will just observe what is happening.

To understand the diagram: there is a ConfigMap holding the MySQL primary and secondary configuration. Then they create a service with clusterIP set to None; this is called a headless service, and a headless service remembers the hostnames of the pods you create. There is also a normal service for internal communication; if you don't mention any specific type, you get a ClusterIP service. And there is the StatefulSet itself, MySQL with three replicas; because a real database takes a long time to come up, the written example uses nginx instead. You get pods 0, 1 and 2, each with init containers that run first, before the actual containers are created. You also see health checks, volumes and a dynamic storage class; in enterprises the StatefulSet, services, probes and volumes all sit together in one very big YAML file. When you apply it, it creates the copies 0, 1 and 2 in that order, one after the other, and termination happens in the reverse order.

So when you want ordering, stable hostnames and per-pod storage, the answer is StatefulSets. The headless service remembers the hostnames when something happens, and each pod has its own storage claim, so when a pod comes back it gets the same claim attached to it. We use this for databases, data warehouses and big data. Updates support OnDelete and RollingUpdate with the same syntax as before. Is everyone clear on why StatefulSets and what they are? A minimal sketch of the headless service and the StatefulSet is below.
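A minimal sketch of a headless service plus a StatefulSet, based on the nginx web example from the Kubernetes documentation; the names, image and storage size are assumptions for illustration, and the claims only bind if a dynamic storage class is available.

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx
  spec:
    clusterIP: None          # headless: gives each pod a stable DNS name
    selector:
      app: nginx
    ports:
    - port: 80
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    serviceName: nginx       # must point at the headless service
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx
          volumeMounts:
          - name: www
            mountPath: /usr/share/nginx/html
    volumeClaimTemplates:    # one PVC per pod, reattached if the pod is recreated
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi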
In the demo I recorded on the kops setup, this is what you would see. You start with a headless service and map it to a StatefulSet with two replicas, and the volumeClaimTemplate is your PVC. Create it and it creates pod 0 and pod 1, in that order. Then you log into each of the pods and check the hostnames. Create a test pod, go inside it, and do an nslookup to resolve those hostnames; they resolve to the pod IPs through the cluster DNS. Now delete the pods. They come back again, and when they are recreated they come up in the proper order, 0 then 1, and they have the same hostnames as before; if you resolve them again from the test pod, the names still resolve. Then you scale up, and you see pods 0 through 4 created in the proper order, each with its own PVC; the old PVCs are still there and still attached to the same pods. Scaling down terminates pods in the reverse order, and even if you scale down or delete the StatefulSet, the PVCs are preserved; they are not deleted, so you have to delete them manually if you want them gone. Updates work the same way: if you modify the template to a different version, say nginx latest, it also updates in the proper order, one pod after the other. And then you can delete the StatefulSet.

Let's try to run it on our cluster anyway and observe what happens. Copy it, paste it, apply, then get sts, get the service, and get pods -o wide. It gets stuck in Pending. Do a describe on the pod and it tells you why: it requires a dynamic storage class, which we don't have, and we cannot do that on our kubeadm setup. Look at what is happening in the background with the PVCs: it tries to create a PVC automatically, which in turn tries to create a PV. Let's also do a scale up so we can watch the behaviour; it tries to create the next pod and stays stuck for the same reason. It won't run here, but this is the process; the concept of StatefulSets is what matters.

When you look at this YAML file, observe how much it contains, because this is what you do when you work in enterprises: a headless service for internal communication, the StatefulSet itself, volume mounts, init containers, resource requests and limits, readiness probes. This one example pulls together a lot of what you have learned into a single file. Let's go ahead and delete it.
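A minimal sketch of the checks from the demo, assuming the web/nginx StatefulSet sketched above; the busybox test image is an assumption.

  kubectl get sts web                                   # see the StatefulSet and ready replicas
  kubectl get pods -l app=nginx -o wide                 # pods come up in order: web-0, then web-1
  kubectl run -it dns-test --image=busybox --rm --restart=Never -- nslookup web-0.nginx
                                                        # resolve a pod's stable DNS name via the headless service
  kubectl scale sts web --replicas=5                    # scale up: pods are created in order
  kubectl get pvc                                       # one PVC per pod; they survive scale-down and deletion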
Now the last topic: Kubernetes security.

Let's consider that you're working for a company and the administrator needs to provide access for all of you, for example for Gene or for CK as users. Based on what you are supposed to work on in the project, they have to give you the proper access. First they provide a mechanism for you to log in: it could be a password, a token, or single sign-on. That part is authentication: how users log in. Once you are logged in, there is the question of what you can do: maybe modify access on pods, read-only access on databases. That is authorization. And then there is the question of how far you can go and what you are allowed to create: for example, inside a pod you may be restricted to /tmp or one particular folder and no further, or limited in how many pods you can create. Those limits are admission controls. So authentication is how users log in, authorization is what they can do once they are in, and admission control is to what level they can go and what they may create.

This can be defined for human users, for groups, and also for applications. Some applications need to communicate with the cluster; for that we have service accounts. Pods, for example, talk to the API server in the background using a service account. You can see this yourself: take any pod and do a describe, and you will see the service account and its token mounted into the pod. Service accounts are what pods, the API components and other services use for that communication. For all three of these, users, groups and service accounts, we can define policies and access controls.
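A quick way to check this yourself, as mentioned above; the pod name is a placeholder, and depending on your kubectl version the describe output shows a Service Account field or just the mounted token volume.

  kubectl describe pod <pod-name> | grep -i "service account"   # which service account the pod runs as
  kubectl get serviceaccounts                                    # every namespace has at least "default"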
Now, how do we get started with authentication? Users can be authenticated using client certificates, token files, passwords, service account tokens, OpenID, and several other methods. We will do a lab using certificates, and then tokens. You can also allow users to log in with SSO. Do we all know what SSO is? We all use it: when you log into a website and it says "log in with LinkedIn or Google", you are authenticating with your Google account but using a different site. That is federation, or SSO. In the same way you can use another provider's federation or your office domain login to get into Kubernetes without having an account in Kubernetes itself. For that you need OpenID Connect or OAuth configured on premises; these are the providers that help with SSO. Once they are configured, your administrator gives you a URL; you log in to your identity provider, which hits your AD or LDAP domain, and you get authenticated against your office domain. On the other side, OpenID Connect understands that you are authenticated and gives you a token, a JSON Web Token. The token has a duration, and during that time you have certain permissions to work in Kubernetes; the token carries the permissions and the expiry. So: you log in, you get authenticated, OpenID or OAuth generates a token in the backend, and then you work based on the access given to you. This is how we can give outside users, temporary users, or even applications restricted access.

Once they are in, what they can do is authorization. Authorization defines what users, groups and service accounts can do: read, write and so on. We can define permissions per user and per group, and say at which level they apply and on which API resource. The verbs are get, list, create, update, patch, watch and delete; get and list are the most minimal. You mention which resource, which sub-resource, which namespace and which API group. If you are a developer using the API directly, these map to methods like GET, PUT and POST.

For authorization modes there are several options. AlwaysAllow means completely open access; AlwaysDeny means everything is denied. Then there is attribute-based access control, ABAC, where a single policy contains both the permissions and the user or group. For example, an ABAC policy can give the user alice permission to do anything to all resources: the policy has the API version, kind Policy, and a spec with the user, the namespace, the resources and the API groups. You are mixing the permission and the user in the same object, which is not good; it is always better practice to separate the users, the groups and the policies. They also show a policy for the kubelet, for example read-only on pods: kind Policy, user kubelet, all namespaces, and the resource. Again nothing is separated, so we want a better way of managing it.
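For reference, this is roughly what those ABAC policies look like in the Kubernetes documentation, one JSON object per line in the policy file, mixing the user and the permissions together, which is exactly the drawback discussed above:

  {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
  {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}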
That is where role-based access control, RBAC, comes in, and it is the preferred mode. Role-based means, for example, that a developer gets only minimal access while the administrator gets full access, and that is done with roles and role bindings. A role is a set of permissions scoped to a specific namespace; a cluster role is for the complete cluster. The administrator first creates the permissions: a role saying, for example, read access on pods, with the API group and the verbs get, watch and list. Like this they create different permissions for different namespaces. Then they bind the role to a user, a group or a service account with a role binding: the subject kind can be User, Group or ServiceAccount, and the roleRef points at the role, so the subject gets exactly the access defined in that role. So the user jane, say, gets read access on pods in that namespace. You can do the same with cluster roles, in which case you do not mention a namespace because it applies across the cluster; secrets are a typical example, and a cluster role binding works the same way. You can list multiple resources in one rule, restrict how far they can go, add config maps and so on; all of these are examples you can use to create roles and then bind them to your users. Are we good, everyone? What is authentication, what is authorization, and what are roles and role bindings? A role is bound to users, groups or service accounts within a namespace; a cluster role binding applies across the cluster for users, groups and service accounts. A minimal sketch follows.
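A minimal Role and RoleBinding, following the pod-reader example from the Kubernetes documentation; the namespace and the user name jane are the ones used in that example.

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: default
    name: pod-reader
  rules:
  - apiGroups: [""]            # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: read-pods
    namespace: default
  subjects:
  - kind: User
    name: jane                 # name is case sensitive
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io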
Now, for this to work you need to issue a certificate for the user or the group. You can use a third-party certificate authority or the cluster's own CA; this is the public key infrastructure we are talking about in the backend. The user logs in, the certificate is checked, the public and private keys are exchanged, and all communication between the user and the cluster is then trusted and encrypted. So you create a certificate and bind that certificate to the user; when the user then accesses the cluster from their laptop, the other side recognises them as a known user and allows the communication: client hello, the server confirms, they verify the keys, and then they talk. If you go with an external certificate authority you have to download the tooling, create a certificate signing request, get it approved, mention the duration, your name, company and domain, download the certificate, and then start using it with the users. For the lab we have an easier method: OpenSSL, which is free and fine for playing around with certificates. First you create a key with a key size; using the key you create a certificate signing request; then you sign it with openssl x509, pointing at the cluster CA under the PKI directory, and you mention the duration; this also requires the details of the CA key. That gives you the user's certificate.

Once you have the certificate, you assign it to the user, for example user1: you set the client certificate to user1.crt and the client key to user1.key, so kubectl knows where the certificate and the key are. Both the certificate and the key must be there; with them, the user's traffic is encrypted and authenticated. Then you create a role, the same as what you just saw, and a role binding. One step is missing in the handout: after creating the role YAML you have to apply it, kubectl apply -f role.yml, then kubectl get roles and kubectl describe role to confirm the read permission. Then you bind it, mentioning the user you created, user1, in the role binding, and apply that too. Now the user has access.

How can the administrator be sure the proper access was given? There is a utility for that: kubectl auth can-i. They can test, for example, auth can-i get pods in that namespace as user1; the answer is yes, because that access was granted. Ask about delete and the answer is no. You can try different verbs, different namespaces and different usernames; only what was actually granted says yes, everything else says no. This is how they simulate the permissions before they give out access, and once they are confident, they share the details with the users. Similarly you can create a cluster role, in which the namespace is simply missing, create a cluster role binding, and test the same way.

Okay, let's do the lab now. Do lab one: copy and paste the commands. If you want, you can cat the key files and see what was created. For the country name and the other certificate fields, just use the defaults and press enter all the way through. Then do it yourself; I've given you an idea of how to do it, and you can ask me any questions.
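A minimal sketch of the certificate and verification steps described above, assuming a kubeadm cluster where the CA lives in /etc/kubernetes/pki and a user called user1; the file names are illustrative.

  # create a key and a signing request, then sign it with the cluster CA
  openssl genrsa -out user1.key 2048
  openssl req -new -key user1.key -out user1.csr -subj "/CN=user1"
  openssl x509 -req -in user1.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
    -CAcreateserial -out user1.crt -days 365

  # register the credentials with kubectl
  kubectl config set-credentials user1 --client-certificate=user1.crt --client-key=user1.key

  # apply the role and binding, then simulate the permissions
  kubectl apply -f role.yml
  kubectl apply -f rolebinding.yml
  kubectl auth can-i get pods --namespace default --as user1      # expected: yes
  kubectl auth can-i delete pods --namespace default --as user1   # expected: no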
A few things came up while you were doing it. Some of you copied extra characters into the YAML file; open the file again, go to insert mode with i, remove the stray lines, save and proceed. Also remember the apply step that was missing from the handout: after creating the role YAML, do the apply, then get and describe, and then proceed. And one more: if the error says the API version is not set, do a cat on the file and check the first line; if the apiVersion line is missing, remove the file with rm rolebinding.yaml (rm and the file name), create it again with the full content, any name is fine, save with escape colon wq, and run it again. Good.

Gene and CK, is it working for you? Notice, by the way, that there are a lot of cluster roles and rules already in the cluster, because every component communicates using these rules; if you describe the cluster roles in kube-system, for example, you will see lots of them. So this is how enterprises provide access for their users, and we can do the same for groups or service accounts. There is much more in Kubernetes, but we are trying to cover what is possible in the three days.

Let's take a break now; it is about 3 PM, so break until 3:20 PM, and after the break we still have network policies and then ingress and the ingress controller.

I am back, let's continue. The next topic in security is the network policy. What is a network policy? You have a cluster with different pods in it, and you want to say which pods may communicate and which may not. Access the use-cases link; in the example there is a pod in the default namespace and a pod in a namespace called foo. The rule says: allow traffic from this one to that one, but not the other way around. So one direction can communicate and the reverse cannot, and similarly for the other pairs, as the colours in the diagram show. You can do this within a cluster, across namespaces, or for traffic coming from outside. For all these use cases, when you want to enforce strict access, you define network policies. Network policies are used to allow or deny traffic like this within a namespace or across namespaces, at layer 3 and layer 4: between pods, within the cluster, or to and from the outside world. The example linked has all the scenarios.
In that example you have a NetworkPolicy defined for pods that carry a particular label, say role: db; any pod with that label is what the policy applies to. The policy defines both directions: ingress means inbound, what traffic can reach those db pods, and egress means outbound, what traffic can leave them for the outside world. Under ingress "from" you define the allowed sources: an ipBlock range, and within that range you can add an "except" sub-range that is not allowed; a namespaceSelector saying which namespace the traffic must come from; a podSelector with the pod labels; and the ports they may come in on. Similarly, under egress "to" you define where traffic from the db pods may go: an IP range and a port. Like this you can stack multiple rules. In the spec you use the selectors and the policyTypes Ingress and Egress; this isolates the selected pods in the default namespace for both directions.

There are also the default policies. You can deny all ingress, which gives you a closed posture: no inbound traffic at all except what you later allow. You can allow all ingress, an open posture. The same pair exists for egress: deny all outbound, or allow all outbound. You can also mix these defaults with the specific rules you write. In the backend this is enforced by the CNI plugin, the cluster network such as Calico or Weave Net; they work together to allow only the permitted communication. One more example from the page: an egress policy for pods labelled db, allowing them to reach a given network on a destination port; in that one they are actually talking about a range of port numbers, a start and an end, not a single port.

Now let's do a small example ourselves. The good practice is to deny first and then allow what is needed, because by default all traffic is allowed: pods in a namespace can communicate with each other freely. A sketch of the two policies we will use, a default deny and an allow rule, is below.
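A minimal sketch of the two policies, assuming a backend whose pods are labelled app=web and client pods labelled app=frontend; these label values are assumptions for the lab, not fixed names.

  # deny all inbound traffic to every pod in the namespace (closed posture)
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny-ingress
  spec:
    podSelector: {}            # empty selector = all pods in the namespace
    policyTypes:
    - Ingress
  ---
  # then allow only pods labelled app=frontend to reach the app=web pods
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-web
  spec:
    podSelector:
      matchLabels:
        app: web
    policyTypes:
    - Ingress
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: frontend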
So here is the flow of the lab. First you create a backend pod; think of it as a database. You map it to a service, because as you learned earlier it is not good practice to hit the pod directly; you reach it through the service. Then you check that some other pod can reach the backend: create a test pod and try to reach the service IP on its port. It works, because by default all pods in the namespace can communicate with the backend. That is not good practice, so next we deny it: apply the deny policy, closed, nothing open. Create a test pod again and try to reach the backend; it cannot, because everything is denied. But somebody does need to communicate with it, so we modify the rule and allow traffic from pods in the default namespace that carry the label frontend. Apply it and confirm you can see the policy. Now test: create a pod with the label frontend; the --rm and -it flags mean the test pod is deleted as soon as you exit it, which saves cleaning it up manually. This time it works, because the label matches. Note that in the commands you need to replace the service name or IP with your own exact values. Then try the same thing with a pod that does not have the label; it will not work, because it has to match the frontend label. So to recap: initially any pod can communicate, that is the default; then you deny and nothing can communicate; then you allow with least privilege, and only the labelled pods can reach it. Are we good?

Let's start. First step: delete any pods that were created earlier, note the service IP of the backend, and check that you can reach it. You can see mine getting created: the service with its cluster IP, and the backend pod.
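A minimal sketch of the test pod used for the reachability checks, assuming the backend service is called web on port 80 and the allowed label is app=frontend; adjust the names to your own lab.

  # test from an unlabelled pod (works before the deny policy, fails after)
  kubectl run test --rm -it --image=busybox --restart=Never -- wget -qO- -T 5 http://web:80

  # test from a pod carrying the allowed label (works once the allow rule is applied)
  kubectl run test --rm -it --image=busybox --restart=Never \
    --labels="app=frontend" -- wget -qO- -T 5 http://web:80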
Then we hit a problem: all of a sudden, nobody was able to create any pod. Everyone started getting network errors, and the logs showed a proxy error; it was the same common log for me and for the others, and until then everything had been fine. I checked with the backend team while we tried things together: restarting the kubelet with systemctl restart kubelet on the affected node, describing the stuck pods, and so on. To see what was really going on, everyone ran kubectl get pods across all namespaces with -o wide, and that showed the cause: the Weave Net pods and the kube-proxy pods were gone. To recover, the plan was to rebuild the cluster membership: do a kubeadm reset with the force flag on the nodes, and then run kubeadm init again on the master.
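A minimal sketch of the recovery commands we used, in the order they were run; the join command itself comes from the token printed on the master.

  # on every node: tear down the old state
  sudo kubeadm reset --force

  # on the master: initialise again, then print a fresh join command
  sudo kubeadm init
  kubeadm token create --print-join-command

  # on each worker: paste the printed "kubeadm join ..." command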
Do the same, everyone: a reset on all the nodes, then on the master get a new token and join the workers with it; go to the master, get the token, and paste the join command on each worker node. After the init you also have to run the three kubeconfig commands, the mkdir and the two that follow; run each one completely, answer yes when prompted, and wait for each to finish. Then apply the network manifest again so Weave Net comes back, and check: kubectl get nodes should show the nodes Ready, and kubectl get pods across all namespaces with -o wide should show the network and proxy pods running again. A few of you pasted the commands partially or all at once and got errors; run each command properly, one at a time. Once get nodes and get pods look good, proceed with the lab from where you left off. Gene, you are able to proceed now; Charles, redo the three commands, then the apply command, then get nodes, and continue. If you only joined one worker for now, that's fine, just proceed on that one and do the next step.
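The three kubeconfig commands referred to above are the standard ones that kubeadm init prints; the CNI manifest name is a placeholder for the Weave Net manifest used earlier in the course.

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  # re-apply the cluster network (CNI) manifest, then verify
  kubectl apply -f <weave-net-manifest>.yaml
  kubectl get nodes
  kubectl get pods --all-namespaces -o wide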
Back to the network policy lab. You have to copy the exact command and replace the service name with your own service IP, so that you don't make a mistake. Copy the wget command to a notepad, replace the placeholder with your IP, for example mine is 10.97.118.12, and keep it handy, because copying and pasting this repeatedly is where people slip up. It works. Now we deny it, then create one more test pod and check again. A couple of small corrections while you type: the file name is netpol, a single o, not a double o, and watch the extra space after -f and -n. If you copy the whole wget line to a notepad and update it there, you won't have to redo it each time.

Are we good? Then you can do the next lab, the egress one, in which you disallow outbound traffic: you create a pod, define a policy denying egress, and when you log into the pod and try to access any website, it cannot, because outbound is blocked. For the service you can refer to it either by name, like web, or by its service IP; both work. Please go ahead with step three. Note that here you are modifying the outbound rules, not the inbound ones. A sketch of the egress policy is below.
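A minimal sketch of a deny-all egress policy for the test pod, assuming the pod is labelled app=test; the label is an assumption for illustration.

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: deny-egress
  spec:
    podSelector:
      matchLabels:
        app: test              # applies to pods with this label
    policyTypes:
    - Egress                   # no egress rules listed = all outbound traffic denied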
To test the outbound block, don't try to reach an external website; just try the web service again: use the up arrow to recall the command, remove the IP, put web and colon 80 with the trailing slash, and press enter. It will sit there and eventually tell you it cannot reach port 80; that's the point, it means nothing is going out. Understood? So we have learned both inbound and outbound; this is your networking security.

Before the last topic, a quick list of security best practices to follow: use TLS, control and restrict access to the cluster database, enable audit logging, rotate credentials, encrypt data, and set up alerts and notifications so you hear about vulnerabilities or incidents as soon as they happen.

Now, what are ingress and the ingress controller? Let me give an example. Have you worked with load balancers before, for instance application load balancers in Amazon? Say we have a load balancer that has to support several routes: people can reach amazon.com, or amazon.com/in, or amazon.com/us, or x.amazon.com and y.amazon.com. Same domain, multiple endpoints; this is called content-based or path-based routing. The load balancer has to route each of those to the right backend. In Amazon you have a listener, and in the listener you write the rules about where to send the traffic; the backends are target groups. Target group one has instances one and two and serves amazon.com; target group two has instances three and four and serves amazon.com/in. The listener rules say: if someone hits amazon.com, send the traffic to group one; if someone hits amazon.com/in, send it to group two. Are we good, everyone?

Very similarly, we can do this in Kubernetes. Take the same use case: route amazon.com and amazon.com/in through a load balancer in Kubernetes. I create a deployment for amazon.com with pods one and two, and another deployment with pods three and four for amazon.com/in, and I map each deployment to a service: service one reaches the first pair, service two reaches the second. Clear so far?
So each deployment is mapped to a ClusterIP service: deployment one with two pods behind service one, deployment two with two pods behind service two. Now a user tries to reach the website through the load balancer. When they hit amazon.com, the load balancer needs to send the traffic to service one, and for /in to service two. Do you agree? For that we use two components: the load balancer needs an ingress controller, and it uses ingress rules. The ingress rules are what say "the default goes to this service; if someone hits /in, send the traffic to that service". I'll repeat: the load balancer by itself does not understand ingress rules; it needs the ingress controller component, and the controller and the rules work together. With a normal cloud load balancer, for example the Amazon load balancer, you have to install the ingress controller component additionally and then define the rules. But there is also the open-source NGINX ingress controller, which acts as both the load balancer and the ingress controller in one. Either way, you then write the ingress rules: this path goes to this service, /in, /us, /uk and so on. The ingress controller itself runs as a pod that gets created on one of the nodes, and the traffic flows through that pod to the backends. Are we good?
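A minimal sketch of path-based ingress rules for the example above; the host name and service names are purely illustrative.

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example-routing
  spec:
    ingressClassName: nginx          # handled by the NGINX ingress controller
    rules:
    - host: amazon.example.com       # illustrative host, not the real domain
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: service-one      # default backend service
              port:
                number: 80
        - path: /in
          pathType: Prefix
          backend:
            service:
              name: service-two      # traffic for /in goes here
              port:
                number: 80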
--> The next step: it takes some time for that controller service to come up, so let it wait. --> Meanwhile, let's create our backend: the deployment. --> Write the deployment of the hello app; it got created. --> Now map it to a ClusterIP service. --> Then run kubectl get svc and you see a new service got created, hello-server. --> You can do a curl against its cluster IP --> and check that you can reach the application internally; --> it answers from the hello server. --> So we have a deployment, a ClusterIP service, --> and the ingress controller acting as the load balancer. --> Now we have to define the ingress rule. --> Are we good till now? --> For that we create an ingress file. --> The rule says: if any traffic comes in on /hello, --> it must reach your backend service. --> So create ingress.yml, go to insert mode, paste the ingress definition, and apply it. --> It got created. --> You can run kubectl get ingress to see it, --> and kubectl describe ingress shows the rule: --> traffic to /hello reaches the hello-server service, --> which in turn reaches the backend port. --> A sketch of this ingress rule follows below. --> Are we good till now? Please confirm. --> OK, Gene, the next step. --> In our setup a cloud load balancer is not available, --> so the LoadBalancer service type will stay pending and will not work. --> We are going to switch the controller service to the NodePort method, --> and then we will access it as public-IP:nodePort/hello and see the output. --> So go back to your server and run kubectl get svc -n ingress-nginx. --> You see the external IP is pending. --> Now let us modify it: kubectl edit svc, then the service name, then -n ingress-nginx. --> That opens the service definition in the editor. --> Go almost to the bottom and you see type: LoadBalancer. --> Press i for insert mode, delete the word LoadBalancer,
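As a sketch, the ingress.yml could look like this (the backend service name hello-server comes from the lab, but the port number is an assumption; adjust to whatever the class file used):

# ingress.yml - route /hello to the hello-server ClusterIP service (port is an assumption)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-server
            port:
              number: 8080
# a rewrite-target annotation may be needed depending on how the backend handles the /hello path

Then kubectl apply -f ingress.yml, followed by kubectl get ingress and kubectl describe ingress hello-ingress, confirms the /hello rule is in place.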
--> and replace it with NodePort, capital N, capital P, then Escape, colon, save. --> OK, now we're done. --> Now run the same command again with get instead of edit: kubectl get svc -n ingress-nginx. --> You see the service is now exposed on node ports; --> the first one is the node port mapped to port 80, and that is the one we access. --> So access the public IP of a worker node, colon, that node port number, then /hello. --> It can take a little time. --> Try it against worker one and worker two. --> It will normally respond from the node where the controller pod got created; --> on the other nodes it may not work, --> because the request has to hit the controller and from there it reaches the backend service. --> Since my controller got created on worker one, it works for me on worker one; --> for some of you the controller is on worker two, so it works on worker two. --> Is it running for you all? --> Gene, use the public IP, not the private IP, and not the master's IP; --> use the worker node where the controller got created. --> To find out which node that is, run kubectl get all -n ingress-nginx -o wide --> and look at the ingress controller pod; the right-hand column shows the node it is running on. --> In your case it is on worker two, so use the worker two public IP. --> The request goes to the controller on worker two and from there to the backend service. --> OK, others, are we good? --> One second. --> Type clear, then k get svc -n ingress-nginx, and press Enter. --> Now take the public IP of the worker you are using --> and try that IP with the node port in the browser. --> It will work like this. --> Are we good?
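The same steps as commands, as a short sketch (the service name ingress-nginx-controller and the URL pattern are assumptions based on the standard manifest):

# Confirm the controller service is now NodePort and read the port mapped to 80
kubectl get svc -n ingress-nginx

# Equivalent non-interactive way to make the type change, instead of kubectl edit
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"type":"NodePort"}}'

# Find which node the controller pod landed on
kubectl get pods -n ingress-nginx -o wide

# Test from outside the cluster using that node's public IP and the node port
curl http://<worker-public-ip>:<node-port>/hello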
--> Again, I'll put it in the chat. --> In the browser, type http:// and then the IP I gave you in the chat, --> then colon and the node port, and at the end a forward slash and hello. --> The node port here is 30996. --> Press Enter. --> That's it. --> OK, team, are we good, everyone? --> This is a simple example; hello is just one path. --> You can use any path, and you can have multiple paths: --> you create multiple services and map each path to its service in the ingress rules. --> Now the last topic: monitoring. --> For monitoring we use Prometheus. --> We have very few tools for monitoring in Kubernetes itself. --> We can use the metrics server, which we talked about with probing, --> but the metrics server is not very good for this. --> There is the Kubernetes dashboard, but it is limited and not very secure. --> So we use Prometheus, which collects the data from the backends. --> How does it work? --> Once you install Prometheus, it comes with all its components: --> the server, alerting, and the exporters. --> The exporters run on every node, including the master. --> They collect the data from each node and store it in the Prometheus server. --> So to monitor your cluster you configure Prometheus, --> and these components gather the information and store it in the server. --> Then what do you do? You install Grafana, --> and in Grafana you say: get the information from this Prometheus. --> You add Prometheus as a data source in Grafana, and the data comes from Prometheus. --> You define in Prometheus what you want to collect, and Grafana displays it. --> Let me show you. --> There are lots of monitoring tools: Prometheus, Datadog, Splunk, and so on. --> Prometheus is an open-source tool, highly customizable, --> but it does not have a proper dashboard of its own. --> To install it you download and extract the software, --> and then you write a file called prometheus.yml, --> in which you define what you want to scrape, which data, and at what interval. --> Prometheus serves its UI on port 9090, --> where you can see some of the information it extracts. --> The information is collected by a component called an exporter; --> the exporters gather the metrics from the master and worker nodes --> and save them on your Prometheus server. --> Then we use Grafana. --> Grafana is an open-source dashboard tool that can integrate with almost anything: --> any cloud, on-premises systems, and so on. --> It is free and it is more of a dashboarding tool. --> Once it is configured, you go inside it and open the plugins.
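A minimal prometheus.yml sketch (all targets, ports, and intervals below are illustrative assumptions, not the class configuration):

# prometheus.yml - minimal scrape configuration (values are placeholders)
global:
  scrape_interval: 15s            # how often to pull metrics

scrape_configs:
  - job_name: prometheus          # Prometheus scraping itself
    static_configs:
      - targets: ['localhost:9090']

  - job_name: nodes               # node exporters on master/worker nodes (hypothetical IPs)
    static_configs:
      - targets: ['10.0.0.11:9100', '10.0.0.12:9100']

You would then start it with ./prometheus --config.file=prometheus.yml and browse to http://<server>:9090; Grafana is pointed at that same URL when Prometheus is added as a data source.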
--> In the plugins I choose Prometheus. --> You get an option to integrate it once it is deployed, --> and the data starts coming from Prometheus. --> So you install Prometheus, you get a URL for it, --> and you refer to that URL as the data source in your Grafana. --> Then the data shows up in the front end. --> Grafana is an open-source dashboard tool in which you can create multiple dashboards. --> There is a plugin option: choose Prometheus, give the URL, --> and it starts pulling the data into your dashboards. --> It looks something like this. --> So, very briefly: Prometheus is your extraction tool; --> you install it, it has its components, --> and it can integrate with Grafana or other tools; --> in Grafana, once it is installed, you visualise that information. --> There is one more topic we are not covering: Helm charts. --> You can look into it yourselves. --> Helm is a package management tool for Kubernetes, --> and with Helm charts you can install Grafana, Prometheus, and similar tools quite easily. --> You can play around with it. --> That was a very brief look at monitoring. --> Do you work on monitoring tools? Any monitoring tool? Which one? --> Oh, you are aware of this, OK, good. --> So, the same idea: Prometheus is your scraping tool, from which you get the data; --> you configure it through prometheus.yml --> and then integrate Grafana, where you see the details in the dashboard. --> Yes, team, that is all of the topics; I think we covered everything exactly, --> with about three minutes to spare. --> I hope you learned new things in this training, everyone, --> and that the objectives you had are met. --> CK, Junvi, Gene, are you happy with what you learned? --> OK. --> What you are supposed to do now is practice. --> You can create your own free AWS account, --> and I'm sure your company will also provide one, --> so you can practice these labs anytime; it does not have to cost money. --> Keep on practicing; practice makes perfect, --> and practicing will also teach you how to learn more and how to troubleshoot. --> If you don't practice, the theory alone will not help with Kubernetes. --> Repeat the same labs, try to experiment, and see how much more you can learn. --> Any questions? --> Do you take a group photo or something like that, do you do that normally? --> I'm not sure, I don't think so; I was just asking in case. --> Thank you, all the best. --> I have sent the email to Alex, I think. --> Yes, I sent it. --> Thank you, looking forward. --> Bye bye. --> Bye bye.