1) How do you automate Kubernetes deployments?
The developer checks the application code into the code repository. We then Dockerise that application code: build a container image from a Dockerfile and push the image to a container image repository. From there, we deploy the container image to a Kubernetes cluster, where it runs as a container. The first part is called the build stage and the second part is called the deploy stage. A Jenkins pipeline automates both: it builds the Docker image, saves it to the container image repository, and deploys the image to the Kubernetes cluster.
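As a minimal sketch of the two stages, assuming a hypothetical image named myapp, a hypothetical registry registry.example.com, and a deployment that already exists on the cluster (in a Jenkins pipeline these would typically be separate build and deploy stages):

# Build stage: produce and publish the container image
docker build -t registry.example.com/myapp:1.0.0 .
docker push registry.example.com/myapp:1.0.0

# Deploy stage: roll the new image out to the cluster and wait for it
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.0.0
kubectl rollout status deployment/myapp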
2) How do you secure your Kubernetes app?
When it comes to security, there are two aspects to Kubernetes security. One is the security of the applications running on the cluster; the other is DevSecOps, which is basically DevOps plus security, i.e. securing the container DevOps lifecycle itself. For application security, you secure your pods, namespaces and nodes using role-based access control (RBAC), IAM Roles for Service Accounts (IRSA), etc.
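To illustrate the RBAC part, here is a minimal sketch that grants a hypothetical service account app-sa read-only access to pods in a hypothetical app namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app
  name: pod-reader
rules:
- apiGroups: [""]          # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app
  name: read-pods
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: app
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io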
3) How do you cost/performance optimize a Kubernetes app?
When it comes to Kubernetes cost, the first component is the control plane cost, which is fixed. Most of your Kubernetes cost will come from your worker nodes: how many there are and which instance types they use. So how does the number of worker nodes get chosen? When you define your container image in a pod spec, you also define a resource specification, i.e. how much CPU and how much memory you want this container to use. This is where a lot of cost optimization comes in, because most of the time you will see unused CPU and memory allocation, just as with a compute instance where you select a high tier with more CPU and RAM than you actually use.
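For reference, that resource specification lives on each container in the pod spec; the names and numbers below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.0.0
    resources:
      requests:          # what the scheduler reserves; this drives node count and cost
        cpu: "250m"
        memory: "256Mi"
      limits:            # hard cap; usage beyond this is throttled or OOM-killed
        cpu: "500m"
        memory: "512Mi"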
So the way to optimize cost and performance is to detect the CPU and memory wastage at the pod level. In Kubernetes, you utilize the Metrics Server: once installed, it can tell you how much CPU and memory is actually being used on average over time. Doing this analysis manually from the raw data would be a difficult task, so you should utilize tools that gather the Metrics Server data and give you actionable cost and performance insights. One of these tools is CloudWatch Container Insights, which works with EKS; it collects the data and shows you, for example, the top 10 memory-intensive pods, and you can then dive deep and optimize the CPU and memory specifications. Among third-party tools, Kubecost gives you actual dollar amounts: how much you are wasting, and how much you would save if you reduced a given allocation. CloudHealth is another super popular third-party option, and Kubernetes Resource Report is another. These are a few popular ones, but there are many more. So it is important to identify the biggest cost component, the unused CPU and memory mentioned above, and then walk through the thought process of how you detect that waste using these tools.
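With the Metrics Server installed, you can also inspect actual usage directly; the namespace below is hypothetical:

# Actual CPU/memory usage per node and per pod (requires the Metrics Server)
kubectl top nodes
kubectl top pods -n app

# Compare against what each pod requested
kubectl get pods -n app -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory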
4) Tell me about a challenge you faced in Kubernetes?
In Kubernetes, each pod uses an IP address from your VPC. So as your application grows, with a lot of concurrent pods running at the same time, there is a chance that you will run out of addresses in your VPC. That is one such challenge. The mitigation is that you can add additional subnets afterwards, even after the cluster is defined.
5) How do you scale Kubernetes?
There are two main ways to scale an app. One is the Horizontal Pod Autoscaler (HPA), which increases the number of pods, and the other is the Cluster Autoscaler, which increases the number of nodes. Let's say you have two worker nodes and your application is running on them at around 50% utilization. As your application traffic increases, the HPA spawns more and more pods on those two worker nodes, and at some point the two nodes will be at full capacity. To scale further, the Cluster Autoscaler will create more EC2 worker nodes. This process takes a little bit of time: the new EC2 instance has to come up from your AMI, so there will be a little bit of latency.
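A minimal HPA sketch, assuming a hypothetical deployment named myapp and targeting 50% average CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50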
What if your application is super critical and you cannot afford that kind of latency? With cluster over-provisioning, you bring extra worker nodes up ahead of time. So even though, say, only two worker nodes are in use by your real application pods, you keep additional worker nodes running and fill them with placeholder pods called pause pods. As the traffic increases and the first two worker nodes approach 100% utilization, the extra EC2 instances are already up and running, so your application does not need to spend any more time spinning up additional EC2s. All the scheduler has to do is evict the pause pods and replace them with your actual application pods.
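One common way to sketch this over-provisioning is a low-priority deployment of pause pods that reserves capacity and is evicted first when real pods need room (names and sizes below are hypothetical):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1                   # below the default priority of 0, so these pods are evicted first
globalDefault: false
description: "Placeholder pods that hold spare cluster capacity"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2
  selector:
    matchLabels:
      run: overprovisioning
  template:
    metadata:
      labels:
        run: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:         # sized to hold roughly the spare capacity you want per node
            cpu: "1"
            memory: "1Gi"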
6) How do you upgrade a Kubernetes cluster on a cloud provider?
Step 1) Upgrade the master/control plane nodes first. The control plane can be upgraded without any downtime for the workloads: services already running on the worker nodes keep running, although no new scheduling happens in the cluster while the control plane is being upgraded.
Step 2) Upgrade the worker nodes. There are two types of upgrades available: the in-place upgrade and the out-of-place upgrade.
In-Place Upgrade:
You can upgrade the version of Kubernetes running on the worker nodes in a node pool by specifying a more recent Kubernetes version for the existing node pool. For each worker node in turn, you first drain it to prevent new pods from starting and to evict existing pods, and then terminate it so that a replacement worker node is started in the same node pool, running the more recent Kubernetes version you specified.
Out-of-Place Upgrade:
You can 'upgrade' the version of Kubernetes running on the worker nodes in a node pool by replacing the original node pool with a new node pool whose worker nodes run the appropriate Kubernetes version. Having drained the existing worker nodes in the original node pool to prevent new pods from starting and to evict existing pods, you can then delete the original node pool. The new worker nodes started in the new node pool run the more recent Kubernetes version you specified.
The preferred way is the out-of-place upgrade.
Steps for upgrading worker nodes using the out-of-place upgrade method:
On the Cluster page, display the Node Pools tab, and then click Add Node Pool to create a new node pool, specifying the required Kubernetes version for its worker nodes.
For the first worker node in the original node pool, prevent new pods from starting and evict existing pods by entering: kubectl drain <node name> (see the command sketch after these steps).
Repeat the previous step for each remaining worker node in the original node pool, until all of its worker nodes have been drained.
On the Cluster page, display the Node Pools tab, and then select Delete Node Pool from the Actions menu beside the original node pool.
If you have a load balancer configured for your worker nodes, you might have to switch it to point to the new IP addresses of the new worker nodes.
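Putting the drain portion together as commands, a minimal sketch (node names are hypothetical and the extra flags depend on your workloads):

# Drain each node in the original pool so its pods reschedule onto the new pool
kubectl drain pool1-worker-1 --ignore-daemonsets --delete-emptydir-data
kubectl drain pool1-worker-2 --ignore-daemonsets --delete-emptydir-data

# Verify the pods have landed on the new node pool
kubectl get pods --all-namespaces -o wide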
7) Can you explain Kubernetes architecture?
There are mainly two different types of nodes in a Kubernetes cluster: master nodes and worker nodes. On the master we have the etcd cluster, the controllers, the scheduler and the kube-apiserver, and on the worker nodes we have the container engine, the kubelet and kube-proxy.
The etcd cluster is the database; it stores the cluster's information in key-value format. The controllers for each resource are responsible for making the current state come closer to the desired state. The scheduler schedules the work to the different worker nodes; it has the resource usage information for each worker node. The kube-apiserver is the primary management component of Kubernetes and is responsible for orchestrating all operations within the cluster. It exposes the Kubernetes API, which external users use to perform management operations on the cluster, and which the various controllers use to monitor the state of the cluster and make the necessary changes. We use Docker as the container engine for Kubernetes to run the containers. The kubelet is an agent that runs on each worker node and communicates with the master node. Kube-proxy is a proxy service that runs on each node and helps make services available to external hosts.
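You can see most of these components on a running cluster; the exact pod names vary by distribution:

# Control plane and node components typically run in the kube-system namespace
kubectl get pods -n kube-system

# List nodes with their roles (control plane vs worker)
kubectl get nodes -o wide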