Thursday, 7 July 2022
Kubernetes Cluster design
1. Single Cluster
Pros:
- If you have only one Kubernetes cluster, you need only one copy of all the resources needed to run and manage a Kubernetes cluster, including cluster-wide services such as load balancers, Ingress controllers, authentication, logging, and monitoring. If you have only a single cluster, you can reuse these services for all your workloads, and you don't need multiple copies of them for multiple clusters.
- As a consequence of the above point, fewer clusters are usually cheaper, because the per-cluster resource overhead of running many clusters costs money.
- Administrating a single cluster is easier than administrating many clusters.
Cons:
- If you have only one cluster and that cluster breaks, then all your workloads are down.
- A single incident like this can produce major damage across all your workloads if you have only a single shared cluster.
- If multiple apps run in the same Kubernetes cluster, this means that these apps share the hardware, network, and operating system on the nodes of the cluster. This may be an issue from a security point of view.
- If you use a single cluster for all your workloads, this cluster will probably be rather large (in terms of nodes and Pods). The problem is that larger clusters put a higher strain on the Kubernetes control plane, which requires careful planning to keep the cluster functional and efficient.
2. Many small single-use clusters
With this approach, you use a separate Kubernetes cluster for every deployment unit.
Pros:
- If a cluster breaks, the damage is limited to only the workloads that run on that cluster. All the other workloads are unaffected.
- The workloads running in the individual clusters don't share any resources, such as CPU, memory, the operating system, network, or other services.
- This provides strong isolation between unrelated applications, which may be a big plus for the security of these applications.
- If every cluster runs only a small set of workloads, then fewer people need to have access to each cluster.
Cons:
- Each Kubernetes cluster requires a set of management resources, such as the master nodes, control plane components, and monitoring and logging solutions.
- If you have many small clusters, you have to sacrifice a higher percentage of the total resources for these management functions.
3. Cluster per environment
With this approach, you have a separate cluster for each application environment, for example dev, staging, and prod:
Pros:
- In general, this approach isolates all the environments from each other but, in practice, this especially matters for the prod environment.
- The production versions of your app are now not affected by whatever happens in any of the other clusters and application environments.
- Nobody really needs to do work on the prod cluster, so you can make access to it very restrictive.
- You can go as far as not granting access to the prod cluster to any humans at all — the deployments to this cluster can be made through an automated CI/CD tool.
- This should greatly decrease the risk of human error in the prod cluster, which is where it matters most
Cons:
- The main disadvantage of this approach is the missing hardware and resource isolation between the apps that share each environment's cluster.
- Unrelated apps share cluster resources, such as the operating system kernel, CPU, memory, and several other services.
- If an app has special requirements, then these requirements must be satisfied in all clusters.
Tuesday, 26 April 2022
Infrastructure as Code (IaC)
What is IaC ?
Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes.
With IaC, configuration files are created that contain your infrastructure specifications, which makes it easier to edit and distribute configurations. It also ensures that you provision the same environment every time. By codifying and documenting your configuration specifications, IaC aids configuration management and helps you to avoid undocumented, ad-hoc configuration changes.
Version control is an important part of IaC, and your configuration files should be under source control just like any other software source code file. Deploying your infrastructure as code also means that you can divide your infrastructure into modular components that can then be combined in different ways through automation.
Automating infrastructure provisioning with IaC means that developers don’t need to manually provision and manage servers, operating systems, storage, and other infrastructure components each time they develop or deploy an application. Codifying your infrastructure gives you a template to follow for provisioning.
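For illustration, here is a toy example of infrastructure captured as code, using Ansible (one of the tools listed below). The playbook is a versioned file that declares the desired state of a group of servers; the webservers group name is a placeholder:

---
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: yes

Running the same playbook against a fresh server provisions it identically every time, which is the repeatability benefit described above.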
Benefits of IaC ?
Provisioning infrastructure has historically been a time-consuming and costly manual process. Infrastructure management has since moved away from physical hardware in data centers (though this may still be a component for your organization) to virtualization, containers, and cloud computing.
With cloud computing, the number of infrastructure components has grown, more applications are being released to production on a daily basis, and infrastructure needs to be able to be spun up, scaled, and taken down frequently. Without an IaC practice in place, it becomes increasingly difficult to manage the scale of today’s infrastructure.
IaC can help your organization manage IT infrastructure needs while also improving consistency and reducing errors and manual configuration.
Benefits:
- Cost reduction
- Increase in speed of deployments
- Reduce errors
- Improve infrastructure consistency
- Eliminate configuration drift
IaC tool examples
Server automation and configuration management tools can often be used to achieve IaC. There are also solutions specifically for IaC.
These are a few popular choices:
- Chef
- Puppet
- Red Hat Ansible Automation Platform
- Saltstack
- Terraform
- AWS CloudFormation
Monday, 25 April 2022
Terraform basics
What is Terraform ?
HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.
How does Terraform work ?
Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Providers enable Terraform to work with virtually any platform or service with an accessible API.
HashiCorp and the Terraform community have already written more than 1700 providers to manage thousands of different types of resources and services, and this number continues to grow. You can find all publicly available providers on the Terraform Registry, including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more.
The core Terraform workflow consists of three stages:
- Write: You define resources, which may be across multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.
- Plan: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
- Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.
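To make the three stages concrete, here is a minimal sketch, assuming the AWS provider; the resource names and the AMI ID are placeholders:

$ cat main.tf
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# a VPC and a virtual machine, as in the example above
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

$ terraform init   # download the required providers
$ terraform plan   # show the execution plan
$ terraform apply  # prompt for approval, then create the resources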
Why Terraform ?
Manage any infrastructure
Find providers for many of the platforms and services you already use in the Terraform Registry. You can also write your own. Terraform takes an immutable approach to infrastructure, reducing the complexity of upgrading or modifying your services and infrastructure.
Track your infrastructure
Terraform generates a plan and prompts you for your approval before modifying your infrastructure. It also keeps track of your real infrastructure in a state file, which acts as a source of truth for your environment. Terraform uses the state file to determine the changes to make to your infrastructure so that it will match your configuration.
Automate changes
Terraform configuration files are declarative, meaning that they describe the end state of your infrastructure. You do not need to write step-by-step instructions to create resources because Terraform handles the underlying logic. Terraform builds a resource graph to determine resource dependencies and creates or modifies non-dependent resources in parallel. This allows Terraform to provision resources efficiently.
Standardize configurations
Terraform supports reusable configuration components called modules that define configurable collections of infrastructure, saving time and encouraging best practices. You can use publicly available modules from the Terraform Registry, or write your own.
Collaborate
Since your configuration is written in a file, you can commit it to a Version Control System (VCS) and use Terraform Cloud to efficiently manage Terraform workflows across teams. Terraform Cloud runs Terraform in a consistent, reliable environment and provides secure access to shared state and secret data, role-based access controls, a private registry for sharing both modules and providers, and more.
Wednesday, 6 April 2022
Create a ReplicaSet in kubernetes
Task:
The Nautilus DevOps team is going to deploy some applications on kubernetes cluster as they are planning to migrate some of their existing applications there. Recently one of the team members has been assigned a task to write a template as per details mentioned below:
Create a ReplicaSet using nginx image with latest tag only and remember to mention tag i.e nginx:latest and name it as nginx-replicaset.
Labels app should be nginx_app, labels type should be front-end. The container should be named as nginx-container; also make sure replicas counts are 4.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.
Step 1) Create a yaml file with the given specifications.
thor@jump_host ~$ cat rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx_app
    type: front-end
spec:
  replicas: 4
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      labels:
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
Step 2) Deploy the replicaset yaml file
thor@jump_host ~$ kubectl create -f rs.yaml
replicaset.apps/nginx-replicaset created
Step 3) Validate the replica set
thor@jump_host ~$ kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   4         4         4       40s
thor@jump_host ~$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-replicaset-mgkwf   1/1     Running   0          48s
nginx-replicaset-ttj9l   1/1     Running   0          48s
nginx-replicaset-vp59p   1/1     Running   0          48s
nginx-replicaset-zkbhl   1/1     Running   0          48s
Wednesday, 16 March 2022
Deploy MySQL on Kubernetes
Task:
A new MySQL server needs to be deployed on the Kubernetes cluster. The Nautilus DevOps team was working on gathering the requirements. Recently they were able to finalize the requirements and shared them with the team members to start working on it. Below you can find the details:
1.) Create a PersistentVolume mysql-pv, its capacity should be 250Mi, set other parameters as per your preference.
2.) Create a PersistentVolumeClaim to request this PersistentVolume storage. Name it as mysql-pv-claim and request 250Mi of storage. Set other parameters as per your preference.
3.) Create a deployment named mysql-deployment, use any mysql image as per your preference. Mount the PersistentVolume at mount path /var/lib/mysql.
4.) Create a NodePort type service named mysql and set nodePort to 30007.
5.) Create a secret named mysql-root-pass having a key pair value, where key is password and its value is YUIidhb667. Create another secret named mysql-user-pass having some key pair values, where first key is username and its value is kodekloud_tim, second key is password and value is TmPcZjtRQx. Create one more secret named mysql-db-url, key name is database and value is kodekloud_db4.
6.) Define some environment variables within the container:
a) name: MYSQL_ROOT_PASSWORD, should pick value from secretKeyRef name: mysql-root-pass and key: password
b) name: MYSQL_DATABASE, should pick value from secretKeyRef name: mysql-db-url and key: database
c) name: MYSQL_USER, should pick value from secretKeyRef name: mysql-user-pass and key: username
d) name: MYSQL_PASSWORD, should pick value from secretKeyRef name: mysql-user-pass and key: password
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.
Solution :
1.) Create a PersistentVolume mysql-pv, its capacity should be 250Mi, set other parameters as per your preference.
thor@jump_host ~$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mysql"
  persistentVolumeReclaimPolicy: Retain
thor@jump_host ~$ kubectl create -f pv.yaml
persistentvolume/mysql-pv created
2.) Create a PersistentVolumeClaim to request this PersistentVolume storage. Name it as mysql-pv-claim and request 250Mi of storage. Set other parameters as per your preference.
thor@jump_host ~$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql-app
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
thor@jump_host ~$ kubectl create -f pvc.yaml
persistentvolumeclaim/mysql-pv-claim created
5.) Create a secret named mysql-root-pass having a key pair value, where key is password and its value is YUIidhb667. Create another secret named mysql-user-pass having some key pair values, where first key is username and its value is kodekloud_tim, second key is password and value is TmPcZjtRQx. Create one more secret named mysql-db-url, key name is database and value is kodekloud_db4.
thor@jump_host ~$ kubectl create secret generic mysql-root-pass --from-literal=password=YUIidhb667
secret/mysql-root-pass created
thor@jump_host ~$ kubectl create secret generic mysql-user-pass --from-literal=username=kodekloud_tim --from-literal=password=TmPcZjtRQx
secret/mysql-user-pass created
thor@jump_host ~$ kubectl create secret generic mysql-db-url --from-literal=database=kodekloud_db4
secret/mysql-db-url created
4.) Create a NodePort type service named mysql and set nodePort to 30007.
thor@jump_host ~$ cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql-app
spec:
  type: NodePort
  ports:
    - targetPort: 3306
      port: 3306
      nodePort: 30007
  selector:
    app: mysql-app
    tier: mysql
thor@jump_host ~$ kubectl create -f svc.yaml
service/mysql created
thor@jump_host ~$
3.) Create a deployment named mysql-deployment, use any mysql image as per your preference. Mount the PersistentVolume at mount path /var/lib/mysql.
&&
6.) Define some environment variables within the container:
thor@jump_host ~$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql-app
spec:
  selector:
    matchLabels:
      app: mysql-app
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-app
        tier: mysql
    spec:
      containers:
      - image: mysql
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-pass
              key: password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: database
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
thor@jump_host ~$ kubectl create -f deployment.yaml
deployment.apps/mysql-deployment created
Validate all the resources:
thor@jump_host ~$ kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/mysql-deployment-84f954fc46-hxg46   1/1     Running   0          2m21s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP          70m
service/mysql        NodePort    10.96.75.38   <none>        3306:30007/TCP   6m17s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql-deployment   1/1     1            1           2m21s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-deployment-84f954fc46   1         1         1       2m21s
Friday, 4 March 2022
Kubernetes Interview Questions
1) How do you automate Kubernetes deployments ?
A developer checks the application code into the code repository; we then Dockerise that application code, build a container image, and push that image to a container image repository. From there, we deploy the container image to a Kubernetes cluster, where it runs as a container. The first part is called build and the second part is called deploy: build the Docker image using a Dockerfile, save the image to a container image repository, and deploy the image to the Kubernetes cluster using a Jenkins pipeline.
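A minimal sketch of those two parts as shell steps (the registry URL, image name, and deployment name are placeholders; in practice these would be stages in the Jenkins pipeline):

# Build: create the image from a Dockerfile and push it to the image repository
$ docker build -t registry.example.com/my-app:1.0 .
$ docker push registry.example.com/my-app:1.0

# Deploy: point the Kubernetes deployment at the new image
$ kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.0
$ kubectl rollout status deployment/my-app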
2) How do you secure your Kubernetes app ?
When it comes to security, there are two aspects of Kubernetes security. One is the security of your application running on the cluster; the other is DevSecOps, which is basically DevOps plus security, i.e. securing the container DevOps lifecycle. For application security, you secure your Pods, namespaces, and nodes using role-based access control (RBAC), IAM Roles for Service Accounts (IRSA), and similar mechanisms.
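For example, a minimal RBAC sketch that limits a user to read-only access to Pods in one namespace (the namespace dev and user jane are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io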
3) How do you cost/performance optimize Kubernetes app ?
When it comes to Kubernetes cost, the first component is the control plane cost, which is fixed. Most of your Kubernetes cost will come from your worker nodes: how many there are and which instance types they use. So how does the number of worker nodes get chosen? When you define a container in a Pod spec, you declare a resource specification: how much CPU and how much memory you want that container to use. This is where a lot of cost optimization comes in, because most of the time you will see unused CPU and memory allocation, just as with a compute instance where you select a high tier and allocate more CPU and RAM than you actually use.
So the way to optimize cost and performance is to detect the CPU and memory wastage from the Pods. In Kubernetes, you utilize the metrics server: once installed, it can tell you how much CPU and memory is being utilized on average. Doing this manually with all that data would be a difficult task, so you should utilize tools that gather the metrics server data and give you actionable cost and performance insights. One such tool is CloudWatch Container Insights, which works with EKS; it collects the data and shows you, for example, the top 10 memory-intensive Pods, so you can dive deep and optimize the CPU and memory specifications. Third-party tools include Kubecost, which shows you the dollar amount you are wasting and how much money you would save by reducing a given allocation, CloudHealth, a very popular third-party option, and Kubernetes Resource Report. These are a few popular ones, but there are many more. So it's important to identify the biggest cost component, the unused CPU and memory mentioned above, and then detect that waste using these tools.
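For example, with the metrics server installed you can spot the heavy consumers directly, and compare the numbers against the requests declared in the Pod specs:

$ kubectl top nodes
$ kubectl top pods --all-namespaces --sort-by=memory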
4) Tell me about a challenge you faced in Kubernetes ?
In Kubernetes, each Pod uses an IP address from your VPC. So as your application grows, there's a chance that, with a lot of concurrent Pods running at the same time, you will run out of addresses in your VPC. That's one of the challenges. You can add additional subnets afterwards, even after the cluster is defined.
5) How do you scale kubernetes ?
There are two main ways to scale an app: one is the Horizontal Pod Autoscaler (HPA), which increases the number of Pods, and the other is the Cluster Autoscaler, which increases the number of nodes. Let's say you have two worker nodes and your application is running with node utilization around 50%. As your application traffic increases, you spawn more and more Pods on those two worker nodes, and at some point they will be at full capacity. To scale further, the Cluster Autoscaler will create more EC2 worker nodes. This process takes a little bit of time: depending on your AMI, the new EC2 instance has to come up, so there will be a little bit of latency. What if your application is super critical and you cannot afford that kind of latency? With cluster over-provisioning, you bring extra worker nodes up in advance. Even though only the first two worker nodes are being used by your real application Pods, on the extra worker nodes you run fake placeholder Pods called pause Pods. As the first two worker nodes approach 100% utilization and the traffic increases further, the extra EC2 instances are already up and running, so your application doesn't need to spend any more time spinning up additional EC2s; all the autoscaler has to do is replace the pause Pods with your actual application Pods.
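A minimal sketch of the HPA side (the deployment name my-app, the replica bounds, and the CPU target are placeholders; the metrics server must be installed):

$ kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70
$ kubectl get hpa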
6) How do you upgrade a Kubernetes cluster on a cloud provider ?
Step 1) Upgrade the master/control plane nodes first. We can upgrade the master/control plane nodes without any downtime: services already running on the worker nodes keep serving, although no new scheduling happens in the Kubernetes cluster while the control plane is being upgraded.
Step 2) There are two types of upgrades available for worker nodes: an in-place upgrade and an out-of-place upgrade.
In-Place Upgrade:
You can upgrade the version of Kubernetes running on worker nodes in a node pool by specifying a more recent Kubernetes version for the existing node pool. For each worker node, you first drain it to prevent new pods starting and to delete existing pods. You then terminate the worker node so that a new worker node is started, running the more recent Kubernetes version you specified. When new worker nodes are started in the existing node pool, they run the more recent Kubernetes version you specified.
Out of Place Upgrade:
You can 'upgrade' the version of Kubernetes running on worker nodes in a node pool by replacing the original node pool with a new node pool that has new worker nodes running the appropriate Kubernetes version. Having drained existing worker nodes in the original node pool to prevent new pods starting and to delete existing pods, you can then delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent Kubernetes version you specified.
The preferred way is out of place upgrade.
Steps for upgrading worker nodes using the out-of-place upgrade method:
On the Cluster page, display the Node Pools tab, and then click Add Node Pool to create a new node pool, specifying the required Kubernetes version for its worker nodes.
For the first worker node in the original node pool, prevent new pods from starting and delete existing pods by entering: kubectl drain <node name>
Repeat the previous step for each remaining worker node in the node pool, until all the worker nodes have been drained from the original node pool.
On the Cluster page, display the Node Pools tab, and then select Delete Node Pool from the Actions menu beside the original node pool.
If you have a load balancer configured for your worker nodes, then you might have to switch the load balancer to point to the new IP addresses configured on the new worker nodes.
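For reference, a sketch of the drain step from the list above (the node name is a placeholder; older kubectl versions use --delete-local-data instead of --delete-emptydir-data):

$ kubectl get nodes
$ kubectl drain <node name> --ignore-daemonsets --delete-emptydir-data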
7) Can you explain Kubernetes architecture ?
There are mainly two different types of nodes in a Kubernetes cluster: master nodes and worker nodes. On the master we have the etcd cluster, controllers, the scheduler, and the kube-apiserver; on the worker node we have the container engine, the kubelet, and kube-proxy.
The etcd cluster is the database: it stores the cluster's information in key-value format. The controller for each resource is responsible for making the current state come closer to the desired state. The scheduler assigns work to the different worker nodes; it has the resource usage information for each worker node. The kube-apiserver is the primary management component of Kubernetes, responsible for orchestrating all operations within the cluster. It exposes the Kubernetes API, which external users use to perform management operations on the cluster, and which the various controllers use to monitor the state of the cluster and make the necessary changes. We use Docker as the container engine for Kubernetes to run the containers. The kubelet is an agent that runs on each worker node and communicates with the master node. kube-proxy is a proxy service that runs on each node and helps make services available to external hosts.
Friday, 11 February 2022
Kubernetes Cluster Architecture and Components
A Kubernetes cluster consists of a set of control plane (master) and worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
Control Plane Components
The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).
Control plane components can be run on any machine in the cluster. However, for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine.
kube-apiserver
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.
The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
etcd
Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
kube-scheduler
Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
kube-controller-manager
Control plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Some types of these controllers are:
• Node controller: Responsible for noticing and responding when nodes go down.
• Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
• Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
• Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
cloud-controller-manager
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to help tolerate failures.
The following controllers can have cloud provider dependencies:
• Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
• Route controller: For setting up routes in the underlying cloud infrastructure
• Service controller: For creating, updating and deleting cloud provider load balancers
Node Components
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
Container runtime
The container runtime is the software that is responsible for running containers.
Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
Pods
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
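For example, a minimal Pod manifest that runs a single nginx container:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80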
Basics of Kubernetes
What is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
Kubernetes provides you with:
Service discovery and load balancing
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
Storage orchestration
Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
Automated rollouts and rollbacks
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
Automatic bin packing
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
Secret and configuration management
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
Create a Docker Network
Task:
The Nautilus DevOps team needs to set up several docker environments for different applications. One of the team members has been assigned a ticket where he has been asked to create some docker networks to be used later. Complete the task based on the following ticket description:
Create a docker network named news on App Server 1 in Stratos DC. Configure it to use macvlan drivers. Also configure it to use subnet 172.28.0.0/24 and iprange 172.28.0.1/24.
.thor@jump_host ~$ ssh tony@stapp01
[tony@stapp01 ~]$ sudo su -
[root@stapp01 ~]# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
[root@stapp01 ~]# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
1b1a8c1ac8b5   bridge   bridge   local
d8defea600a4   host     host     local
f2508f2f61ea   none     null     local
[root@stapp01 ~]# docker network create -d macvlan --subnet=172.28.0.0/24 --ip-range=172.28.0.1/24 news
f98779e735fe2dc56c1e7a93a9e615f78bc5324d283ff1478f64b0e0400a7389
[root@stapp01 ~]# docker network ls
NETWORK ID     NAME     DRIVER    SCOPE
1b1a8c1ac8b5   bridge   bridge    local
d8defea600a4   host     host      local
f98779e735fe   news     macvlan   local
f2508f2f61ea   none     null      local
[root@stapp01 ~]# docker network inspect news
[
{
"Name": "news",
"Id": "f98779e735fe2dc56c1e7a93a9e615f78bc5324d283ff1478f64b0e0400a7389",
"Created": "2022-02-11T22:14:03.425043109Z",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.28.0.0/24",
"IPRange": "172.28.0.1/24"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
Saturday, 29 May 2021
Git Manage Remotes
The xFusionCorp development team added updates to the project that is maintained under /opt/news.git repo and cloned under /usr/src/kodekloudrepos/news. Recently some changes were made on Git server that is hosted on Storage server in Stratos DC. The DevOps team added some new Git remotes, so we need to update remote on /usr/src/kodekloudrepos/news repository as per details mentioned below:
a. In /usr/src/kodekloudrepos/news repo add a new remote dev_news and point it to /opt/xfusioncorp_news.git repository.
b. There is a file /tmp/index.html on same server; copy this file to the repo and add/commit to master branch.
c. Finally push master branch to this new remote origin.
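The original post doesn't include a walkthrough; here is a sketch of the commands, assuming you are logged in to the Storage server with the repository at the stated path:

$ cd /usr/src/kodekloudrepos/news
$ git remote add dev_news /opt/xfusioncorp_news.git
$ git remote -v
$ cp /tmp/index.html .
$ git add index.html
$ git commit -m "Add index.html"
$ git push dev_news master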
Wednesday, 26 May 2021
Puppet Setup File Permissions
Create a Puppet programming file official.pp under the /etc/puppetlabs/code/environments/production/manifests directory on the master node, i.e. the Jump Server. Using the puppet file resource, perform the below mentioned tasks.
File beta.txt already exists under /opt/finance directory on App Server 3.
Add content Welcome to xFusionCorp Industries! in file beta.txt on App Server 3.
Set permissions 0777 for file beta.txt on App Server 3.
Note: Please perform this task using official.pp only, do not create any separate inventory file.
Step 1) Create a puppet class
root@jump_host /# cd /etc/puppetlabs/code/environments/production/manifests
root@jump_host /etc/puppetlabs/code/environments/production/manifests# vi official.pp
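The manifest body is missing from the original capture; based on the File_permissions class shown in the agent log below, official.pp likely looked something like this:

class file_permissions {
  # ensure beta.txt has the required content and 0777 permissions
  file { '/opt/finance/beta.txt':
    ensure  => file,
    content => 'Welcome to xFusionCorp Industries!',
    mode    => '0777',
  }
}

node 'stapp03.stratos.xfusioncorp.com' {
  include file_permissions
}

Step 2) SSH to App Server 3 and run the puppet agent to apply the catalog:

root@jump_host /etc/puppetlabs/code/environments/production/manifests# ssh banner@stapp03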
The authenticity of host 'stapp03 (172.16.238.12)' can't be established.
ECDSA key fingerprint is SHA256:E3zIVPZa3MQk87dWVRtHnBQBIjuhkJMs66WRzrrYlNU.
ECDSA key fingerprint is MD5:4c:d5:a8:ee:3a:42:ee:6e:19:a2:c6:ab:63:b4:5f:c4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stapp03,172.16.238.12' (ECDSA) to the list of known hosts.
banner@stapp03's password:
[banner@stapp03 ~]$ sudo su -
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for banner:
[root@stapp03 ~]# puppet agent -tv
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for stapp03.stratos.xfusioncorp.com
Info: Applying configuration version '1622067074'
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/content:
--- /opt/finance/beta.txt 2021-05-26 22:04:09.896000000 +0000
+++ /tmp/puppet-file20210526-194-sqzdqw 2021-05-26 22:11:14.572000000 +0000
@@ -0,0 +1 @@
+Welcome to xFusionCorp Industries!
\ No newline at end of file
Info: Computing checksum on file /opt/finance/beta.txt
Info: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]: Filebucketed /opt/finance/beta.txt to puppet with sum d41d8cd98f00b204e9800998ecf8427e
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/content: content changed '{md5}d41d8cd98f00b204e9800998ecf8427e' to '{md5}b899e8a90bbb38276f6a00012e1956fe'
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/mode: mode changed '0644' to '0777'
Notice: Applied catalog in 0.08 seconds
[root@stapp03 ~]#
Saturday, 22 May 2021
Docker Copy Operations
Task:
The Nautilus DevOps team has some confidential data present on App Server 1 in Stratos Datacenter. There is a container ubuntu_latest running on the same server. We received a request to copy some of the data from the docker host to the container. Below are more details about the task.
On App Server 1 in Stratos Datacenter copy an encrypted file /tmp/nautilus.txt.gpg from docker host to ubuntu_latest container (running on same server) in /tmp/ location. Please do not try to modify this file in any way.
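The post ends without the walkthrough; here is a sketch of the commands (docker cp copies files between the host filesystem and a container):

thor@jump_host ~$ ssh tony@stapp01
[tony@stapp01 ~]$ sudo docker cp /tmp/nautilus.txt.gpg ubuntu_latest:/tmp/
[tony@stapp01 ~]$ sudo docker exec ubuntu_latest ls /tmp/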
Friday, 21 May 2021
Deploy Nginx Web Server on Kubernetes Cluster
Some of the Nautilus team developers are developing a static website and they want to deploy it on Kubernetes cluster. They want it to be highly available and scalable. Therefore, based on the requirements, the DevOps team has decided to create deployment for it with multiple replicas. Below you can find more details about it:
Create a deployment using nginx image with latest tag only and remember to mention tag i.e nginx:latest and name it as nginx-deployment. App labels should be app: nginx-app and type: front-end. The container should be named as nginx-container; also make sure replica counts are 3.
Also create a service named nginx-service and type NodePort. The targetPort should be 80 and nodePort should be 30011.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.
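No solution is shown in the original post; here is a sketch of a manifest matching the stated requirements (the containerPort and the selector label choice are reasonable assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
    type: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30011

Apply it with kubectl create -f and verify with kubectl get deploy,svc,pods.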
Wednesday, 19 May 2021
Rolling Updates And Rolling Back Deployments in Kubernetes
There is a production deployment planned for next week. The Nautilus DevOps team wants to test the deployment update and rollback on Dev environment first so that they can identify the risks in advance. Below you can find more details about the plan they want to execute.
Create a namespace devops. Create a deployment called httpd-deploy under this new namespace, It should have one container called httpd, use httpd:2.4.27 image and 6 replicas. The deployment should use RollingUpdate strategy with maxSurge=1, and maxUnavailable=2.
Next upgrade the deployment to version httpd:2.4.43 using a rolling update.
Finally, once all pods are updated undo the update and roll back to the previous/original version.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.
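No walkthrough is included in the original post; here is a sketch under the stated requirements (the app: httpd-app label is an assumption):

$ kubectl create namespace devops

$ cat httpd-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deploy
  namespace: devops
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  selector:
    matchLabels:
      app: httpd-app
  template:
    metadata:
      labels:
        app: httpd-app
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.27

$ kubectl create -f httpd-deploy.yaml
$ kubectl set image deployment/httpd-deploy httpd=httpd:2.4.43 -n devops
$ kubectl rollout status deployment/httpd-deploy -n devops
$ kubectl rollout undo deployment/httpd-deploy -n devops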