
Thursday, 7 July 2022

Kubernetes Cluster design

Here are some pros and cons of having a single Kubernetes cluster versus multiple Kubernetes clusters.

1. Single Cluster


Pros: 

  1. If you have only one Kubernetes cluster, you need only one copy of all the resources required to run and manage it, including cluster-wide services such as load balancers, Ingress controllers, authentication, logging, and monitoring. With a single cluster, you can reuse these services for all your workloads instead of maintaining multiple copies of them across multiple clusters.
  2. As a consequence of the above point, fewer clusters are usually cheaper, because the resource overhead of running many clusters costs money.
  3. Administrating a single cluster is easier than administrating many clusters.

Cons: 

  1. If you have only one cluster and that cluster breaks, all your workloads are down.
  2. A single incident like this can produce major damage across all your workloads if you have only a single shared cluster.
  3. If multiple apps run in the same Kubernetes cluster, this means that these apps share the hardware, network, and operating system on the nodes of the cluster. This may be an issue from a security point of view. 
  4. If you use a single cluster for all your workloads, that cluster will probably be rather large (in terms of nodes and Pods). Larger clusters put a higher strain on the Kubernetes control plane, which requires careful planning to keep the cluster functional and efficient.

2. Many small single-use clusters


With this approach, you use a separate Kubernetes cluster for every deployment unit.

Pros: 

  1. If a cluster breaks, the damage is limited to only the workloads that run on that cluster. All the other workloads are unaffected.
  2. The workloads running in the individual clusters don't share any resources, such as CPU, memory, the operating system, network, or other services.
  3. This provides strong isolation between unrelated applications, which may be a big plus for the security of these applications.
  4. If every cluster runs only a small set of workloads, then fewer people need to have access to that cluster.

Cons: 

  1. Each Kubernetes cluster requires a set of management resources, such as master nodes, control plane components, and monitoring and logging solutions.
  2. If you have many small clusters, you have to sacrifice a higher percentage of the total resources for these management functions.

3. Cluster per environment


With this approach, you have a separate cluster for each application environment (for example dev, staging, and prod):

Pros: 

  1. In general, this approach isolates all the environments from each other but, in practice, this especially matters for the prod environment.
  2. The production versions of your app are now not affected by whatever happens in any of the other clusters and application environments.
  3. Nobody really needs to do work on the prod cluster, so you can make access to it very restrictive.
  4. You can go as far as not granting access to the prod cluster to any humans at all — the deployments to this cluster can be made through an automated CI/CD tool.
  5. This should greatly decrease the risk of human error in the prod cluster, which is where it matters most.

Cons:

  1. The main disadvantage of this approach is the lack of hardware and resource isolation between apps within each environment.
  2. Unrelated apps share cluster resources, such as the operating system kernel, CPU, memory, and several other services.
  3. If an app has special requirements, then these requirements must be satisfied in all clusters.



Tuesday, 26 April 2022

Infrastructure as Code (IaC)

 What is IaC ?

Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes.

With IaC, configuration files are created that contain your infrastructure specifications, which makes it easier to edit and distribute configurations. It also ensures that you provision the same environment every time. By codifying and documenting your configuration specifications, IaC aids configuration management and helps you to avoid undocumented, ad-hoc configuration changes.

Version control is an important part of IaC, and your configuration files should be under source control just like any other software source code file. Deploying your infrastructure as code also means that you can divide your infrastructure into modular components that can then be combined in different ways through automation.

Automating infrastructure provisioning with IaC means that developers don’t need to manually provision and manage servers, operating systems, storage, and other infrastructure components each time they develop or deploy an application. Codifying your infrastructure gives you a template to follow for provisioning. 
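
As an illustration of what such a configuration file can look like, here is a minimal sketch of an AWS CloudFormation template that declares a single S3 bucket (the file name and bucket name are made up for this example):

# iac-example.yaml -- hypothetical file name
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ExampleBucket:                          # logical name used only inside the template
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-iac-bucket   # must be globally unique; illustrative value

Because the template is just text, it can be code-reviewed, versioned, and applied repeatedly to produce the same result every time.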

Benefits of IaC ?

Provisioning infrastructure has historically been a time-consuming and costly manual process. Infrastructure management has now moved away from physical hardware in data centers (though that may still be a component for your organization) toward virtualization, containers, and cloud computing.

With cloud computing, the number of infrastructure components has grown, more applications are being released to production on a daily basis, and infrastructure needs to be able to be spun up, scaled, and taken down frequently. Without an IaC practice in place, it becomes increasingly difficult to manage the scale of today’s infrastructure.

IaC can help your organization manage IT infrastructure needs while also improving consistency and reducing errors and manual configuration.

Benefits:

  • Cost reduction
  • Increase in speed of deployments
  • Reduce errors 
  • Improve infrastructure consistency
  • Eliminate configuration drift

IaC tool examples

Server automation and configuration management tools can often be used to achieve IaC. There are also solutions specifically for IaC. 

These are a few popular choices:

  • Chef
  • Puppet
  • Red Hat Ansible Automation Platform
  • Saltstack
  • Terraform 
  • AWS CloudFormation

Monday, 25 April 2022

Terraform basics

What is Terraform ?

HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.

How does Terraform work?

Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Providers enable Terraform to work with virtually any platform or service with an accessible API.


HashiCorp and the Terraform community have already written more than 1700 providers to manage thousands of different types of resources and services, and this number continues to grow. You can find all publicly available providers on the Terraform Registry, including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more.

The core Terraform workflow consists of three stages:

  • Write: You define resources, which may be across multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.
  • Plan: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
  • Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.
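
As a rough sketch, the Plan and Apply stages map onto the standard Terraform CLI commands, run from a directory containing your configuration (.tf) files (the Write stage is simply editing those files):

$ terraform init     # one-time setup: downloads the providers referenced in the configuration
$ terraform plan     # shows what will be created, updated, or destroyed, without changing anything
$ terraform apply    # on approval, performs the planned operations in dependency order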

 

Why Terraform ?

Manage any infrastructure

Find providers for many of the platforms and services you already use in the Terraform Registry. You can also write your own. Terraform takes an immutable approach to infrastructure, reducing the complexity of upgrading or modifying your services and infrastructure.

Track your infrastructure

Terraform generates a plan and prompts you for your approval before modifying your infrastructure. It also keeps track of your real infrastructure in a state file, which acts as a source of truth for your environment. Terraform uses the state file to determine the changes to make to your infrastructure so that it will match your configuration.

Automate changes

Terraform configuration files are declarative, meaning that they describe the end state of your infrastructure. You do not need to write step-by-step instructions to create resources because Terraform handles the underlying logic. Terraform builds a resource graph to determine resource dependencies and creates or modifies non-dependent resources in parallel. This allows Terraform to provision resources efficiently.

Standardize configurations

Terraform supports reusable configuration components called modules that define configurable collections of infrastructure, saving time and encouraging best practices. You can use publicly available modules from the Terraform Registry, or write your own.

Collaborate

Since your configuration is written in a file, you can commit it to a Version Control System (VCS) and use Terraform Cloud to efficiently manage Terraform workflows across teams. Terraform Cloud runs Terraform in a consistent, reliable environment and provides secure access to shared state and secret data, role-based access controls, a private registry for sharing both modules and providers, and more.


Wednesday, 6 April 2022

Create a ReplicaSet in kubernetes

 Task:

The Nautilus DevOps team is going to deploy some applications on a Kubernetes cluster as they are planning to migrate some of their existing applications there. Recently, one of the team members was assigned a task to write a template as per the details mentioned below:

Create a ReplicaSet using nginx image with latest tag only and remember to mention tag i.e nginx:latest and name it as nginx-replicaset.

Labels app should be nginx_app, labels type should be front-end. The container should be named as nginx-container; also make sure replicas counts are 4.

Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Step 1) Create a YAML file with the given specifications. 

thor@jump_host ~$ cat rs.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx_app
    type: front-end
spec:
  replicas: 4
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      labels:
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest

Step 2) Deploy the replicaset yaml file

thor@jump_host ~$ kubectl create -f rs.yaml 
replicaset.apps/nginx-replicaset created

Step 3) Validate the replica set

thor@jump_host ~$ kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   4         4         4       40s

thor@jump_host ~$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-replicaset-mgkwf   1/1     Running   0          48s
nginx-replicaset-ttj9l   1/1     Running   0          48s
nginx-replicaset-vp59p   1/1     Running   0          48s
nginx-replicaset-zkbhl   1/1     Running   0          48s
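
If the required replica count ever changes, the ReplicaSet can also be scaled in place without editing the YAML; for example (the count of 6 here is purely illustrative):

thor@jump_host ~$ kubectl scale replicaset nginx-replicaset --replicas=6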


Wednesday, 16 March 2022

Deploy MySQL on Kubernetes

 Task:

A new MySQL server needs to be deployed on the Kubernetes cluster. The Nautilus DevOps team was working on gathering the requirements. Recently they finalized the requirements and shared them with the team members to start working on it. Below you can find the details:


1.) Create a PersistentVolume mysql-pv, its capacity should be 250Mi, set other parameters as per your preference.

2.) Create a PersistentVolumeClaim to request this PersistentVolume storage. Name it as mysql-pv-claim and request a 250Mi of storage. Set other parameters as per your preference.

3.) Create a deployment named mysql-deployment, use any mysql image as per your preference. Mount the PersistentVolume at mount path /var/lib/mysql.

4.) Create a NodePort type service named mysql and set nodePort to 30007.

5.) Create a secret named mysql-root-pass having a key pair value, where key is password and its value is YUIidhb667. Create another secret named mysql-user-pass having some key pair values, where first key is username and its value is kodekloud_tim, second key is password and its value is TmPcZjtRQx. Create one more secret named mysql-db-url, where key name is database and its value is kodekloud_db4.

6.) Define some Environment variables within the container:

a) name: MYSQL_ROOT_PASSWORD, should pick value from secretKeyRef name: mysql-root-pass and key: password

b) name: MYSQL_DATABASE, should pick value from secretKeyRef name: mysql-db-url and key: database

c) name: MYSQL_USER, should pick value from secretKeyRef name: mysql-user-pass and key: username

d) name: MYSQL_PASSWORD, should pick value from secretKeyRef name: mysql-user-pass and key: password

Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Solution : 

1.) Create a PersistentVolume mysql-pv, its capacity should be 250Mi, set other parameters as per your preference. 

thor@jump_host ~$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume            
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: standard      
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteOnce
  hostPath:                       
    path: "/var/lib/mysql"
  persistentVolumeReclaimPolicy: Retain  
thor@jump_host ~$ kubectl create -f pv.yaml
persistentvolume/mysql-pv created

2.) Create a PersistentVolumeClaim to request this PersistentVolume storage. Name it as mysql-pv-claim and request a 250Mi of storage. Set other parameters as per your preference.

thor@jump_host ~$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:                          
  name: mysql-pv-claim
  labels:
    app: mysql-app
spec:                              
  storageClassName: standard       
  accessModes:
    - ReadWriteOnce                
  resources:
    requests:
      storage: 250Mi
thor@jump_host ~$ kubectl create -f pvc.yaml
persistentvolumeclaim/mysql-pv-claim created

5.) Create a secret named mysql-root-pass having a key pair value, where key is password and its value is YUIidhb667. Create another secret named mysql-user-pass having some key pair values, where first key is username and its value is kodekloud_tim, second key is password and its value is TmPcZjtRQx. Create one more secret named mysql-db-url, where key name is database and its value is kodekloud_db4.

thor@jump_host ~$ kubectl create secret generic mysql-root-pass --from-literal=password=YUIidhb667
secret/mysql-root-pass created
thor@jump_host ~$ kubectl create secret generic mysql-user-pass --from-literal=username=kodekloud_tim --from-literal=password=TmPcZjtRQx
secret/mysql-user-pass created
thor@jump_host ~$ kubectl create secret generic mysql-db-url --from-literal=database=kodekloud_db4
secret/mysql-db-url created
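
Optionally, before wiring these secrets into the deployment, you can confirm they exist (kubectl describe shows the keys but not the values):

thor@jump_host ~$ kubectl get secrets
thor@jump_host ~$ kubectl describe secret mysql-user-pass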

4.) Create a NodePort type service named mysql and set nodePort to 30007.

thor@jump_host ~$ cat svc.yaml
apiVersion: v1                    
kind: Service                      
metadata:
  name: mysql         
  labels:             
    app: mysql-app
spec:
  type: NodePort
  ports:
    - targetPort: 3306
      port: 3306
      nodePort: 30007
  selector:    
    app: mysql-app
    tier: mysql
thor@jump_host ~$ kubectl create -f svc.yaml
service/mysql created
thor@jump_host ~$  

 

3.) Create a deployment named mysql-deployment, use any mysql image as per your preference. Mount the PersistentVolume at mount path /var/lib/mysql.

&&

6.) Define some Environment variables within the container

thor@jump_host ~$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment            
metadata:
  name: mysql-deployment       
  labels:                       
    app: mysql-app
spec:
  selector:
    matchLabels:
      app: mysql-app
      tier: mysql
  strategy:
    type: Recreate
  template:                    
    metadata:
      labels:                  
        app: mysql-app
        tier: mysql
    spec:                       
      containers:
      - image: mysql
        name: mysql
        env:                        
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:                
            secretKeyRef:
              name: mysql-root-pass
              key: password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: database
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        ports:
        - containerPort: 3306        
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:                       
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
thor@jump_host ~$ kubectl create -f deployment.yaml
deployment.apps/mysql-deployment created

Validate all the resources:

thor@jump_host ~$ kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/mysql-deployment-84f954fc46-hxg46   1/1     Running   0          2m21s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP          70m
service/mysql        NodePort    10.96.75.38   <none>        3306:30007/TCP   6m17s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql-deployment   1/1     1            1           2m21s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-deployment-84f954fc46   1         1         1       2m21s

 


 

Friday, 4 March 2022

Kubernetes Interview Questions

1) How do you automate Kubernetes deployments ?

The developer checks the application code into the code repository, and then we Dockerize that application code: we build a container image and push it to a container image repository.
From there, we deploy that container image to a Kubernetes cluster, where the image runs as a container. The first part is called build and the next part is called deploy. You build a Docker image using a Dockerfile, push the image to a container image repository, and then deploy the image to the Kubernetes cluster, typically using a Jenkins pipeline.
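
As a minimal sketch of what such a pipeline runs under the hood (the registry, image name, tag, and deployment/container names below are hypothetical):

# Build stage
docker build -t registry.example.com/myapp:1.0 .   # build the image from the Dockerfile
docker push registry.example.com/myapp:1.0         # push it to the container image repository

# Deploy stage
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.0   # roll the new image out to the cluster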

2) How do you secure your Kubernetes app ?


When it comes to security, there are two aspects of Kubernetes security. One is the security of your application running on the cluster; the other is DevSecOps, which is basically DevOps plus security, i.e. securing the container DevOps lifecycle itself. For application security, you secure your Pods, namespaces, and nodes using role-based access control (RBAC), IRSA (IAM Roles for Service Accounts), and so on.
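
As a minimal RBAC sketch (the namespace, role name, and user below are hypothetical; IRSA is cloud-specific and not shown), a Role plus RoleBinding that only allows reading Pods in one namespace looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # hypothetical role name
  namespace: dev            # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io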

3) How do you cost/performance optimize Kubernetes app ?

When it comes to Kubernetes cost, the first component is the control plane cost, which is fixed. Most of your Kubernetes cost comes from your worker nodes: how many there are and what instance types they use. How does the number of worker nodes get chosen? When you define a container image in a Pod spec, you define the resource specification, i.e. how much CPU and memory you want that container to use. This is where a lot of the cost optimization comes in, because most of the time you will see unused CPU and memory allocation, just like a compute instance where you allocate more CPU and RAM (a higher tier) than you actually use.
So the way to optimize cost and performance is to detect the CPU and memory wastage at the Pod level. In Kubernetes, you utilize the metrics server: once installed, it can tell you how much CPU and memory is being utilized on average. Doing this manually from raw data would be a difficult task, so you should utilize tools that gather the metrics-server data and give you actionable cost and performance insights. Some of these tools are CloudWatch Container Insights (which works with EKS, gathers the data, and shows you, for example, the top 10 memory-intensive Pods so you can dive deep and optimize the CPU and memory specification), Kubecost (which shows you the dollar amount you are wasting and how much you would save by reducing a given allocation), CloudHealth (a very popular third-party tool), and the Kubernetes resource report. These are a few popular ones, but there are many more. The important thing is to identify the biggest cost component, the unused CPU and memory mentioned above, and then work out how to detect it using these tools.
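
Once the metrics server is installed, the raw usage data it exposes can be queried directly, for example:

kubectl top nodes                                    # CPU/memory usage per worker node
kubectl top pods --all-namespaces --sort-by=memory   # most memory-hungry Pods first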

4) Tell me about a challenge you faced in Kubernetes ? 

In Kubernetes, each Pod uses an IP address from your VPC. So as your application grows, with a lot of concurrent Pods running at the same time, there is a chance that you will run out of IP addresses in your VPC. That's one of the challenges. You can add additional subnets afterwards, even after the cluster is defined.

5) How do you scale kubernetes ?

There are two main ways to scale an app: the Horizontal Pod Autoscaler (HPA), which increases the number of Pods, and the Cluster Autoscaler, which increases the number of worker nodes. Say you have two worker nodes and your application is running at around 50% node utilization. As your application traffic increases, you spawn more and more Pods on those two worker nodes, and at some point they reach full capacity. To scale further, the Cluster Autoscaler creates additional EC2 worker nodes. This process takes a little time, because the new instance has to come up from its AMI, so there is some latency. If your application is super critical and you cannot afford that kind of latency, you can use cluster over-provisioning: you bring the extra worker nodes up in advance and run placeholder "pause" Pods on them. When the first two worker nodes reach 100% utilization and traffic increases further, the extra EC2 instances are already up and running, so your application does not need to spend any more time spinning up additional EC2s; the pause Pods are simply replaced with your actual application Pods.
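
A minimal Horizontal Pod Autoscaler sketch, assuming a hypothetical Deployment named myapp, that keeps average CPU utilization around 70% between 2 and 10 replicas:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa             # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp               # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70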

6) How do you upgrade a Kubernetes cluster on a cloud provider ?
 


Step 1) Upgrade the master/control plane nodes first. The control plane can usually be upgraded without any downtime for workloads: the services already running on the worker nodes keep running, but no new scheduling happens in the Kubernetes cluster while the control plane components are being upgraded.

Step 2) Upgrade the worker nodes. There are two types of upgrades available: an in-place upgrade and an out-of-place upgrade.

In-Place Upgrade: 

You can upgrade the version of Kubernetes running on worker nodes in a node pool by specifying a more recent Kubernetes version for the existing node pool. For each worker node, you first drain it to prevent new pods starting and to delete existing pods. You then terminate the worker node so that a new worker node is started, running the more recent Kubernetes version you specified. When new worker nodes are started in the existing node pool, they run the more recent Kubernetes version you specified.

Out of Place Upgrade:

You can 'upgrade' the version of Kubernetes running on worker nodes in a node pool by replacing the original node pool with a new node pool that has new worker nodes running the appropriate Kubernetes version. Having drained existing worker nodes in the original node pool to prevent new pods starting and to delete existing pods, you can then delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent Kubernetes version you specified.

The preferred way is the out-of-place upgrade. 

Steps for upgrading worker nodes using the out-of-place upgrade method:

On the Cluster page, display the Node Pools tab, and then click Add Node Pool to create a new node pool and specify the required Kubernetes version for its worker nodes.

For the first worker node in the original node pool, prevent new pods from starting and delete existing pods by entering: kubectl drain <node name>

Repeat the previous step for each remaining worker node in the node pool, until all the worker nodes have been drained from the original node pool.

On the Cluster page, display the Node Pools tab, and then select Delete Node Pool from the Actions menu beside the original node pool.

If you have a load balancer configured for your worker nodes, then you might have to switch it to point to the new IP addresses configured on the new worker nodes.
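
The drain step mentioned above is typically performed with kubectl; a sketch, with the node name left as a placeholder (drain itself also marks the node unschedulable, so the cordon step is optional):

kubectl cordon <node name>                                             # mark the node unschedulable
kubectl drain <node name> --ignore-daemonsets --delete-emptydir-data   # evict the pods running on it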

 
7) Can you explain Kubernetes architecture ?

There are mainly two different types of nodes in a Kubernetes cluster: master nodes and worker nodes. On the master we have the etcd cluster, controllers, the scheduler, and the kube-apiserver; on the worker node we have the container engine, kubelet, and kube-proxy.
The etcd cluster is the database, and it stores information in key-value format. The controller(s) for each resource are responsible for making the current state come closer to the desired state. The scheduler schedules the work to different worker nodes; it has the resource usage information for each worker node. The kube-apiserver is the primary management component of Kubernetes and is responsible for orchestrating all operations within the cluster. It exposes the Kubernetes API, which external users use to perform management operations on the cluster, and which the various controllers use to monitor the state of the cluster and make the necessary changes. We use Docker as the container engine for Kubernetes to run the containers. The kubelet is an agent that runs on each worker node and communicates with the master node. kube-proxy is a proxy service that runs on each node and helps make services available to external hosts.

Friday, 11 February 2022

Kubernetes Cluster Architecture and Components

A Kubernetes cluster consists of a set of control plane (master) and worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.


Control Plane Components

The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).
Control plane components can be run on any machine in the cluster. However, for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine. 

kube-apiserver
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.
The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances. 

etcd
Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.

kube-scheduler
Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines. 

kube-controller-manager
Control plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Some types of these controllers are:
    •    Node controller: Responsible for noticing and responding when nodes go down.
    •    Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
    •    Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
    •    Service Account & Token controllers: Create default accounts and API access tokens for new namespaces. 

cloud-controller-manager
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to help tolerate failures.
The following controllers can have cloud provider dependencies:
    •    Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
    •    Route controller: For setting up routes in the underlying cloud infrastructure
    •    Service controller: For creating, updating and deleting cloud provider load balancers

Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment. 

kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes. 

kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself. 

Container runtime
The container runtime is the software that is responsible for running containers.
Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).

Pods
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
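
A minimal Pod manifest, just for illustration (the name and image are arbitrary), looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod           # arbitrary example name
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80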

Basics of Kubernetes

 What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

Kubernetes provides you with:

Service discovery and load balancing Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.

Storage orchestration Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.

Automated rollouts and rollbacks You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.

Automatic bin packing You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.

Self-healing Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

Secret and configuration management Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
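
To make two of the features above concrete, the following hypothetical Pod spec declares the resource requests that automatic bin packing uses when placing the container, and the health checks that self-healing relies on (names, paths, and values are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                  # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:latest
    resources:                      # used by the scheduler for bin packing
      requests:
        cpu: 100m
        memory: 128Mi
    livenessProbe:                  # user-defined health check; failing containers are restarted
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:                 # traffic is only sent to the Pod once this succeeds
      httpGet:
        path: /
        port: 80
      periodSeconds: 10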
 

Create a Docker Network

 Task:

The Nautilus DevOps team needs to set up several docker environments for different applications. One of the team members has been assigned a ticket where he has been asked to create some docker networks to be used later. Complete the task based on the following ticket description:

a. Create a docker network named as news on App Server 1 in Stratos DC.
b. Configure it to use macvlan drivers.
c. Set it to use subnet 172.28.0.0/24 and iprange 172.28.0.1/24.

thor@jump_host ~$ ssh tony@stapp01
[tony@stapp01 ~]$ sudo su -

[root@stapp01 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@stapp01 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
1b1a8c1ac8b5   bridge    bridge    local
d8defea600a4   host      host      local
f2508f2f61ea   none      null      local

[root@stapp01 ~]# docker network create -d macvlan --subnet=172.28.0.0/24 --ip-range=172.28.0.1/24 news
f98779e735fe2dc56c1e7a93a9e615f78bc5324d283ff1478f64b0e0400a7389

[root@stapp01 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
1b1a8c1ac8b5   bridge    bridge    local
d8defea600a4   host      host      local
f98779e735fe   news      macvlan   local
f2508f2f61ea   none      null      local
[root@stapp01 ~]# docker network inspect news
[
    {
        "Name": "news",
        "Id": "f98779e735fe2dc56c1e7a93a9e615f78bc5324d283ff1478f64b0e0400a7389",
        "Created": "2022-02-11T22:14:03.425043109Z",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.28.0.0/24",
                    "IPRange": "172.28.0.1/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
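
To use the new network, a container can be attached to it at run time; for example (the container name and image are chosen only for illustration):

[root@stapp01 ~]# docker run -d --name news-app --network news nginx:latest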