
Wednesday 16 March 2022

Deploy MySQL on Kubernetes

Task:

A new MySQL server needs to be deployed on the Kubernetes cluster. The Nautilus DevOps team had been gathering the requirements; they have now finalized them and shared them with the team members to start working on it. Below you can find the details:


1.) Create a PersistentVolume named mysql-pv with a capacity of 250Mi; set other parameters as per your preference.

2.) Create a PersistentVolumeClaim to request this PersistentVolume's storage. Name it mysql-pv-claim and request 250Mi of storage. Set other parameters as per your preference.

3.) Create a deployment named mysql-deployment using any mysql image of your preference. Mount the PersistentVolume at mount path /var/lib/mysql.

4.) Create a NodePort type service named mysql and set nodePort to 30007.

5.) Create a secret named mysql-root-pass with a single key/value pair, where the key is password and its value is YUIidhb667. Create another secret named mysql-user-pass with two key/value pairs: the first key is username with value kodekloud_tim, and the second key is password with value TmPcZjtRQx. Create one more secret named mysql-db-url, where the key is database and the value is kodekloud_db4.

6.) Define these environment variables within the container:

a) name: MYSQL_ROOT_PASSWORD, should pick value from secretKeyRef name: mysql-root-pass and key: password

b) name: MYSQL_DATABASE, should pick value from secretKeyRef name: mysql-db-url and key: database

c) name: MYSQL_USER, should pick value from secretKeyRef name: mysql-user-pass and key: username

d) name: MYSQL_PASSWORD, should pick value from secretKeyRef name: mysql-user-pass and key: password

Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Solution:

1.) Create a PersistentVolume named mysql-pv with a capacity of 250Mi; set other parameters as per your preference.

thor@jump_host ~$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume            
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: standard      
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteOnce
  hostPath:                       
    path: "/var/lib/mysql"
  persistentVolumeReclaimPolicy: Retain  
thor@jump_host ~$ kubectl create -f pv.yaml
persistentvolume/mysql-pv created
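
You can verify that the PersistentVolume was created and is still Available before it is claimed:

thor@jump_host ~$ kubectl get pv mysql-pv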

2.) Create a PersistentVolumeClaim to request this PersistentVolume's storage. Name it mysql-pv-claim and request 250Mi of storage. Set other parameters as per your preference.

thor@jump_host ~$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:                          
  name: mysql-pv-claim
  labels:
    app: mysql-app
spec:                              
  storageClassName: standard       
  accessModes:
    - ReadWriteOnce                
  resources:
    requests:
      storage: 250Mi
thor@jump_host ~$ kubectl create -f pvc.yaml
persistentvolumeclaim/mysql-pv-claim created
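
You can confirm that the claim has bound to mysql-pv; its STATUS column should read Bound:

thor@jump_host ~$ kubectl get pvc mysql-pv-claim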

5.) Create a secret named mysql-root-pass with a single key/value pair, where the key is password and its value is YUIidhb667. Create another secret named mysql-user-pass with two key/value pairs: the first key is username with value kodekloud_tim, and the second key is password with value TmPcZjtRQx. Create one more secret named mysql-db-url, where the key is database and the value is kodekloud_db4.

thor@jump_host ~$ kubectl create secret generic mysql-root-pass --from-literal=password=YUIidhb667
secret/mysql-root-pass created
thor@jump_host ~$ kubectl create secret generic mysql-user-pass --from-literal=username=kodekloud_tim --from-literal=password=TmPcZjtRQx
secret/mysql-user-pass created
thor@jump_host ~$ kubectl create secret generic mysql-db-url --from-literal=database=kodekloud_db4
secret/mysql-db-url created
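
You can list the secrets and inspect one of them; kubectl describe shows the key names and value sizes without revealing the values:

thor@jump_host ~$ kubectl get secrets
thor@jump_host ~$ kubectl describe secret mysql-user-pass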

4.) Create a NodePort type service named mysql and set nodePort to 30007.

thor@jump_host ~$ cat svc.yaml
apiVersion: v1                    
kind: Service                      
metadata:
  name: mysql         
  labels:             
    app: mysql-app
spec:
  type: NodePort
  ports:
    - targetPort: 3306
      port: 3306
      nodePort: 30007
  selector:    
    app: mysql-app
    tier: mysql
thor@jump_host ~$ kubectl create -f svc.yaml
service/mysql created
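
You can confirm the service type and the node port; the PORT(S) column should show 3306:30007/TCP:

thor@jump_host ~$ kubectl get svc mysql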

3.) Create a deployment named mysql-deployment using any mysql image of your preference. Mount the PersistentVolume at mount path /var/lib/mysql.

and

6.) Define the required environment variables within the container:

thor@jump_host ~$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment            
metadata:
  name: mysql-deployment       
  labels:                       
    app: mysql-app
spec:
  selector:
    matchLabels:
      app: mysql-app
      tier: mysql
  strategy:
    type: Recreate
  template:                    
    metadata:
      labels:                  
        app: mysql-app
        tier: mysql
    spec:                       
      containers:
      - image: mysql
        name: mysql
        env:                        
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:                
            secretKeyRef:
              name: mysql-root-pass
              key: password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: database
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        ports:
        - containerPort: 3306        
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:                       
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
thor@jump_host ~$ kubectl create -f deployment.yaml
deployment.apps/mysql-deployment created

Validate all the resources:

thor@jump_host ~$ kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/mysql-deployment-84f954fc46-hxg46   1/1     Running   0          2m21s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP          70m
service/mysql        NodePort    10.96.75.38   <none>        3306:30007/TCP   6m17s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql-deployment   1/1     1            1           2m21s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-deployment-84f954fc46   1         1         1       2m21s
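
As a final sanity check, you can connect to the MySQL server inside the pod with the credentials stored in the secrets (the pod name is taken from the output above):

thor@jump_host ~$ kubectl exec -it mysql-deployment-84f954fc46-hxg46 -- mysql -u kodekloud_tim -pTmPcZjtRQx kodekloud_db4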

Friday 4 March 2022

Kubernetes Interview Questions

1) How do you automate Kubernetes deployments?

The developer checks the application code into the code repository; we then have to Dockerize that code, build a container image, and push the image to a container image registry. From there, we deploy the image to a Kubernetes cluster, where it runs as a container. The first part is called the build stage and the next part is called the deploy stage: you build a Docker image using a Dockerfile, save the image to a container image registry, and then deploy it to the Kubernetes cluster, typically from a Jenkins pipeline.
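
As a rough sketch of those two stages (the image name, registry, and deployment/container names below are made up for illustration):

# Build stage: build the image from the Dockerfile and push it to the registry
docker build -t registry.example.com/myapp:v42 .
docker push registry.example.com/myapp:v42

# Deploy stage: point the running Deployment at the new image;
# Kubernetes rolls the pods over to the new version
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v42
kubectl rollout status deployment/myapp

In a Jenkins pipeline, these commands would typically live in separate build and deploy stages.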

2) How do you secure your Kubernetes app?

There are two aspects to Kubernetes security. One is the security of the application running on the cluster; the other is DevSecOps, which is DevOps plus security across the container DevOps lifecycle. For application security, you secure your pods, namespaces, and nodes using role-based access control (RBAC), IRSA, and so on.
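
As a minimal RBAC sketch (the namespace, user, and role names are illustrative), here is a Role plus RoleBinding that grants one user read-only access to pods in a single namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io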

3) How do you cost/performance optimize a Kubernetes app?

When it comes to Kubernetes cost, the first component is the control-plane cost, which is fixed. Most of your cost comes from the worker nodes: how many there are and what instance types they use. How does the number of worker nodes get chosen? When you define a container in a pod spec, you declare a resource specification: how much CPU and how much memory you want that container to use. This is where most cost optimization happens, because you will very often find unused CPU and memory allocation, just as with a compute instance where you select a high tier and allocate more CPU and RAM than you actually use.

So the way to optimize cost and performance is to detect CPU and memory wastage at the pod level. In Kubernetes you use the metrics server for this: once installed, it reports average CPU and memory utilization over time. Doing this analysis manually across all that data is difficult, so you should use tools that gather the metrics-server data and turn it into actionable cost and performance insights. CloudWatch Container Insights, for example, works with EKS; it collects the data and shows you the top ten memory-intensive pods so you can dive deep and tune the CPU/memory specifications. Among third-party tools, Kubecost shows you the dollar amount you are wasting and tells you how much you would save by reducing a given allocation; CloudHealth and the Kubernetes Resource Report are also popular, and there are many more. The important thing is to identify the biggest cost component, the unused CPU and memory mentioned above, and then work out how to detect it using these tools.
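
For example, right-sizing starts with the requests/limits declared in the container spec and the actual usage reported by the metrics server (the numbers below are illustrative):

resources:
  requests:        # what the scheduler reserves on the node
    cpu: 250m
    memory: 256Mi
  limits:          # the hard cap for the container
    cpu: 500m
    memory: 512Mi

Once the metrics server is installed, you can compare these allocations against real usage with kubectl top pods and kubectl top nodes.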

4) Tell me about a challenge you faced in Kubernetes?

In Kubernetes, each pod takes an IP address from your VPC. As your application grows, with a lot of concurrent pods running at the same time, you can run out of addresses in the VPC. That is one such challenge. The mitigation is that you can add additional subnets afterwards, even after the cluster has been created.

5) How do you scale Kubernetes?

There are two main ways to scale: the Horizontal Pod Autoscaler (HPA), which increases the number of pods, and the Cluster Autoscaler, which increases the number of nodes. Say you have two worker nodes running your application at around 50% utilization. As application traffic increases, more and more pods are spawned on those two nodes, and at some point they reach full capacity. To scale further, the Cluster Autoscaler creates additional EC2 worker nodes. This process takes some time: the instance has to come up and pull the AMI, so there is some latency. What if your application is super critical and cannot afford that kind of latency? With cluster over-provisioning, you bring extra worker nodes up in advance. While the first two nodes run your real application pods, the extra nodes run placeholder workloads called pause pods. As the first two nodes approach 100% utilization and traffic keeps increasing, the additional EC2 instances are already up and running, so your application does not have to wait for new instances to spin up; the pause pods are simply evicted and replaced with your actual application pods.
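
A minimal HorizontalPodAutoscaler sketch (the deployment name and thresholds are illustrative) that scales the pod count on CPU utilization reported by the metrics server:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70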

6) How do you upgrade a Kubernetes cluster on a cloud provider?

Step 1) Upgrade the master/control-plane nodes first. We can upgrade the master/control-plane nodes without downtime for the workloads: the upgrade does not disrupt services already running on the worker nodes, although no new scheduling happens in the cluster while the control plane is unavailable.

Step 2) There are two types of upgrades available for worker nodes: an in-place upgrade and an out-of-place upgrade.

In-Place Upgrade: 

You can upgrade the version of Kubernetes running on the worker nodes in a node pool by specifying a more recent Kubernetes version for the existing node pool. For each worker node, you first drain it to prevent new pods from starting and to delete existing pods, and then terminate it so that a replacement worker node is started in the existing node pool running the more recent Kubernetes version you specified.

Out of Place Upgrade:

You can 'upgrade' the version of Kubernetes running on the worker nodes in a node pool by replacing the original node pool with a new node pool whose worker nodes run the appropriate Kubernetes version. Having drained the existing worker nodes in the original node pool to prevent new pods from starting and to delete existing pods, you can then delete the original node pool. The new worker nodes started in the new node pool run the more recent Kubernetes version you specified.

The preferred approach is the out-of-place upgrade.

Steps for upgrading worker nodes using the out-of-place upgrade method:

On the Cluster page, display the Node Pools tab, and then click Add Node Pool to create a new node pool, specifying the required Kubernetes version for its worker nodes.

For the first worker node in the original node pool, prevent new pods from starting and delete existing pods by draining it: kubectl drain <node name> (see the command sketch after this list).

Repeat the previous step for each remaining worker node in the node pool, until all the worker nodes have been drained from the original node pool.

On the Cluster page, display the Node Pools tab, and then select Delete Node Pool from the Actions menu beside the original node pool.

If you have a load balancer configured for your worker nodes, you might have to switch it to point to the IP addresses configured on the new worker nodes.
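
The drain step above maps to a kubectl command like the following (the node name is illustrative); --ignore-daemonsets is usually needed because DaemonSet pods cannot be evicted:

# drain cordons the node (no new pods schedule there) and evicts the existing pods
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data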

 
7) Can you explain Kubernetes architecture?

There are mainly two types of nodes in a Kubernetes cluster: master nodes and worker nodes. On the master we have the etcd cluster, the controllers, the scheduler, and the kube-apiserver; on the worker nodes we have the container engine, the kubelet, and kube-proxy.

The etcd cluster is the database; it stores cluster state in key/value format. The controllers for each resource are responsible for moving the current state closer to the desired state. The scheduler assigns work to the different worker nodes and has the resource-usage information for each of them. The kube-apiserver is the primary management component of Kubernetes: it orchestrates all operations within the cluster and exposes the Kubernetes API, which external users use to perform management operations and which the various controllers use to monitor the state of the cluster and make changes as required. Docker is commonly used as the container engine for Kubernetes to run the containers. The kubelet is an agent that runs on each worker node and communicates with the master node. Kube-proxy is a proxy service that runs on each node and helps make services reachable from outside the node.
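
On many clusters (kubeadm-based ones, for example) you can see most of these control-plane components running as pods, which ties the architecture to something concrete:

kubectl get pods -n kube-system

This typically lists etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and the kube-proxy pods that run on every node.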