
Saturday, 29 May 2021

Git Manage Remotes

 Task:

The xFusionCorp development team added updates to the project that is maintained under the /opt/news.git repo and cloned under /usr/src/kodekloudrepos/news. Recently, some changes were made on the Git server hosted on the Storage server in Stratos DC. The DevOps team added some new Git remotes, so we need to update the remote on the /usr/src/kodekloudrepos/news repository as per the details mentioned below:

a. In the /usr/src/kodekloudrepos/news repo, add a new remote dev_news and point it to the /opt/xfusioncorp_news.git repository.
b. There is a file /tmp/index.html on the same server; copy this file to the repo and add/commit it to the master branch.
c. Finally, push the master branch to this new remote.

Step 1) Log in to the Storage Server and switch to the root user

thor@jump_host /opt$ ssh natasha@ststor01
The authenticity of host 'ststor01 (172.16.238.15)' can't be established.
ECDSA key fingerprint is SHA256:vJAsZuUSoXH3n5luk4cHC4hGeA8s8cFXoy5mo2CkOCY.
ECDSA key fingerprint is MD5:33:5e:d1:86:a5:91:28:d8:fd:f6:7d:6b:83:7a:82:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ststor01,172.16.238.15' (ECDSA) to the list of known hosts.
natasha@ststor01's password: 
[natasha@ststor01 ~]$ sudo su -

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for natasha: 

Step 2) Validate Repository location

[root@ststor01 ~]# cd /usr/src/kodekloudrepos/news
[root@ststor01 news]# ls -rlt
total 4
-rw-r--r-- 1 root root 34 May 30 03:23 info.txt
[root@ststor01 news]# cd /opt/
[root@ststor01 opt]# ls -rlt
total 8
drwxr-xr-x 7 root root 4096 May 30 03:23 news.git
drwxr-xr-x 7 root root 4096 May 30 03:23 xfusioncorp_news.git


Step 3)  In /usr/src/kodekloudrepos/news repo add a new remote dev_news and point it to /opt/xfusioncorp_news.git repository.

[root@ststor01 news]# git remote add dev_news /opt/xfusioncorp_news.git

Step 4) There is a file /tmp/index.html on same server; copy this file to the repo and add/commit to master branch.

[root@ststor01 news]# cp /tmp/index.html .
[root@ststor01 news]# git add index.html
[root@ststor01 news]# git commit -m "add index.html"
[master ca81dc7] add index.html
 1 file changed, 10 insertions(+)
 create mode 100644 index.html

Step 5) Finally, push the master branch to the new remote

[root@ststor01 news]# git push -u dev_news master
Counting objects: 6, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 583 bytes | 0 bytes/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To /opt/xfusioncorp_news.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from dev_news.
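The whole remote/commit/push flow above can be rehearsed end-to-end in a throwaway sandbox. Every path below is a temp stand-in for the lab's /opt repos, and the index.html content is invented:

```shell
# Throwaway sandbox rehearsing the same flow; paths and content are stand-ins.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/xfusioncorp_news.git"        # plays /opt/xfusioncorp_news.git
git init -q "$tmp/news"                               # plays the working clone
cd "$tmp/news"
echo '<h1>news</h1>' > index.html                     # plays /tmp/index.html
git add index.html
git -c user.email=demo@example.com -c user.name=demo commit -q -m "add index.html"
git branch -M master                                  # default branch name varies by git version
git remote add dev_news "$tmp/xfusioncorp_news.git"
git push -q -u dev_news master
# local and remote master now point at the same commit
test "$(git rev-parse master)" = "$(git ls-remote dev_news master | cut -f1)" && echo "push verified"
```

The final comparison is also a handy post-push check in the lab itself: `git ls-remote dev_news master` should report the same hash as `git rev-parse master`.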

Wednesday, 26 May 2021

Puppet Setup File Permissions

Task:

The Nautilus DevOps team has put data on all app servers in Stratos DC. The jump host is configured as the Puppet master server, and all app servers have already been configured as Puppet agent nodes. The team needs to update the content of some of the existing files as well as update their permissions, etc. Please find below more details about the task:

Create a Puppet programming file official.pp under the /etc/puppetlabs/code/environments/production/manifests directory on the master node, i.e. the Jump Server. Using the Puppet file resource, perform the below-mentioned tasks.
File beta.txt already exists under /opt/finance directory on App Server 3.
Add content Welcome to xFusionCorp Industries! in file beta.txt on App Server 3.
Set permissions 0777 for file beta.txt on App Server 3.
Note: Please perform this task using official.pp only, do not create any separate inventory file.

Step 1) Create a puppet class

root@jump_host /# cd /etc/puppetlabs/code/environments/production/manifests

root@jump_host /etc/puppetlabs/code/environments/production/manifests# vi official.pp

class file_permissions {

  # Update beta.txt under /opt/finance
  file { '/opt/finance/beta.txt':
    ensure  => 'present',
    content => 'Welcome to xFusionCorp Industries!',
    mode    => '0777',
  }

}

node 'stapp03.stratos.xfusioncorp.com' {
  include file_permissions
}

Step 2) Validate puppet class

root@jump_host /etc/puppetlabs/code/environments/production/manifests# puppet parser validate official.pp 

Step 3) Log in to stapp03 and switch to root

root@jump_host /etc/puppetlabs/code/environments/production/manifests# ssh banner@stapp03
The authenticity of host 'stapp03 (172.16.238.12)' can't be established.
ECDSA key fingerprint is SHA256:E3zIVPZa3MQk87dWVRtHnBQBIjuhkJMs66WRzrrYlNU.
ECDSA key fingerprint is MD5:4c:d5:a8:ee:3a:42:ee:6e:19:a2:c6:ab:63:b4:5f:c4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stapp03,172.16.238.12' (ECDSA) to the list of known hosts.
banner@stapp03's password: 

[banner@stapp03 ~]$ sudo su -
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.
[sudo] password for banner: 

Step 4) Run puppet agent -tv on app server 3

[root@stapp03 ~]# puppet agent -tv
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for stapp03.stratos.xfusioncorp.com
Info: Applying configuration version '1622067074'
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/content: 
--- /opt/finance/beta.txt       2021-05-26 22:04:09.896000000 +0000
+++ /tmp/puppet-file20210526-194-sqzdqw 2021-05-26 22:11:14.572000000 +0000
@@ -0,0 +1 @@
+Welcome to xFusionCorp Industries!
\ No newline at end of file
Info: Computing checksum on file /opt/finance/beta.txt
Info: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]: Filebucketed /opt/finance/beta.txt to puppet with sum d41d8cd98f00b204e9800998ecf8427e
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/content: content changed '{md5}d41d8cd98f00b204e9800998ecf8427e' to '{md5}b899e8a90bbb38276f6a00012e1956fe'
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/mode: mode changed '0644' to '0777'
Notice: Applied catalog in 0.08 seconds
[root@stapp03 ~]# 
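The same attributes can be sanity-checked on a throwaway file: the diff in the agent output shows the content has no trailing newline, which is why the file ends up exactly 34 bytes. A sketch using a temp path, not the lab's /opt/finance:

```shell
# Throwaway file reproducing the enforced attributes: the exact content string
# (printf adds no trailing newline, hence 34 bytes) and mode 0777.
tmp=$(mktemp -d)
printf 'Welcome to xFusionCorp Industries!' > "$tmp/beta.txt"
chmod 0777 "$tmp/beta.txt"
stat -c '%a %s' "$tmp/beta.txt"   # prints: 777 34
```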

Saturday, 22 May 2021

Docker Copy Operations

Task:

The Nautilus DevOps team has some confidential data present on App Server 1 in Stratos Datacenter. There is a container ubuntu_latest running on the same server. We received a request to copy some of the data from the Docker host to the container. Below are more details about the task.

On App Server 1 in Stratos Datacenter copy an encrypted file /tmp/nautilus.txt.gpg from docker host to ubuntu_latest container (running on same server) in /tmp/ location. Please do not try to modify this file in any way.

Step 1) Log in to App Server 1 and switch to the root user

[tony@stapp01 ~]$ sudo su -

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for tony: 

Step 2) Verify that the container ubuntu_latest is running

[root@stapp01 ~]# docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
bcdf6ee77b4e        ubuntu              "/bin/bash"         5 minutes ago       Up 5 minutes                            ubuntu_latest

Step 3) Verify that the file /tmp/nautilus.txt.gpg is present at the location

[root@stapp01 ~]# ls /tmp/nautilus.txt.gpg 
/tmp/nautilus.txt.gpg

Step 4) Copy the file from docker host to container ubuntu_latest

[root@stapp01 ~]# docker cp /tmp/nautilus.txt.gpg bcdf6ee77b4e:/tmp/

Step 5) Validate that the file is present inside the container.

[root@stapp01 ~]# docker container attach bcdf6ee77b4e
root@bcdf6ee77b4e:/# ls -l /tmp
total 4
-rw-r--r-- 1 root root 74 May 23 02:59 nautilus.txt.gpg
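Since the task insists the file must not be modified, it is worth comparing checksums on both sides after the copy. The pattern is sketched below with a plain `cp` standing in for `docker cp`; in the lab, the second checksum would come from `docker exec ubuntu_latest md5sum /tmp/nautilus.txt.gpg` (assuming `md5sum` exists in the image). File name and content here are invented:

```shell
# Checksum-compare pattern, simulated with cp standing in for docker cp.
tmp=$(mktemp -d)
echo 'ciphertext' > "$tmp/nautilus.txt.gpg"
cp "$tmp/nautilus.txt.gpg" "$tmp/in-container.gpg"
a=$(md5sum "$tmp/nautilus.txt.gpg" | cut -d' ' -f1)
b=$(md5sum "$tmp/in-container.gpg" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "copy intact"   # prints: copy intact
```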

Friday, 21 May 2021

Deploy Nginx Web Server on Kubernetes Cluster

Task:
Some of the Nautilus team developers are developing a static website, and they want to deploy it on a Kubernetes cluster. They want it to be highly available and scalable. Therefore, based on the requirements, the DevOps team has decided to create a deployment with multiple replicas. Below you can find more details about it:

Create a deployment using the nginx image with the latest tag only (remember to mention the tag, i.e. nginx:latest) and name it nginx-deployment. App labels should be app: nginx-app and type: front-end. The container should be named nginx-container; also make sure the replica count is 3.
Also create a service named nginx-service of type NodePort. The targetPort should be 80 and the nodePort should be 30011.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Step 1) Create a deploy.yaml file using the --dry-run=client flag and modify it later as per your requirements.

thor@jump_host ~$ kubectl create deploy nginx-deployment --image=nginx:latest --dry-run=client -o yaml > deploy.yaml

thor@jump_host ~$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
status: {}

thor@jump_host ~$ vi deploy.yaml 

thor@jump_host ~$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-app
    type: front-end
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx:latest
        name: nginx-container

Step 2) Apply the deployment changes

thor@jump_host ~$ kubectl apply -f deploy.yaml 

deployment.apps/nginx-deployment created

Step 3) Create a service.yaml file using the --dry-run=client flag and modify it later as per your requirements.

thor@jump_host ~$ kubectl expose deploy nginx-deployment --name=nginx-service --type=NodePort --port=30011 --target-port=80 --dry-run=client -o yaml > service.yaml

thor@jump_host ~$ cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-app
    type: front-end
  name: nginx-service
spec:
  ports:
  - port: 30011
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment
  type: NodePort
status:
  loadBalancer: {}

thor@jump_host ~$ vi service.yaml

thor@jump_host ~$ cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-app
    type: front-end
  name: nginx-service
spec:
  ports:
  - nodePort: 30011
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx-deployment
  type: NodePort
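The three port fields in this spec play different roles. A plain-shell mnemonic of who connects where, with the values copied from the service.yaml:

```shell
# Who connects where, using the values from service.yaml
port=80         # in-cluster clients reach nginx-service on this port
targetPort=80   # the service forwards traffic to this port on the pods
nodePort=30011  # external clients reach any node's IP on this port
echo "external: <node-ip>:$nodePort -> service:$port -> pod:$targetPort"
```

This is also why the validation output later shows the service's ports as 80:30011/TCP.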

Step 4) Apply the Service changes

thor@jump_host ~$ kubectl apply -f service.yaml 

service/nginx-service created

Step 5) Validate the deployment and Service

thor@jump_host ~$ kubectl get deployment -o wide

NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
nginx-deployment   3/3     3            3           97s   nginx        nginx:latest   app=nginx-deployment

thor@jump_host ~$ kubectl get service -o wide

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE     SELECTOR
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP           3h52m   <none>
nginx-service   NodePort    10.103.135.28   <none>        80:30011/TCP      33s     app=nginx-deployment




Wednesday, 19 May 2021

Rolling Updates And Rolling Back Deployments in Kubernetes

Task:
There is a production deployment planned for next week. The Nautilus DevOps team wants to test the deployment update and rollback on Dev environment first so that they can identify the risks in advance. Below you can find more details about the plan they want to execute.

Create a namespace devops. Create a deployment called httpd-deploy under this new namespace; it should have one container called httpd, use the httpd:2.4.27 image and 6 replicas. The deployment should use the RollingUpdate strategy with maxSurge=1 and maxUnavailable=2.
Next, upgrade the deployment to version httpd:2.4.43 using a rolling update.
Finally, once all pods are updated, undo the update and roll back to the previous/original version.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Step 1) Create a Namespace

thor@jump_host /$ kubectl create namespace devops

namespace/devops created

Step 2) Create a deployment called httpd-deploy under this new namespace; it should have one container called httpd, use the httpd:2.4.27 image and 6 replicas. The deployment should use the RollingUpdate strategy with maxSurge=1 and maxUnavailable=2.

thor@jump_host ~$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deploy
  namespace: devops
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  selector:
    matchLabels:
      app: devops
  template:
    metadata:
      labels:
        app: devops
    spec:
      containers:
      - image: httpd:2.4.27
        name: httpd

Step 3) Apply the changes

thor@jump_host ~$ kubectl apply -f deploy.yaml 

deployment.apps/httpd-deploy created

Step 4) Validate the deployment version 

thor@jump_host ~$ kubectl get deployments --namespace=devops  -o wide

NAME           READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd-deploy   6/6     6            6           23s   httpd        httpd:2.4.27   app=devops

Step 5) Check the deployment Revision

thor@jump_host ~$ kubectl rollout history deployment/httpd-deploy --namespace=devops

deployment.apps/httpd-deploy 
REVISION  CHANGE-CAUSE
1         <none>

Step 6) Upgrade the deployment to version httpd:2.4.43 using a rolling update.

thor@jump_host ~$ kubectl set image deployment/httpd-deploy httpd=httpd:2.4.43 --namespace=devops --record=true

deployment.apps/httpd-deploy image updated

Step 7) Validate the deployment 

thor@jump_host ~$ kubectl get deployments --namespace=devops  -o wide

NAME           READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
httpd-deploy   6/6     6            6           114s   httpd        httpd:2.4.43   app=devops

Step 8) Undo the update and roll back to the previous/original version

thor@jump_host ~$ kubectl rollout undo deployment/httpd-deploy --to-revision=1 --namespace=devops

deployment.apps/httpd-deploy rolled back

Step 9) Validate the deployment version 

thor@jump_host ~$ kubectl get deployments --namespace=devops  -o wide

NAME           READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
httpd-deploy   6/6     6            6           2m53s   httpd        httpd:2.4.27   app=devops



Wednesday, 12 May 2021

How to write regex for apache ProxyPassMatch to reverse proxy API calls

Step 1) Shut down Apache

Step 2) Edit the httpd.conf file and load the following modules.

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

Step 3) Add the required reverse-proxy configuration to the httpd.conf file

ProxyRequests Off
ProxyPreserveHost On
ProxyPassMatch "^/api(.*)$" "http://localhost:9000/api$1"
ProxyPassReverse "/api" "http://localhost:9000/api"

Note that ProxyPassReverse does not interpret regular expressions; it rewrites response headers by plain prefix, so it takes the literal /api prefix rather than the pattern.
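To see what the `$1` back-reference carries, the same regex can be exercised with `sed -E`; the sample path below is made up:

```shell
# Simulate the ProxyPassMatch rewrite: $1 captures everything after /api.
path='/api/v1/users?id=42'
echo "$path" | sed -E 's|^/api(.*)$|http://localhost:9000/api\1|'
# prints: http://localhost:9000/api/v1/users?id=42
```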

Step 4) Start the Apache instance back up

Tuesday, 11 May 2021

Ansible Replace Module

 Task:

There is data on all app servers in Stratos DC. The Nautilus development team shared some requirements with the DevOps team to alter some of the data as per recent changes. The DevOps team is working on an Ansible playbook to accomplish the same. Below you can find more details about the task.

Create a playbook.yml under /home/thor/ansible on the jump host; an inventory is already placed under /home/thor/ansible on the jump host itself.

We have a file /opt/security/blog.txt on app server 1. Using the Ansible replace module, replace the string xFusionCorp with Nautilus in that file.

We have a file /opt/security/story.txt on app server 2. Using the Ansible replace module, replace the string Nautilus with KodeKloud in that file.

We have a file /opt/security/media.txt on app server 3. Using the Ansible replace module, replace the string KodeKloud with xFusionCorp Industries in that file.

Note: Validation will try to run playbook using command ansible-playbook -i inventory playbook.yml so please make sure playbook works this way, without passing any extra arguments.

Step 1) Verify the inventory file

thor@jump_host ~/ansible$ cat inventory 
stapp01 ansible_host=172.16.238.10 ansible_ssh_pass=Ir0nM@n ansible_user=tony
stapp02 ansible_host=172.16.238.11 ansible_ssh_pass=Am3ric@ ansible_user=steve
stapp03 ansible_host=172.16.238.12 ansible_ssh_pass=BigGr33n ansible_user=banner
thor@jump_host ~/ansible$ 


thor@jump_host ~/ansible$ cat ansible.cfg 
[defaults]
host_key_checking = False
thor@jump_host ~/ansible$ 

Step 2) Create a playbook

thor@jump_host ~/ansible$ cat playbook.yml 
---
- name: create a blank replace
  hosts: all
  become: true
  tasks:

    - name: Replace a String
      replace:
        path: /opt/security/blog.txt
        regexp: 'xFusionCorp'
        replace: "Nautilus"
      when: (ansible_user == "tony")

    - name: Replace a String
      replace:
        path: /opt/security/story.txt
        regexp: 'Nautilus'
        replace: "KodeKloud"
      when: (ansible_user == "steve")

    - name: Replace a String
      replace:
        path: /opt/security/media.txt
        regexp: 'KodeKloud'
        replace: "xFusionCorp Industries"
      when: (ansible_user == "banner")
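The `replace` module substitutes every occurrence of `regexp` in the file, not just the first. The stapp01 task behaves like this `sed` analogue (the sample text is invented):

```shell
# The g flag mirrors replace-every-occurrence behaviour of Ansible's replace module.
printf 'xFusionCorp blog\nWelcome to xFusionCorp!\n' | sed 's/xFusionCorp/Nautilus/g'
# prints:
# Nautilus blog
# Welcome to Nautilus!
```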

Step 3) Run the playbook

thor@jump_host ~/ansible$ ansible-playbook -i inventory playbook.yml

PLAY [create a blank replace] ************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************
ok: [stapp03]
ok: [stapp02]
ok: [stapp01]

TASK [Replace a String] ******************************************************************************************************
skipping: [stapp02]
skipping: [stapp03]
changed: [stapp01]

TASK [Replace a String] ******************************************************************************************************
skipping: [stapp01]
skipping: [stapp03]
changed: [stapp02]

TASK [Replace a String] ******************************************************************************************************
skipping: [stapp01]
skipping: [stapp02]
changed: [stapp03]

PLAY RECAP *******************************************************************************************************************
stapp01                    : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp02                    : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp03                    : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   

Sunday, 9 May 2021

Ansible playbook to create a file on remote host and change the permissions

 Task:

The Nautilus DevOps team is working to test several Ansible modules on servers in Stratos DC. Recently they wanted to test file creation on remote hosts using Ansible. More details about the task are given below; please proceed with the same:

a. Create an inventory file ~/playbook/inventory on jump host and add all app servers in it.
b. Create a playbook ~/playbook/playbook.yml to create a blank file /opt/opt.txt on all app servers.
c. The /opt/opt.txt file permission must be 0777.
d. The user/group owner of file /opt/opt.txt must be tony on app server 1, steve on app server 2 and banner on app server 3.
Note: Validation will try to run playbook using command ansible-playbook -i inventory playbook.yml, so please make sure playbook works this way, without passing any extra arguments.

Step 1) Create an Inventory File

thor@jump_host ~/playbook$ cat inventory 
stapp01 ansible_connection=ssh ansible_user=tony
stapp02 ansible_connection=ssh ansible_user=steve
stapp03 ansible_connection=ssh ansible_user=banner

Step 2) Create a playbook

thor@jump_host ~/playbook$ cat playbook.yml 
---
- name: create a blank file
  hosts: all
  become: true
  tasks:

    - name: Create an ansible file
      shell: touch /opt/opt.txt

    - name: Change file ownership, group and permissions for user tony
      file:
        path: /opt/opt.txt
        owner: tony
        group: tony
        mode: '0777'
      when: (ansible_user == "tony")

    - name: Change file ownership, group and permissions for user steve
      file:
        path: /opt/opt.txt
        owner: steve
        group: steve
        mode: '0777'
      when: (ansible_user == "steve")

    - name: Change file ownership, group and permissions for user banner
      file:
        path: /opt/opt.txt
        owner: banner
        group: banner
        mode: '0777'
      when: (ansible_user == "banner")


Step 3) Run the playbook

thor@jump_host ~/playbook$ ansible-playbook -i inventory playbook.yml

PLAY [create a blank file] *********************************************************************************

TASK [Gathering Facts] *************************************************************************************
ok: [stapp01]
ok: [stapp03]
ok: [stapp02]

TASK [Create an ansible file] ******************************************************************************
[WARNING]: Consider using the file module with state=touch rather than running 'touch'.  If you need to use
command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [stapp01]
changed: [stapp02]
changed: [stapp03]

TASK [Change file ownership, group and permissions for user tony] ******************************************
skipping: [stapp02]
skipping: [stapp03]
changed: [stapp01]

TASK [Change file ownership, group and permissions for user steve] *****************************************
skipping: [stapp01]
skipping: [stapp03]
changed: [stapp02]

TASK [Change file ownership, group and permissions for user banner] ****************************************
skipping: [stapp01]
skipping: [stapp02]
changed: [stapp03]

PLAY RECAP *************************************************************************************************
stapp01                    : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp02                    : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp03                    : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
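As the warning in the run above suggests, the idiomatic version uses the `file` module with `state: touch`, which creates the file and sets owner, group, and mode in one task. Since each host's `ansible_user` in the inventory matches the required owner (and, per the validation below, a group of the same name exists on each host), a single task can cover all three servers. A sketch, not the playbook that produced the run above:

```yaml
---
- name: create a blank file
  hosts: all
  become: true
  tasks:
    - name: Create /opt/opt.txt with the final owner, group and mode in one step
      file:
        path: /opt/opt.txt
        state: touch
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0777'
```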

Step 4) Validate 

thor@jump_host ~/playbook$ ssh tony@stapp01
Last login: Sun May  9 19:07:08 2021 from jump_host.devops-ansible-file_app_net
[tony@stapp01 ~]$ cd /opt/
[tony@stapp01 opt]$ ls -rlt
-rwxrwxrwx 1 tony tony 0 May  9 19:07 opt.txt
[tony@stapp01 opt]$ exit
logout
Connection to stapp01 closed.

thor@jump_host ~/playbook$ ssh steve@stapp02
Last login: Sun May  9 19:07:08 2021 from jump_host.devops-ansible-file_app_net
[steve@stapp02 ~]$ ls -rlt /opt/opt.txt 
-rwxrwxrwx 1 steve steve 0 May  9 19:07 /opt/opt.txt
[steve@stapp02 ~]$ exit
logout
Connection to stapp02 closed.

thor@jump_host ~/playbook$ ssh banner@stapp03
Last login: Sun May  9 19:07:09 2021 from jump_host.devops-ansible-file_app_net
[banner@stapp03 ~]$ ls -lrt /opt/opt.txt 
-rwxrwxrwx 1 banner banner 0 May  9 19:07 /opt/opt.txt