Saturday, 29 May 2021

Git Manage Remotes

 Task:

The xFusionCorp development team added updates to the project maintained under the /opt/news.git repo, which is cloned under /usr/src/kodekloudrepos/news. Recently some changes were made on the Git server hosted on the Storage Server in Stratos DC. The DevOps team added some new Git remotes, so we need to update the remote configuration of the /usr/src/kodekloudrepos/news repository as per the details below:

a. In the /usr/src/kodekloudrepos/news repo add a new remote dev_news and point it to the /opt/xfusioncorp_news.git repository.
b. There is a file /tmp/index.html on the same server; copy this file to the repo and add/commit it to the master branch.
c. Finally, push the master branch to this new remote.

Step 1) Log in to the Storage Server and switch to the root user

thor@jump_host /opt$ ssh natasha@ststor01
The authenticity of host 'ststor01 (172.16.238.15)' can't be established.
ECDSA key fingerprint is SHA256:vJAsZuUSoXH3n5luk4cHC4hGeA8s8cFXoy5mo2CkOCY.
ECDSA key fingerprint is MD5:33:5e:d1:86:a5:91:28:d8:fd:f6:7d:6b:83:7a:82:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ststor01,172.16.238.15' (ECDSA) to the list of known hosts.
natasha@ststor01's password: 
[natasha@ststor01 ~]$ sudo su -

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for natasha: 

Step 2) Validate Repository location

[root@ststor01 ~]# cd /usr/src/kodekloudrepos/news
[root@ststor01 news]# ls -rlt
total 4
-rw-r--r-- 1 root root 34 May 30 03:23 info.txt
[root@ststor01 news]# cd /opt/
[root@ststor01 opt]# ls -rlt
total 8
drwxr-xr-x 7 root root 4096 May 30 03:23 news.git
drwxr-xr-x 7 root root 4096 May 30 03:23 xfusioncorp_news.git


Step 3)  In /usr/src/kodekloudrepos/news repo add a new remote dev_news and point it to /opt/xfusioncorp_news.git repository.

[root@ststor01 news]# git remote add dev_news /opt/xfusioncorp_news.git
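
To confirm that the new remote was registered (this check isn't part of the original capture), a quick look at the remotes helps:

git remote -v    # dev_news should now be listed alongside the existing origin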

Step 4) There is a file /tmp/index.html on the same server; copy this file to the repo and add/commit it to the master branch.

[root@ststor01 news]# cp /tmp/index.html .
[root@ststor01 news]# git add index.html
[root@ststor01 news]# git commit -m "add index.html"
[master ca81dc7] add index.html
 1 file changed, 10 insertions(+)
 create mode 100644 index.html

Step 5) Finally, push the master branch to the new remote.

[root@ststor01 news]# git push -u dev_news master
Counting objects: 6, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 583 bytes | 0 bytes/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To /opt/xfusioncorp_news.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from dev_news.
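
As an optional sanity check, the pushed commit can be inspected directly in the bare repository (path as per the task):

git --git-dir=/opt/xfusioncorp_news.git log --oneline -1 master    # should show the "add index.html" commit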

Wednesday, 26 May 2021

Puppet Setup File Permissions

Task:

The Nautilus DevOps team has put data on all app servers in Stratos DC. The jump host is configured as the Puppet master server, and all app servers have already been configured as Puppet agent nodes. The team needs to update the content of some of the existing files as well as update their permissions, etc. Please find below more details about the task:

Create a Puppet programming file official.pp under the /etc/puppetlabs/code/environments/production/manifests directory on the master node, i.e. the Jump Server. Using the Puppet file resource, perform the below mentioned tasks.
File beta.txt already exists under /opt/finance directory on App Server 3.
Add content Welcome to xFusionCorp Industries! in file beta.txt on App Server 3.
Set permissions 0777 for file beta.txt on App Server 3.
Note: Please perform this task using official.pp only, do not create any separate inventory file.

Step 1) Create a puppet class

root@jump_host /# cd /etc/puppetlabs/code/environments/production/manifests

root@jump_host /etc/puppetlabs/code/environments/production/manifests# vi official.pp

class file_permissions {

  # Update beta.txt under /opt/finance
  file { '/opt/finance/beta.txt':
    ensure  => 'present',
    content => 'Welcome to xFusionCorp Industries!',
    mode    => '0777',
  }

}

node 'stapp03.stratos.xfusioncorp.com' {
  include file_permissions
}

Step 2) Validate puppet class

root@jump_host /etc/puppetlabs/code/environments/production/manifests# puppet parser validate official.pp 

Step 3) Login to stapp03 as a root

root@jump_host /etc/puppetlabs/code/environments/production/manifests# ssh banner@stapp03
The authenticity of host 'stapp03 (172.16.238.12)' can't be established.
ECDSA key fingerprint is SHA256:E3zIVPZa3MQk87dWVRtHnBQBIjuhkJMs66WRzrrYlNU.
ECDSA key fingerprint is MD5:4c:d5:a8:ee:3a:42:ee:6e:19:a2:c6:ab:63:b4:5f:c4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stapp03,172.16.238.12' (ECDSA) to the list of known hosts.
banner@stapp03's password: 

[banner@stapp03 ~]$ sudo su -
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.
[sudo] password for banner: 

Step 4) Run puppet agent -tv on app server 3

[root@stapp03 ~]# puppet agent -tv
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for stapp03.stratos.xfusioncorp.com
Info: Applying configuration version '1622067074'
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/content: 
--- /opt/finance/beta.txt       2021-05-26 22:04:09.896000000 +0000
+++ /tmp/puppet-file20210526-194-sqzdqw 2021-05-26 22:11:14.572000000 +0000
@@ -0,0 +1 @@
+Welcome to xFusionCorp Industries!
\ No newline at end of file
Info: Computing checksum on file /opt/finance/beta.txt
Info: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]: Filebucketed /opt/finance/beta.txt to puppet with sum d41d8cd98f00b204e9800998ecf8427e
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/content: content changed '{md5}d41d8cd98f00b204e9800998ecf8427e' to '{md5}b899e8a90bbb38276f6a00012e1956fe'
Notice: /Stage[main]/File_permissions/File[/opt/finance/beta.txt]/mode: mode changed '0644' to '0777'
Notice: Applied catalog in 0.08 seconds
[root@stapp03 ~]# 
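
As an extra check on App Server 3 (not captured in the original run), the file content and mode can be verified with:

cat /opt/finance/beta.txt                  # expect: Welcome to xFusionCorp Industries!
stat -c '%a %n' /opt/finance/beta.txt      # expect: 777 /opt/finance/beta.txt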

Saturday, 22 May 2021

Docker Copy Operations

Task:

The Nautilus DevOps team has some confidential data present on App Server 1 in Stratos Datacenter. There is a container ubuntu_latest running on the same server. We received a request to copy some of the data from the docker host to the container. Below are more details about the task.

On App Server 1 in Stratos Datacenter copy an encrypted file /tmp/nautilus.txt.gpg from docker host to ubuntu_latest container (running on same server) in /tmp/ location. Please do not try to modify this file in any way.

Step 1) Log in to App Server 1 and switch to the root user

[tony@stapp01 ~]$ sudo su -

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for tony: 

Step 2) Verify if the container ubuntu_latest is running

[root@stapp01 ~]# docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
bcdf6ee77b4e        ubuntu              "/bin/bash"         5 minutes ago       Up 5 minutes                            ubuntu_latest

Step 3) Verify if the file  /tmp/nautilus.txt.gpg  is present at the location 

[root@stapp01 ~]# ls /tmp/nautilus.txt.gpg 
/tmp/nautilus.txt.gpg

Step 4) Copy the file from docker host to container ubuntu_latest

[root@stapp01 ~]# docker cp /tmp/nautilus.txt.gpg bcdf6ee77b4e:/tmp/

Step 5) Validate if the file is present inside the container. 

[root@stapp01 ~]# docker container attach bcdf6ee77b4e
root@bcdf6ee77b4e:/# ls -l /tmp
total 4
-rw-r--r-- 1 root root 74 May 23 02:59 nautilus.txt.gpg
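
An alternative check that avoids attaching to the container's main process (if you do attach, detach with Ctrl-p Ctrl-q rather than exit) is docker exec:

docker exec ubuntu_latest ls -l /tmp/nautilus.txt.gpg
docker exec ubuntu_latest md5sum /tmp/nautilus.txt.gpg    # compare against md5sum /tmp/nautilus.txt.gpg on the host to confirm the file was not modified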

Friday, 21 May 2021

Deploy Nginx Web Server on Kubernetes Cluster

Task:
Some of the Nautilus team developers are developing a static website and they want to deploy it on the Kubernetes cluster. They want it to be highly available and scalable. Therefore, based on the requirements, the DevOps team has decided to create a deployment for it with multiple replicas. Below you can find more details about it:

Create a deployment using the nginx image with the latest tag only and remember to mention the tag, i.e. nginx:latest, and name it nginx-deployment. App labels should be app: nginx-app and type: front-end. The container should be named nginx-container; also make sure the replica count is 3.
Also create a service named nginx-service and type NodePort. The targetPort should be 80 and nodePort should be 30011.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Step 1) Create a deploy.yaml file using the --dry-run option and modify it later as per the requirements.

thor@jump_host ~$ kubectl create deploy nginx-deployment --image=nginx:latest --dry-run=client -o yaml > deploy.yaml

thor@jump_host ~$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
status: {}

thor@jump_host ~$ vi deploy.yaml 

thor@jump_host ~$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-app
    type: front-end
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx:latest
        name: nginx-container

Step 2) Apply the deployment changes

thor@jump_host ~$ kubectl apply -f deploy.yaml 

deployment.apps/nginx-deployment created

Step 3) Create a service.yaml file using the --dry-run option and modify it later as per the requirements.

 thor@jump_host ~$ kubectl expose deploy nginx-deployment --name=nginx-service --type=NodePort --port=30011 --target-port=80 --dry-run=client -o yaml > service.yaml

thor@jump_host ~$ cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-app
    type: front-end
  name: nginx-service
spec:
  ports:
  - port: 30011
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment
  type: NodePort
status:
  loadBalancer: {}

thor@jump_host ~$ vi service.yaml

thor@jump_host ~$ cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-app
    type: front-end
  name: nginx-service
spec:
  ports:
  - nodePort: 30011
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx-deployment
  type: NodePort

Step 4) Apply the Service changes

thor@jump_host ~$ kubectl apply -f service.yaml 

service/nginx-service created

Step 5) Validate the deployment and Service

thor@jump_host ~$ kubectl get deployment -o wide

NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
nginx-deployment   3/3     3            3           97s   nginx        nginx:latest   app=nginx-deployment

thor@jump_host ~$ kubectl get service -o wide

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE     SELECTOR
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP           3h52m   <none>
nginx-service   NodePort    10.103.135.28   <none>        80:30011/TCP   33s     app=nginx-deployment
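
Optionally, the NodePort can be probed from the jump host; the node IP below is a placeholder, not a value from the original run:

kubectl get nodes -o wide                   # note a node's INTERNAL-IP
curl -I http://<node-internal-ip>:30011     # expect an HTTP 200 response from nginx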




Wednesday, 19 May 2021

Rolling Updates And Rolling Back Deployments in Kubernetes

Task:
There is a production deployment planned for next week. The Nautilus DevOps team wants to test the deployment update and rollback on Dev environment first so that they can identify the risks in advance. Below you can find more details about the plan they want to execute.

Create a namespace devops. Create a deployment called httpd-deploy under this new namespace. It should have one container called httpd, use the httpd:2.4.27 image and 6 replicas. The deployment should use a RollingUpdate strategy with maxSurge=1 and maxUnavailable=2.
Next upgrade the deployment to version httpd:2.4.43 using a rolling update.
Finally, once all pods are updated undo the update and roll back to the previous/original version.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Step 1) Create a Name Space

thor@jump_host /$ kubectl create namespace devops

namespace/devops created

Step 2) Create a deployment called httpd-deploy under this new namespace. It should have one container called httpd, use the httpd:2.4.27 image and 6 replicas. The deployment should use a RollingUpdate strategy with maxSurge=1 and maxUnavailable=2.

thor@jump_host ~$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deploy
  namespace: devops
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  selector:
    matchLabels:
      app: devops
  template:
    metadata:
      labels:
        app: devops
    spec:
      containers:
      - image: httpd:2.4.27
        name: httpd

Step 3) Apply the changes

thor@jump_host ~$ kubectl apply -f deploy.yaml 

deployment.apps/httpd-deploy created

Step 4) Validate the deployment version 

thor@jump_host ~$ kubectl get deployments --namespace=devops  -o wide

NAMESPACE     NAME           READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                     SELECTOR
devops        httpd-deploy   6/6     6            6           23s   httpd        httpd:2.4.27               app=devops

Step 5) Check the deployment Revision

kubectl rollout history deployment/httpd-deploy --namespace=devops

deployment.apps/httpd-deploy 
REVISION  CHANGE-CAUSE
1         <none>

Step 6) Upgrade the deployment to version httpd:2.4.43 using a rolling update.

thor@jump_host ~$ kubectl set image deployment/httpd-deploy httpd=httpd:2.4.43 --namespace=devops --record=true

deployment.apps/httpd-deploy image updated
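
To wait until the rolling update has finished before validating, the rollout can be watched:

kubectl rollout status deployment/httpd-deploy --namespace=devops    # returns once all 6 replicas run httpd:2.4.43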

Step 7) Validate the deployment 

thor@jump_host ~$ kubectl get deployments --namespace=devops  -o wide

NAME           READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
httpd-deploy   6/6     6            6           114s   httpd        httpd:2.4.43   app=devops

Step 8) Undo the update and roll back to the previous/original version

thor@jump_host ~$ kubectl rollout undo deployment/httpd-deploy --to-revision=1 --namespace=devops

deployment.apps/httpd-deploy rolled back

Step 9) Validate the deployment version 

thor@jump_host ~$ kubectl get deployments --namespace=devops  -o wide

NAME           READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
httpd-deploy   6/6     6            6           2m53s   httpd        httpd:2.4.27   app=devops



Wednesday, 12 May 2021

How to write regex for apache ProxyPassMatch to reverse proxy API calls

Step 1) Shut down Apache

Step 2) Edit the httpd.conf file and enable the following modules.

 LoadModule proxy_module modules/mod_proxy.so

LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

LoadModule proxy_http_module modules/mod_proxy_http.so

Step 3) Add the required reverse proxy configuration to the httpd.conf file

ProxyRequests Off

ProxyPreserveHost On

ProxyPassMatch "/api(.*)" "http://localhost:9000/api$1"

ProxyPassReverse "/api(.*)" "http://localhost:9000/api$1"

Step 4) Start up the Apache instance
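
A quick way to validate the configuration and test the proxy, assuming a backend is already listening on localhost:9000 (the /api/status path below is only an example):

apachectl configtest                  # expect: Syntax OK
curl -v http://localhost/api/status   # once Apache is up, this request should be forwarded to http://localhost:9000/api/status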

Tuesday, 11 May 2021

Ansible Replace Module

 Task:

There is data on all app servers in Stratos DC. The Nautilus development team shared some requirements with the DevOps team to alter some of the data as per recent changes. The DevOps team is working to prepare an Ansible playbook to accomplish the same. Below you can find more details about the task.

Create a playbook.yml under /home/thor/ansible on the jump host; an inventory is already in place under /home/thor/ansible on the Jump host itself.

We have a file /opt/security/blog.txt on app server 1. Using Ansible replace module replace string xFusionCorp to Nautilus in that file.

We have a file /opt/security/story.txt on app server 2. Using Ansible replace module replace string Nautilus to KodeKloud in that file.

We have a file /opt/security/media.txt on app server 3. Using Ansible replace module replace string KodeKloud to xFusionCorp Industries in that file.

Note: Validation will try to run playbook using command ansible-playbook -i inventory playbook.yml so please make sure playbook works this way, without passing any extra arguments.

Step 1) Verify the inventory file

thor@jump_host ~/ansible$ cat inventory 
stapp01 ansible_host=172.16.238.10 ansible_ssh_pass=Ir0nM@n ansible_user=tony
stapp02 ansible_host=172.16.238.11 ansible_ssh_pass=Am3ric@ ansible_user=steve
stapp03 ansible_host=172.16.238.12 ansible_ssh_pass=BigGr33n ansible_user=banner
thor@jump_host ~/ansible$ 


thor@jump_host ~/ansible$ cat ansible.cfg 
[defaults]
host_key_checking = False
thor@jump_host ~/ansible$ 

Step 2) Create a playbook

thor@jump_host ~/ansible$ cat playbook.yml 
---
- name: create a blank replace
  hosts: all
  become: true
  tasks:

    - name: Replace a String
      replace:
        path: /opt/security/blog.txt
        regexp: 'xFusionCorp'
        replace: "Nautilus"
      when: (ansible_user == "tony")

    - name: Replace a String
      replace:
        path: /opt/security/story.txt
        regexp: 'Nautilus'
        replace: "KodeKloud"
      when: (ansible_user == "steve")

    - name: Replace a String
      replace:
        path: /opt/security/media.txt
        regexp: 'KodeKloud'
        replace: "xFusionCorp Industries"
      when: (ansible_user == "banner")

Step 3) Run the playbook

thor@jump_host ~/ansible$ ansible-playbook -i inventory playbook.yml

PLAY [create a blank replace] ************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************
ok: [stapp03]
ok: [stapp02]
ok: [stapp01]

TASK [Replace a String] ******************************************************************************************************
skipping: [stapp02]
skipping: [stapp03]
changed: [stapp01]

TASK [Replace a String] ******************************************************************************************************
skipping: [stapp01]
skipping: [stapp03]
changed: [stapp02]

TASK [Replace a String] ******************************************************************************************************
skipping: [stapp01]
skipping: [stapp02]
changed: [stapp03]

PLAY RECAP *******************************************************************************************************************
stapp01                    : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp02                    : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp03                    : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
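
To spot-check the replacements without writing another playbook, ad-hoc commands can be run against the same inventory (paths as per the task; each command fails if the expected string is absent):

ansible stapp01 -i inventory -b -a "grep -c Nautilus /opt/security/blog.txt"
ansible stapp02 -i inventory -b -a "grep -c KodeKloud /opt/security/story.txt"
ansible stapp03 -i inventory -b -a "grep -c 'xFusionCorp Industries' /opt/security/media.txt"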

Sunday, 9 May 2021

Ansible playbook to create a file on remote host and change the permissions

 Task:

The Nautilus DevOps team is working to test several Ansible modules on servers in Stratos DC. Recently they wanted to test file creation on remote hosts using Ansible. More details about the task are given below. Please proceed with the same:

a. Create an inventory file ~/playbook/inventory on jump host and add all app servers in it.
b. Create a playbook ~/playbook/playbook.yml to create a blank file /opt/opt.txt on all app servers.
c. The /opt/opt.txt file permission must be 0777.
d. The user/group owner of file /opt/opt.txt must be tony on app server 1, steve on app server 2 and banner on app server 3.
Note: Validation will try to run playbook using command ansible-playbook -i inventory playbook.yml, so please make sure playbook works this way, without passing any extra arguments.

Step 1) Create an Inventory File

thor@jump_host ~/playbook$ cat inventory 
stapp01 ansible_connection=ssh ansible_user=tony
stapp02 ansible_connection=ssh ansible_user=steve
stapp03 ansible_connection=ssh ansible_user=banner

Step 2) Create a playbook

thor@jump_host ~/playbook$ cat playbook.yml 
---
- name: create a blank file
  hosts: all
  become: true
  tasks:

    - name: Create a file
      shell: touch /opt/opt.txt

    - name: Change file ownership, group and permissions to tony
      file:
        path: /opt/opt.txt
        owner: tony
        group: tony
        mode: '0777'
      when: (ansible_user == "tony")

    - name: Change file ownership, group and permissions to steve
      file:
        path: /opt/opt.txt
        owner: steve
        group: steve
        mode: '0777'
      when: (ansible_user == "steve")

    - name: Change file ownership, group and permissions to banner
      file:
        path: /opt/opt.txt
        owner: banner
        group: banner
        mode: '0777'
      when: (ansible_user == "banner")


Step 3) Run the playbook

thor@jump_host ~/playbook$ ansible-playbook -i inventory playbook.yml

PLAY [create a blank file] *********************************************************************************

TASK [Gathering Facts] *************************************************************************************
ok: [stapp01]
ok: [stapp03]
ok: [stapp02]

TASK [Create an ansible file] ******************************************************************************
[WARNING]: Consider using the file module with state=touch rather than running 'touch'.  If you need to use
command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [stapp01]
changed: [stapp02]
changed: [stapp03]

TASK [Change file ownership, group and permissions for user tony] ******************************************
skipping: [stapp02]
skipping: [stapp03]
changed: [stapp01]

TASK [Change file ownership, group and permissions for user steve] *****************************************
skipping: [stapp01]
skipping: [stapp03]
changed: [stapp02]

TASK [Change file ownership, group and permissions for user banner] ****************************************
skipping: [stapp01]
skipping: [stapp02]
changed: [stapp03]

PLAY RECAP *************************************************************************************************
stapp01                    : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp02                    : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
stapp03                    : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
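
As the warning above suggests, the touch step could also be done with the file module instead of shell; an equivalent ad-hoc sketch (an alternative, not what was actually run) would be:

ansible all -i inventory -b -m file -a "path=/opt/opt.txt state=touch mode=0777"
# ownership would still be set per host, as in the conditional tasks of the playbook above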

Step 4) Validate 

thor@jump_host ~/playbook$ ssh tony@stapp01
Last login: Sun May  9 19:07:08 2021 from jump_host.devops-ansible-file_app_net
[tony@stapp01 ~]$ cd /opt/
-rwxrwxrwx 1 tony tony 0 May  9 19:07 opt.txt
[tony@stapp01 opt]$ exit
logout
Connection to stapp01 closed.

thor@jump_host ~/playbook$ ssh steve@stapp02
Last login: Sun May  9 19:07:08 2021 from jump_host.devops-ansible-file_app_net
[steve@stapp02 ~]$ ls -rlt /opt/opt.txt 
-rwxrwxrwx 1 steve steve 0 May  9 19:07 /opt/opt.txt
[steve@stapp02 ~]$ exit
logout
Connection to stapp02 closed.

thor@jump_host ~/playbook$ ssh banner@stapp03
Last login: Sun May  9 19:07:09 2021 from jump_host.devops-ansible-file_app_net
[banner@stapp03 ~]$ ls -lrt /opt/opt.txt 
-rwxrwxrwx 1 banner banner 0 May  9 19:07 /opt/opt.txt




Monday, 19 April 2021

Run Nginx as a Docker Container

Task:

The Nautilus DevOps team is testing some application deployments on some of the application servers. They need to deploy an nginx container on Application Server 3. Please complete the task as per the details given below:

On Application Server 3 create a container named nginx_3 using the image nginx with the alpine tag and make sure the container is in a running state.

Solution:


thor@jump_host /$ ssh banner@stapp03
The authenticity of host 'stapp03 (172.16.238.12)' can't be established.
ECDSA key fingerprint is SHA256:AJF2x1pj8Ms5Xff85dXW3eULtBP32HV5LdA0H98Uqms.
ECDSA key fingerprint is MD5:ab:7d:52:1c:6b:49:cd:ee:4b:35:e8:43:2f:1b:93:2e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stapp03,172.16.238.12' (ECDSA) to the list of known hosts.
banner@stapp03's password: 
[banner@stapp03 ~]$ sudo su -

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for banner: 
[root@stapp03 ~]# pwd
/root
[root@stapp03 ~]# docker run -it --name nginx_3 -p 80:80 -d nginx:alpine
Unable to find image 'nginx:alpine' locally
alpine: Pulling from library/nginx
540db60ca938: Pull complete 
197dc8475a23: Pull complete 
39ea657007e5: Pull complete 
37afbf7d4c3d: Pull complete 
0c01f42c3df7: Pull complete 
d590d87c9181: Pull complete 
Digest: sha256:07ab71a2c8e4ecb19a5a5abcfb3a4f175946c001c8af288b1aa766d67b0d05d2
Status: Downloaded newer image for nginx:alpine
4c273c6a7f1b12e58dd2cb5e105172d53cc19494c20cb7a368d14089bd021af8

[root@stapp03 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
4c273c6a7f1b        nginx:alpine        "/docker-entrypoint.…"   11 seconds ago      Up 10 seconds       0.0.0.0:80->80/tcp   nginx_3
[root@stapp03 ~]# 
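
Since the container publishes port 80 on the host, an optional quick check (not part of the original capture):

curl -Is http://localhost:80 | head -n 1    # expect: HTTP/1.1 200 OK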

Friday, 16 April 2021

Create Replicaset in Kubernetes Cluster

Task: 

The Nautilus DevOps team is going to deploy some applications on kubernetes cluster as they are planning to migrate some of their applications there. Recently one of the team members has been assigned a task to write a template as per details mentioned below:


Create a ReplicaSet using the nginx image with the latest tag only and remember to mention the tag, i.e. nginx:latest, and name it nginx-replicaset.

Labels app should be nginx_app, labels type should be front-end. The container should be named nginx-container; also make sure the replica count is 4.

Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Solution:

Step 1) Create a yaml file

thor@jump_host /$ cat /tmp/rs.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    type: front-end
    app: nginx_app

spec:
  replicas: 4
  selector:
    matchLabels:
      type: front-end
      app: nginx_app
  template:
    metadata:
      labels:
        type: front-end
        app: nginx_app
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
thor@jump_host /$ 

Step 2) Execute the yaml file to create a ReplicaSet

thor@jump_host /$ kubectl create -f /tmp/rs.yaml 
replicaset.apps/nginx-replicaset created
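
Optionally, the ReplicaSet and its pods can be confirmed with:

kubectl get replicaset nginx-replicaset -o wide
kubectl get pods -l app=nginx_app    # expect 4 pods in Running state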


Monday, 12 April 2021

Build a ubuntu image and run apache instance using Dockerfile

As per recent requirements shared by the Nautilus application development team, they need custom images created for one of their projects. Several of the initial testing requirements have already been shared with the DevOps team. Therefore, create a docker file /opt/docker/Dockerfile (please keep the D capital in Dockerfile) on App Server 3 in Stratos DC and configure it to build an image with the following requirements:

a. Use ubuntu as the base image.

b. Install apache2 and configure it to work on port 8086. (do not update any other Apache configuration settings like document root etc).

Step 1) Create a Dockerfile

[root@stapp03 docker]# cd /opt/docker

[root@stapp03 docker]# cat Dockerfile 

FROM ubuntu
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y
RUN apt-get install apache2 -y 
RUN apt-get install apache2-utils -y 
RUN apt-get clean 
RUN sed -i 's/80/8086/g' /etc/apache2/ports.conf
EXPOSE 8086 
ENTRYPOINT ["/usr/sbin/apache2ctl"]
CMD ["-D","FOREGROUND","-k", "start"]

Step 2) Build the docker file

[root@stapp03 docker]# docker build . 
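
The command above builds an untagged image; a hedged sketch for tagging and smoke-testing it (the tag custom-apache:v1 and container name apache-test are just examples) could be:

docker build -t custom-apache:v1 /opt/docker
docker run -d --name apache-test -p 8086:8086 custom-apache:v1
curl -I http://localhost:8086    # expect an Apache response on port 8086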

Saturday, 10 April 2021

Build an Apache Instance using a Dockerfile

Step 1) Create a Dockerfile

 [root@stapp03 docker]# ls -rtl
total 12
drwxr-xr-x 2 root root 4096 Apr 11 02:53 html
drwxr-xr-x 2 root root 4096 Apr 11 02:53 certs
-rw-r--r-- 1 root root  518 Apr 11 02:53 Dockerfile

[root@stapp03 docker]# cat Dockerfile 
FROM httpd:2.4.43

RUN sed -i "s/Listen 80/Listen 8080/g" /usr/local/apache2/conf/httpd.conf
RUN sed -i '/LoadModule\ ssl_module modules\/mod_ssl.so/s/^#//g' conf/httpd.conf
RUN sed -i '/LoadModule\ socache_shmcb_module modules\/mod_socache_shmcb.so/s/^#//g' conf/httpd.conf
RUN sed -i '/Include\ conf\/extra\/httpd-ssl.conf/s/^#//g' conf/httpd.conf

COPY certs/server.crt /usr/local/apache2/conf/server.crt
COPY certs/server.key /usr/local/apache2/conf/server.key
COPY html/index.html /usr/local/apache2/htdocs/


Step 2) Build the docker file.

[root@stapp03 docker]# docker build . 
Sending build context to Docker daemon  9.216kB
Step 1/8 : FROM httpd:2.4.43
2.4.43: Pulling from library/httpd
bf5952930446: Pull complete 
3d3fecf6569b: Pull complete 
b5fc3125d912: Pull complete 
3c61041685c0: Pull complete 
34b7e9053f76: Pull complete 
Digest: sha256:cd88fee4eab37f0d8cd04b06ef97285ca981c27b4d685f0321e65c5d4fd49357
Status: Downloaded newer image for httpd:2.4.43
 ---> f1455599cc2e
Step 2/8 : RUN sed -i "s/Listen 80/Listen 8080/g" /usr/local/apache2/conf/httpd.conf
 ---> Running in 8a15fb135929
Removing intermediate container 8a15fb135929
 ---> 68a60d63d706
Step 3/8 : RUN sed -i '/LoadModule\ ssl_module modules\/mod_ssl.so/s/^#//g' conf/httpd.conf
 ---> Running in 4428dc0a8f75
Removing intermediate container 4428dc0a8f75
 ---> 120807ee4b24
Step 4/8 : RUN sed -i '/LoadModule\ socache_shmcb_module modules\/mod_socache_shmcb.so/s/^#//g' conf/httpd.conf
 ---> Running in 16a20cd89c06
Removing intermediate container 16a20cd89c06
 ---> adccf5d023cc
Step 5/8 : RUN sed -i '/Include\ conf\/extra\/httpd-ssl.conf/s/^#//g' conf/httpd.conf
 ---> Running in 20517b5c3139
Removing intermediate container 20517b5c3139
 ---> eb9a29e245f6
Step 6/8 : COPY certs/server.crt /usr/local/apache2/conf/server.crt
 ---> 778c38d0b9e8
Step 7/8 : COPY certs/server.key /usr/local/apache2/conf/server.key
 ---> a296ecb733bf
Step 8/8 : COPY html/index.html /usr/local/apache2/htdocs/
 ---> f8770710ddf9
Successfully built f8770710ddf9

Step 3) Verify that the image was built.

[root@stapp03 docker]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
<none>              <none>              f8770710ddf9        6 seconds ago       166MB
httpd               2.4.43              f1455599cc2e        8 months ago        166MB
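
To smoke-test the freshly built image (the container name and port mapping below are assumptions, not from the original run):

docker run -d --name httpd-test -p 8080:8080 f8770710ddf9
curl -I http://localhost:8080/    # expect HTTP/1.1 200 OK serving the copied index.html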

Wednesday, 17 March 2021

Ansible playbook loop with condition example

[osboxes@master ansible-playbooks]$ cat loopwithcondition.yml
- hosts: all
  tasks:
  - name: Ansible loop with conditional example
    debug:
      msg: "{{ item }}"
    loop:
      - "hello1"
      - "hello2"
      - "hello3"
    when: ansible_distribution == "CentOS"
[osboxes@master ansible-playbooks]$ ansible-playbook loopwithcondition.yml -i inventory.txt

PLAY [all] **************************************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************
ok: [192.168.1.182]

TASK [Ansible loop with conditional example] ****************************************************************************************************************
ok: [192.168.1.182] => (item=hello1) => {
    "msg": "hello1"
}
ok: [192.168.1.182] => (item=hello2) => {
    "msg": "hello2"
}
ok: [192.168.1.182] => (item=hello3) => {
    "msg": "hello3"
}

PLAY RECAP **************************************************************************************************************************************************
192.168.1.182              : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Tuesday, 16 March 2021

Ansible when condition with "in" keyword

Task: Check the disk space of the file system when the operating system is "RedHat", "CentOS" or "Debian", using the "when" condition and the "in" keyword

[osboxes@master ansible-playbooks]$ cat conditions-in.yml
---
- name: check the disk space on the file system
  hosts: all
  vars:
    distributions:
      - RedHat
      - CentOS
      - Debian
  tasks:
    - name: Check the disk space on the each server
      shell: df -h
      register: result
    - debug:
        var: result.stdout_lines
      when: ansible_distribution in distributions


[osboxes@master ansible-playbooks]$ ansible-playbook conditions-in.yml -i inventory.txt

PLAY [check the disk space on the file system] **************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************
ok: [192.168.1.182]

TASK [Check the disk space on the each server] **************************************************************************************************************
changed: [192.168.1.182]

TASK [debug] ************************************************************************************************************************************************
ok: [192.168.1.182] => {
    "result.stdout_lines": [
        "Filesystem      Size  Used Avail Use% Mounted on",
        "devtmpfs        887M     0  887M   0% /dev",
        "tmpfs           914M     0  914M   0% /dev/shm",
        "tmpfs           914M  9.2M  905M   1% /run",
        "tmpfs           914M     0  914M   0% /sys/fs/cgroup",
        "/dev/sda2       236G  6.9G  230G   3% /",
        "/dev/sda5       254G  1.9G  252G   1% /home",
        "/dev/sda1       976M  188M  722M  21% /boot",
        "tmpfs           183M  1.2M  182M   1% /run/user/42",
        "tmpfs           183M  4.0K  183M   1% /run/user/1000"
    ]
}

PLAY RECAP **************************************************************************************************************************************************
192.168.1.182              : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Monday, 15 March 2021

Deploy an App on Docker Containers using docker-compose

Task:

The Nautilus Application development team recently finished development of one of the apps that they want to deploy on a containerized platform. The Nautilus Application development and DevOps teams met to discuss some of the basic pre-requisites and requirements to complete the deployment. The team wants to test the deployment on one of the app servers before going live and set up a complete containerized stack using a docker compose file. Below are the details of the task:

On App Server 3 in Stratos Datacenter create a docker compose file /opt/finance/docker-compose.yml (should be named exactly).

The compose file should deploy two services (web and DB), and each service should deploy a container as per the details below:

For web service:

a. Container name must be php_host.

b. Use image php with any apache tag. Check here for more details https://hub.docker.com/_/php?tab=tags.

c. Map php_host container's port 80 with host port 5001

d. Map php_host container's /var/www/html volume with host volume /var/www/html.

For DB service:

a. Container name must be mysql_host.

b. Use image mariadb with any tag (preferably latest). Check here for more details https://hub.docker.com/_/mariadb?tab=tags.

c. Map mysql_host container's port 3306 with host port 3306

d. Map mysql_host container's /var/lib/mysql volume with host volume /var/lib/mysql.

e. Set MYSQL_DATABASE=database_host and use any custom user ( except root ) with some complex password for DB connections.

After running docker-compose up you can access the app with curl command curl <server-ip or hostname>:5001/
For more details check here: https://hub.docker.com/_/mariadb?tab=description

Step 1) Login to Application server 3 and switch to root user.

thor@jump_host /$ ssh banner@stapp03
banner@stapp03's password: 
[banner@stapp03 ~]$ sudo su -

Step 2) Create a docker-compose.yml file with the specifications given in the task.

[root@stapp03 ~]# cd /opt/finance/
[root@stapp03 finance]# ls -rlt
total 0
[root@stapp03 finance]# vi docker-compose.yml

web:
  container_name: php_host
  image: php:apache
  ports:
    - 5001:80
  volumes:
    - /var/www/html:/var/www/html

db:
  container_name: mysql_host
  image: mariadb:latest
  ports:
    - 3306:3306
  volumes:
    - /var/lib/mysql:/var/lib/mysql
  environment:
    MYSQL_DATABASE: database_host
    MYSQL_ROOT_PASSWORD: abc#123
    MYSQL_USER: pavan
    MYSQL_PASSWORD: Wordpress#134
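
Before bringing the stack up, the compose file can optionally be checked for syntax and structural errors:

cd /opt/finance
docker-compose config    # prints the parsed configuration, or errors out if the YAML is invalid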

Step 3) Run the following command

[root@stapp03 finance]# docker-compose up
Pulling web (php:apache)...
apache: Pulling from library/php
6f28985ad184: Pull complete
db883aae18bc: Pull complete
ffae70ea03a9: Pull complete
1e8027612378: Pull complete
3ec32e53dce5: Pull complete
3bb74037bf77: Pull complete
feda0fbd85b1: Pull complete
08cbfcace66f: Pull complete
90e59842632d: Pull complete
5a29d8ab032c: Pull complete
5435aacb3255: Pull complete
57f9bba5897a: Pull complete
7c89fe480eda: Pull complete
Pulling db (mariadb:latest)...
latest: Pulling from library/mariadb
5d3b2c2d21bb: Pull complete
3fc2062ea667: Pull complete
75adf526d75b: Pull complete
62aa2722e098: Pull complete
756d25563a9f: Pull complete
2022ea4dab77: Pull complete
0ab4098b0f7c: Pull complete
d03413915fdf: Pull complete
fb8e671b1408: Pull complete
5e2452f3fb5c: Pull complete
c2da3a6fe532: Pull complete
153b7df268e7: Pull complete
Creating php_host   ... done
Creating mysql_host ... done
Attaching to mysql_host, php_host
mysql_host | 2021-03-16 01:43:25+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.9+maria~focal started.
mysql_host | 2021-03-16 01:43:25+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mysql_host | 2021-03-16 01:43:25+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.9+maria~focal started.
mysql_host | 2021-03-16 01:43:25+00:00 [Note] [Entrypoint]: Initializing database files
php_host | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
php_host | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
php_host | [Tue Mar 16 01:43:28.007827 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/8.0.3 configured -- resuming normal operations
php_host | [Tue Mar 16 01:43:28.007869 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
mysql_host | 
mysql_host | 
mysql_host | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mysql_host | To do so, start the server, then issue the following commands:
mysql_host | 
mysql_host | '/usr/bin/mysqladmin' -u root password 'new-password'
mysql_host | '/usr/bin/mysqladmin' -u root -h  password 'new-password'
mysql_host | 
mysql_host | Alternatively you can run:
mysql_host | '/usr/bin/mysql_secure_installation'
mysql_host | 
mysql_host | which will also give you the option of removing the test
mysql_host | databases and anonymous user created by default.  This is
mysql_host | strongly recommended for production servers.
mysql_host | 
mysql_host | See the MariaDB Knowledgebase at https://mariadb.com/kb or the
mysql_host | MySQL manual for more instructions.
mysql_host | 
mysql_host | Please report any problems at https://mariadb.org/jira
mysql_host | 
mysql_host | The latest information about MariaDB is available at https://mariadb.org/.
mysql_host | You can find additional information about the MySQL part at:
mysql_host | https://dev.mysql.com
mysql_host | Consider joining MariaDB's strong and vibrant community:
mysql_host | https://mariadb.org/get-involved/
mysql_host | 
mysql_host | 2021-03-16 01:43:28+00:00 [Note] [Entrypoint]: Database files initialized
mysql_host | 2021-03-16 01:43:28+00:00 [Note] [Entrypoint]: Starting temporary server
mysql_host | 2021-03-16 01:43:28+00:00 [Note] [Entrypoint]: Waiting for server startup
mysql_host | 2021-03-16  1:43:28 0 [Note] mysqld (mysqld 10.5.9-MariaDB-1:10.5.9+maria~focal) starting as process 96 ...
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Uses event mutexes
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Number of pools: 1
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mysql_host | 2021-03-16  1:43:28 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Using Linux native AIO
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Completed initialization of buffer pool
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: 128 rollback segments are active.
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: 10.5.9 started; log sequence number 45130; transaction id 20
mysql_host | 2021-03-16  1:43:28 0 [Note] Plugin 'FEEDBACK' is disabled.
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql_host | 2021-03-16  1:43:28 0 [Note] InnoDB: Buffer pool(s) load completed at 210316  1:43:28
mysql_host | 2021-03-16  1:43:28 0 [Warning] 'user' entry 'root@35db7c5d0517' ignored in --skip-name-resolve mode.
mysql_host | 2021-03-16  1:43:28 0 [Warning] 'proxies_priv' entry '@% root@35db7c5d0517' ignored in --skip-name-resolve mode.
mysql_host | 2021-03-16  1:43:28 0 [Note] Reading of all Master_info entries succeeded
mysql_host | 2021-03-16  1:43:28 0 [Note] Added new Master_info '' to hash table
mysql_host | 2021-03-16  1:43:28 0 [Note] mysqld: ready for connections.
mysql_host | Version: '10.5.9-MariaDB-1:10.5.9+maria~focal'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
mysql_host | 2021-03-16 01:43:29+00:00 [Note] [Entrypoint]: Temporary server started.
mysql_host | Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
mysql_host | Warning: Unable to load '/usr/share/zoneinfo/leapseconds' as time zone. Skipping it.
mysql_host | Warning: Unable to load '/usr/share/zoneinfo/tzdata.zi' as time zone. Skipping it.
mysql_host | 2021-03-16  1:43:31 5 [Warning] 'proxies_priv' entry '@% root@35db7c5d0517' ignored in --skip-name-resolve mode.
mysql_host | 2021-03-16 01:43:31+00:00 [Note] [Entrypoint]: Creating database database_host
mysql_host | 2021-03-16 01:43:31+00:00 [Note] [Entrypoint]: Creating user pavan
mysql_host | 2021-03-16 01:43:31+00:00 [Note] [Entrypoint]: Giving user pavan access to schema database_host
mysql_host | 
mysql_host | 2021-03-16 01:43:31+00:00 [Note] [Entrypoint]: Stopping temporary server
mysql_host | 2021-03-16  1:43:31 0 [Note] mysqld (initiated by: root[root] @ localhost []): Normal shutdown
mysql_host | 2021-03-16  1:43:31 0 [Note] Event Scheduler: Purging the queue. 0 events
mysql_host | 2021-03-16  1:43:31 0 [Note] InnoDB: FTS optimize thread exiting.
mysql_host | 2021-03-16  1:43:31 0 [Note] InnoDB: Starting shutdown...
mysql_host | 2021-03-16  1:43:31 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mysql_host | 2021-03-16  1:43:31 0 [Note] InnoDB: Buffer pool(s) dump completed at 210316  1:43:31
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Shutdown completed; log sequence number 45142; transaction id 21
mysql_host | 2021-03-16  1:43:32 0 [Note] mysqld: Shutdown complete
mysql_host | 
mysql_host | 2021-03-16 01:43:32+00:00 [Note] [Entrypoint]: Temporary server stopped
mysql_host | 
mysql_host | 2021-03-16 01:43:32+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
mysql_host | 
mysql_host | 2021-03-16  1:43:32 0 [Note] mysqld (mysqld 10.5.9-MariaDB-1:10.5.9+maria~focal) starting as process 1 ...
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Uses event mutexes
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Number of pools: 1
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mysql_host | 2021-03-16  1:43:32 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Using Linux native AIO
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Completed initialization of buffer pool
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: 128 rollback segments are active.
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: 10.5.9 started; log sequence number 45142; transaction id 20
mysql_host | 2021-03-16  1:43:32 0 [Note] Plugin 'FEEDBACK' is disabled.
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql_host | 2021-03-16  1:43:32 0 [Note] InnoDB: Buffer pool(s) load completed at 210316  1:43:32
mysql_host | 2021-03-16  1:43:33 0 [Note] Server socket created on IP: '::'.
mysql_host | 2021-03-16  1:43:33 0 [Warning] 'proxies_priv' entry '@% root@35db7c5d0517' ignored in --skip-name-resolve mode.
mysql_host | 2021-03-16  1:43:33 0 [Note] Reading of all Master_info entries succeeded
mysql_host | 2021-03-16  1:43:33 0 [Note] Added new Master_info '' to hash table
mysql_host | 2021-03-16  1:43:33 0 [Note] mysqld: ready for connections.
mysql_host | Version: '10.5.9-MariaDB-1:10.5.9+maria~focal'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution


php_host | 172.16.238.3 - - [16/Mar/2021:01:49:57 +0000] "GET / HTTP/1.1" 200 358 "-" "curl/7.29.0"



 

Monday, 8 March 2021

Ansible when condition examples

Task: Install the httpd package using the yum module, then reload the service daemon, enable the service, and start it when the operating system is "CentOS"

[osboxes@master ansible-playbooks]$ cat conditions.yml
---
- name: Install packages
  hosts: all
  become: true
  become_user: root
  tasks:
    - name: Install httpd
      yum:
        name: httpd
        state: present

    - name: load apache service
      systemd: daemon_reload=yes name=httpd
      when: (ansible_distribution == "CentOS")

    - name: enable apache service
      service: name=httpd enabled=yes
      when: (ansible_distribution == "CentOS")

    - name: start apache service
      service: name=httpd state=started
      when: (ansible_distribution == "CentOS")


[osboxes@master ansible-playbooks]$ ansible-playbook conditions.yml -i inventory.txt --syntax-check

playbook: conditions.yml

[osboxes@master ansible-playbooks]$ ansible-playbook conditions.yml -i inventory.txt

PLAY [Install packages] *************************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************
ok: [192.168.1.182]

TASK [Install httpd] ****************************************************************************************************************************************
ok: [192.168.1.182]

TASK [load apache service] **********************************************************************************************************************************
changed: [192.168.1.182]

TASK [enable apache service] ********************************************************************************************************************************
ok: [192.168.1.182]

TASK [start apache service] *********************************************************************************************************************************
changed: [192.168.1.182]

PLAY RECAP **************************************************************************************************************************************************
192.168.1.182              : ok=5    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Puppet Setup NTP Server

 Task:

While troubleshooting one of the issues on the app servers in Stratos Datacenter, the DevOps team identified the root cause: the time isn't synchronized properly among all app servers, which sometimes causes issues. So the team has decided to use a specific time server for all app servers so that they all remain in sync. This task needs to be done using Puppet, so please complete the task as per the details mentioned below:

Create a puppet programming file apps.pp under the /etc/puppetlabs/code/environments/production/manifests directory on the puppet master node, i.e. on the Jump Server. Within the programming file define a custom class ntpconfig to install and configure the ntp server on all app servers.

Also add NTP Server server 1.africa.pool.ntp.org in default configuration file on all app servers.

Please note: do not try to start/restart/stop the ntp service, as we already have a scheduled restart for this service tonight and we don't want these changes to be applied right now.

Note: Please perform this task using apps.pp only, do not try to create any separate inventory file.

Step 1) Create puppet class file apps.pp on jump host where puppet master is running

root@jump_host /etc/puppetlabs/code/environments/production/manifests# cat apps.pp 
class ntpconfig {
  # Install the NTP package
  package { "ntp":
    ensure => 'present',
  }

  # Add content to the ntp configuration file
  file { "/etc/ntp.conf":
    ensure  => "present",
    content => "server 1.africa.pool.ntp.org",
  }

  # Start the NTP service
  service { "ntpd":
    ensure => "running",
  }
}

node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' {
  include ntpconfig
}

Step 2) Validate the syntax 

root@jump_host /etc/puppetlabs/code/environments/production/manifests# puppet parser validate apps.pp

Step 3) Run puppet agent -tv on app server 1, Server 2 and Server 3 and validate if ntp server is running.

[root@stapp01 ~]# puppet agent -tv
[root@stapp01 ~]# ps -ef | grep ntp
ntp        413     1  0 00:50 ?        00:00:00 /usr/sbin/ntpd -u ntp:ntp -g
root       416   164  0 00:50 pts/0    00:00:00 grep --color=auto ntp

[root@stapp02 ~]# puppet agent -tv
[root@stapp02 ~]# ps -ef | grep ntp
ntp        413     1  0 00:50 ?        00:00:00 /usr/sbin/ntpd -u ntp:ntp -g
root       416   164  0 00:50 pts/0    00:00:00 grep --color=auto ntp

[root@stapp03 ~]# puppet agent -tv
[root@stapp03 ~]# ps -ef | grep ntp
ntp        413     1  0 00:50 ?        00:00:00 /usr/sbin/ntpd -u ntp:ntp -g
root       416   164  0 00:50 pts/0    00:00:00 grep --color=auto ntp


Saturday, 6 March 2021

Rolling Updates in Kubernetes

 Task:

We have an application running on the Kubernetes cluster using the nginx web server. The Nautilus application development team has pushed some of the latest features to the prod branch and those need to be deployed. The Nautilus DevOps team has created an image nginx:1.18 with the latest changes.

Perform a rolling update for this application and incorporate the nginx:1.18 image. The deployment name is nginx-deployment.

Make sure all pods are up and running after the update.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.

Step 1) Check if the deployment nginx-deployment is present

thor@jump_host ~$ kubectl get deployment 
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           8m19s

Step 2) Check if the pods are running

thor@jump_host ~$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74fb588559-7447s   1/1     Running   0          8m37s
nginx-deployment-74fb588559-ch6mm   1/1     Running   0          8m37s
nginx-deployment-74fb588559-g9cm7   1/1     Running   0          8m37s

Step 3) Check what deployment version is running on the container

thor@jump_host ~$ kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS        IMAGES       SELECTOR
nginx-deployment   3/3     3            3           8m56s   nginx-container   nginx:1.16   app=nginx-app

Step 4) Update the image version to nginx:1.18 from nginx:1.16

thor@jump_host ~$ kubectl set image deployment/nginx-deployment nginx-container=nginx:1.18 --record=true
deployment.apps/nginx-deployment image updated
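
To wait for the rollout to complete before checking the pods, an optional step:

kubectl rollout status deployment/nginx-deployment    # returns once all replicas are running nginx:1.18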

Step 5)  Check the status of the pods and wait until they come back to running state

thor@jump_host ~$ kubectl get pods 
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-74fb588559-7447s   1/1     Running             0          14m
nginx-deployment-74fb588559-ch6mm   0/1     Terminating         0          14m
nginx-deployment-74fb588559-g9cm7   1/1     Running             0          14m
nginx-deployment-7b6877b9b5-6qcw6   0/1     ContainerCreating   0          2s
nginx-deployment-7b6877b9b5-l8v6c   1/1     Running             0          9s

thor@jump_host ~$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7b6877b9b5-6qcw6   1/1     Running   0          46s
nginx-deployment-7b6877b9b5-776br   1/1     Running   0          44s
nginx-deployment-7b6877b9b5-l8v6c   1/1     Running   0          53s

thor@jump_host ~$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-7b6877b9b5-6qcw6   1/1     Running   0          109s   10.244.1.7   node01   <none>           <none>
nginx-deployment-7b6877b9b5-776br   1/1     Running   0          107s   10.244.1.8   node01   <none>           <none>
nginx-deployment-7b6877b9b5-l8v6c   1/1     Running   0          116s   10.244.1.6   node01   <none>           <none>

Step 6) Check the rollout history 

thor@jump_host ~$ kubectl rollout history deployment 
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment/nginx-deployment nginx-container=nginx:1.18 --record=true

Step 7) Check the image version now

thor@jump_host ~$ kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS        IMAGES       SELECTOR
nginx-deployment   3/3     3            3           16m   nginx-container   nginx:1.18   app=nginx-app





Friday, 5 March 2021

GIT - Working with Remote repository GITHUB

Setup SSH Authentication with GITHUB Repository

Generate SSH keys on your Linux machine

[osboxes@master ansible-playbooks]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/osboxes/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/osboxes/.ssh/id_rsa.
Your public key has been saved in /home/osboxes/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:iDSkhURVA9g/UILBIkV4mAdjdkr6QpF2iHFMhCPsrqk osboxes@linuxhost
The key's randomart image is:
+---[RSA 3072]----+
|*^%O*++          |
|/B%=o. .         |
|*Bo oo           |
|.o . oo.         |
|o . . ..S        |
| o               |
|..               |
|o                |
|E                |
+----[SHA256]-----+

[osboxes@master ansible-playbooks]$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHsN99astKP+jnHdSu6EtiY072Rllo4ILiZ4wwU5KISwjKVnUy9XGjVmMbFdQAJOczBRtw7AVmuo1vh5ieKa+BRvdJe1Sa7fZQVyHWqKxk+wkA1Mt6sSIkz3zzdte8hE9Ojmuqrqw3evjcwBywzf2Tz03JqSN2hhaXeQl8eK/DbO4y+NQXM2nOhVAGhpj1JWSCwXS9a1hWBF2OSHpJsmvcqVyDDWZvpjDoAwvJjd+n2XuZqyns/PVTy1WPq7AfWBigiGI8OTi/K97MKtDrSv0JgjX/aTVz5sirWxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxqIzrPkq+MfpKDMtORfjjbzIdwuRgngZLUtPDI1EywCy2gNhaWwoeG3CTQ/aR1OsMDs/74u/S5ECcQTYW7Vpb/slQhm8I+yvyfqUky7M9zuLMaMZyfRwdawrMExtU= osboxes@master

Create a new GitHub account if you don't have one:

https://github.com

Go to Settings -> SSH and GPG keys -> SSH keys -> Click on New SSH Key

Copy the id_rsa.pub key from the Linux machine and paste it into the New SSH Key form.

Test the connection to GitHub by entering the following command on your Linux machine:

[osboxes@master ansible-playbooks]$ ssh -T git@github.com

The authenticity of host 'github.com (140.82.114.4)' can't be established.
RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com,140.82.114.4' (RSA) to the list of known hosts.
Hi pavanbandaru! You've successfully authenticated, but GitHub does not provide shell access.
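
If your key is protected with a passphrase, or was saved somewhere other than the default path, you can load it into ssh-agent once per session to avoid repeated prompts (optional; the path below is the default one used above):

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa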

Create a repository on your local system and commit your changes locally before pushing the code to GitHub.
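
If Git does not know your identity yet, the commit below will be rejected with an "unable to auto-detect email address" style message. You can set it once, globally (the name and email here are placeholders):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"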

[osboxes@master ansible-playbooks]$ git init
Initialized empty Git repository in /home/osboxes/ansible-playbooks/.git/
[osboxes@master ansible-playbooks]$ git add .
[osboxes@master ansible-playbooks]$ git commit -m "My First commit to ansible playbooks"
[master (root-commit) e8f579c] My First commit to ansible playbooks
 15 files changed, 236 insertions(+)
 create mode 100644 addusers.yml
 create mode 100644 apache-install.yml
 create mode 100644 create_dir.yml
 create mode 100644 dict.yml
 create mode 100644 install-packages.yml
 create mode 100644 install-packages_1.yml
 create mode 100644 inventory-loops.yml
 create mode 100644 inventory.txt
 create mode 100644 iterating-loops.yml
 create mode 100644 limit-output.yml
 create mode 100644 linux-user.yml
 create mode 100644 mynewplaybook
 create mode 100644 new_inventory.txt
 create mode 100644 password.txt
 create mode 100644 register.yml
[osboxes@master ansible-playbooks]$ git status
On branch master
nothing to commit, working tree clean

Create a new repository on GitHub, then click Code -> SSH and copy the repository URL.


Go to your Linux machine and execute the following commands

[osboxes@master ansible-playbooks]$ git remote add origin git@github.com:pavanbandaru/ansible-playbooks.git

[osboxes@master ansible-playbooks]$ git remote -v
origin  git@github.com:pavanbandaru/ansible-playbooks.git (fetch)
origin  git@github.com:pavanbandaru/ansible-playbooks.git (push)

[osboxes@master ansible-playbooks]$ git push -u origin master

Warning: Permanently added the RSA host key for IP address '140.82.112.4' to the list of known hosts.
Enumerating objects: 17, done.
Counting objects: 100% (17/17), done.
Compressing objects: 100% (15/15), done.
Writing objects: 100% (17/17), 3.34 KiB | 285.00 KiB/s, done.
Total 17 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (2/2), done.
remote:
remote: Create a pull request for 'master' on GitHub by visiting:
remote:      https://github.com/pavanbandaru/ansible-playbooks/pull/new/master
remote:
To github.com:pavanbandaru/ansible-playbooks.git
 * [new branch]      master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
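
The -u (--set-upstream) option also links the local master branch to origin/master, so later pushes and pulls no longer need the remote and branch spelled out:

git branch -vv   # shows which remote branch each local branch tracks
git push         # now defaults to origin master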


Go to GitHub and verify the changes.

Make some changes to the local repository and push them to the remote repository. (I have modified the inventory-loops.yml file and will push this change to the remote repository.)

Check the status

[osboxes@master ansible-playbooks]$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   inventory-loops.yml

no changes added to commit (use "git add" and/or "git commit -a")

Check the difference

[osboxes@master ansible-playbooks]$ git diff
diff --git a/inventory-loops.yml b/inventory-loops.yml
index dba93c1..d50904f 100644
--- a/inventory-loops.yml
+++ b/inventory-loops.yml
@@ -1,5 +1,5 @@
 ---
-- name:  looping over inventory
+- name:  looping over inventory example
   hosts: all
   become: true
   become_user: root
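
Note that a plain git diff only shows changes that have not been staged yet. Once a file has been added to the index, you can review it with:

git diff --staged   # same as --cached; compares the index against the last commit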

Stage and commit the changes in one step (the -a flag automatically stages files that are already tracked)

[osboxes@master ansible-playbooks]$ git commit -am "updated inventory-loops.yml"
[master 10207db] updated inventory-loops.yml
 1 file changed, 1 insertion(+), 1 deletion(-)

As a best practice, pull from the remote repository before pushing. Other people may have pushed changes to the remote in the meantime, so pulling first lets you resolve any conflicts locally before your push.

[osboxes@master ansible-playbooks]$ git pull origin master
warning: Pulling without specifying how to reconcile divergent branches is
discouraged. You can squelch this message by running one of the following
commands sometime before your next pull:

  git config pull.rebase false  # merge (the default strategy)
  git config pull.rebase true   # rebase
  git config pull.ff only       # fast-forward only

You can replace "git config" with "git config --global" to set a default
preference for all repositories. You can also pass --rebase, --no-rebase,
or --ff-only on the command line to override the configured default per
invocation.

Warning: Permanently added the RSA host key for IP address '140.82.114.3' to the list of known hosts.
From github.com:pavanbandaru/ansible-playbooks
 * branch            master     -> FETCH_HEAD
Already up to date.
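
As the warning above mentions, you can also rebase instead of merge when pulling, which keeps the local history linear. One way to do that for a single pull:

git pull --rebase origin master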

Now push the changes to the remote repository

[osboxes@master ansible-playbooks]$ git push origin master
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 314 bytes | 314.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:pavanbandaru/ansible-playbooks.git
   e8f579c..10207db  master -> master

We can now see the changes on the remote repository.