OpenShift v3 (Mega Tutorial) – Part 2 – Ansible

OpenShift v3 – Ansible

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.

Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time.

It uses no agents and no additional custom security infrastructure, so it’s easy to deploy – and most importantly, it uses a very simple language (YAML, in the form of Ansible Playbooks) that allows you to describe your automation jobs in a way that approaches plain English.

On this page, we’ll give you a really quick overview so you can see things in context. For more detail, hop over to docs.ansible.com.

 

Ansible, first contact

Ansible is a tool that typically uses an SSH connection between the manager host (the one with Ansible installed) and the managed hosts to run deployment and management tasks. It has several modules that define tasks and their attributes. To avoid running long chains of commands by hand, it uses “recipes” called playbooks in which the tasks are written down. At the beginning of every playbook we have some options that determine how and where the tasks will run. Playbooks are written in a language called YAML (a recursive acronym: “YAML Ain’t Markup Language”). Ansible is also idempotent: if a task has already been done and we run the playbook again, nothing changes, because the desired state is already in place.

Ansible needs an inventory file. This file defines the hosts involved in an Ansible run, and they can be separated into groups. The default inventory file for Ansible is /etc/ansible/hosts.

This is an example of an inventory file I am going to create in /tmp/inventory:

[master]
master.osc.test

[nodes]
node-1.osc.test
node-2.osc.test

[all:children]
master
nodes

With that configuration in place we can use the ping module, which connects to the hosts and verifies both that the connection works and that Python is available. If everything goes well, it returns a pong:

[root@master tmp]# ansible -i inventory all -m ping
master.osc.test | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
node-1.osc.test | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
node-2.osc.test | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
[root@master tmp]# ansible -i inventory nodes -m ping
node-1.osc.test | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
node-2.osc.test | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}

Ansible provides many modules for many different tasks: the yum module manages packages on Red Hat systems, hostname changes the hostname of a system, and lineinfile ensures a line exists inside a file. There are lots more modules available, and it is even possible to create custom ones. You can find more information in the Ansible documentation.
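To get a feel for what a playbook looks like, here is a minimal sketch of mine tying a couple of those modules together. It is purely illustrative: the package name, the motd line and the path /tmp/example.yml are my own examples, not part of this setup.

---
# /tmp/example.yml - minimal illustrative playbook
- hosts: nodes
  tasks:
    # yum module: make sure a package is present
    - name: Ensure chrony is installed
      yum:
        name: chrony
        state: present

    # lineinfile module: make sure a line exists in a file
    - name: Add a notice to /etc/motd
      lineinfile:
        dest: /etc/motd
        line: "This host is managed by Ansible"

Running it with ansible-playbook -i /tmp/inventory /tmp/example.yml reports both tasks as changed the first time and as ok on every following run – that is the idempotency mentioned earlier.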

Preparing NFS for registry

As I want to use a persistent registry, I am going to create a localhost-only NFS server on the Master, exporting the second drive.

First of all, install the NFS utilities:

yum install -y nfs-utils

Now I am going to create a partition, format it and mount it on /exports/registry:

cat <<EOF |fdisk /dev/vdb
n
p
1


w
EOF
mkdir -p /exports/registry
mkfs.ext4 /dev/vdb1
echo -e "/dev/vdb1\t/exports/registry\text4\tdefaults\t1\t2" >> /etc/fstab
mount -a
chown nfsnobody:nfsnobody /exports/registry
chmod 0770 /exports/registry
cat <<EOF > /etc/exports
/exports/registry 127.0.0.1(rw,sync)
EOF
systemctl start nfs-server
systemctl enable nfs-server
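Since the theme of this post is Ansible, the same preparation could also be expressed as a playbook. This is just a sketch of mine, assuming /dev/vdb1 has already been partitioned and formatted as above; it is not part of the OpenShift installer:

---
# Illustrative equivalent of the shell steps above.
# Assumes /dev/vdb1 already carries an ext4 filesystem.
- hosts: master
  tasks:
    - name: Install NFS utilities
      yum:
        name: nfs-utils
        state: present

    - name: Mount the export and record it in /etc/fstab
      mount:
        name: /exports/registry
        src: /dev/vdb1
        fstype: ext4
        state: mounted

    - name: Set ownership and permissions on the export
      file:
        path: /exports/registry
        owner: nfsnobody
        group: nfsnobody
        mode: "0770"
        state: directory

    - name: Export the directory to localhost only
      lineinfile:
        dest: /etc/exports
        line: "/exports/registry 127.0.0.1(rw,sync)"

    - name: Start and enable the NFS server
      service:
        name: nfs-server
        state: started
        enabled: yes

Either way, once the NFS server is up, showmount -e 127.0.0.1 should list /exports/registry.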

Installing OpenShift

To ease installation, OpenShift relies on Ansible: it ships several roles (groups of tasks) that are referenced by preconfigured playbooks. These playbooks use an inventory file, which we are going to create, listing the hosts on which OpenShift will be installed along with several options that modify how the installation runs.
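Those preconfigured playbooks ship in the openshift-ansible repository, which I am assuming is already available on the Master. If it is not, something like this will fetch it (checking out the release branch matching your OpenShift version is advisable):

git clone https://github.com/openshift/openshift-ansible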

So the first step to install OpenShift is to create the inventory file. We can modify Ansible’s default inventory file, /etc/ansible/hosts, or create a different one and use that. The latter is what I will do.

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# OpenShift deployment type origin or enterprise
deployment_type=origin

# OpenShift applications subdomain
openshift_master_default_subdomain=apps.osc.test

# OpenShift persistent registry over NFS
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=127.0.0.1
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=8Gi

# host group for masters
[masters]
master.osc.test

# host group for nodes, includes region info
[nodes]
master.osc.test openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=True
node-1.osc.test openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
node-2.osc.test openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

The [OSEv3:children] block is a group made up of the [masters] and [nodes] blocks, which contain the hosts OpenShift needs. The [OSEv3:vars] block contains the variables Ansible will use in the playbooks. I have added the variable openshift_master_default_subdomain to set the value the “router” deployed on the Master uses to route requests to the corresponding applications. I also defined variables to configure the registry to use persistent storage. I have set openshift_schedulable=True so the router and registry pods are deployed automatically; after installation we will set the Master back to unschedulable. By default, any node inside the [masters] block is set as unschedulable. The Master does not need to be in the [nodes] block, but it must be included there if we want to access the pods’ SDN (Software Defined Network).

There are some inventory examples in openshift-ansible/inventory/byo.
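Before launching the installer, it is worth confirming that Ansible can reach every host through this new inventory (here I am assuming the file was saved as openshift-inventory, the name used below):

ansible -i openshift-inventory OSEv3 -m ping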

Let’s go:

ansible-playbook -i openshift-inventory openshift-ansible/playbooks/byo/config.yml

After a while we will see something like this:

PLAY RECAP *********************************************************************
localhost : ok=12 changed=6 unreachable=0 failed=0 
master.osc.test : ok=414 changed=73 unreachable=0 failed=0 
node-1.osc.test : ok=144 changed=26 unreachable=0 failed=0 
node-2.osc.test : ok=144 changed=27 unreachable=0 failed=0 

Now we are going to verify that the router and registry pods are running:

[root@master ~]# oc get pods
NAME                     READY   STATUS    RESTARTS    AGE
docker-registry-2-44zl6  1/1     Running   0           17m
router-1-t46tk           1/1     Running   0           17m

Set the Master as unschedulable:

oadm manage-node master.osc.test --schedulable=false

Let’s check that the nodes are ready:

oc get nodes

If the three nodes are listed as ready, everything is set to start playing with our OpenShift cluster.

The next post will show the command-line client, common commands, the configuration files for the master and the nodes, and we will configure Web Console access. See you.
