OpenShift Origin
It’s time to configure the first node.
After configuring the broker system, we’ll spend the rest of the tutorial configuring the node.
So, let’s go.
Assuming we have the following network schema:
- Broker: IP=10.10.10.2/24 (via DHCP)
- Node1: IP=10.10.10.101/24 (via DHCP)
Let’s create the node1 DNS entry, and for this we’ll use the tools available on the broker system.
oo-register-dns -s 10.10.10.2 -h node1 -d dmartin.es -n 10.10.10.101 -k /var/named/dmartin.es.key
Let’s take a look at the options:
- -s defines the DNS server address
- -h defines the hostname to register
- -d defines the domain in which it is recorded
- -n defines the host IP address
- -k defines the key for DNS authentication
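To confirm the record was created, we can query the broker’s DNS server directly; a minimal check, assuming the addresses from the schema above:

# Ask the broker's DNS server for the new record
dig @10.10.10.2 node1.dmartin.es +short
# should print 10.10.10.101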
Now we’ll transfer the SSH key so that the broker can send files to the nodes, and verify that we can connect.
ssh-copy-id -i /etc/openshift/rsync_id_rsa.pub node1.dmartin.es
ssh -i /etc/openshift/rsync_id_rsa node1.dmartin.es date
Node1 Configuration
Now it’s the turn of the node1 network.
If we have a static configuration, we only need to point to our DNS server.
I’ll show how to set up a DHCP configuration, and how to prevent the DHCP server from handing us a different DNS server and a different domain.
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eth0
NAME="eth0"
TYPE="Ethernet"
BOOTPROTO="dhcp"
PEERDNS="no"
DNS1="10.10.10.2"
EOF
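After writing the file, restarting the network service applies the change; a quick sanity check, assuming the initscripts write DNS1 into resolv.conf:

# Apply the new configuration and confirm the broker is our resolver
systemctl restart network
grep nameserver /etc/resolv.conf
# should show: nameserver 10.10.10.2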
Finally, let’s set the hostname.
hostnamectl set-hostname node1.dmartin.es
cat /etc/hostname
MCollective
Installing the required package
yum install -y openshift-origin-msg-node-mcollective
After that, we’ll modify the MCollective config file:
cat <<EOF > /etc/mcollective/server.cfg
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/openshift/node/mcollective.log
loglevel = debug
daemonize = 0
direct_addressing = 1
registerinterval = 30

# Plugins
securityprovider = psk
plugin.psk = unset
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = broker.dmartin.es
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
EOF
To enable MCollective as a Fedora 19 service with systemd, we need to create the service file, reload the systemd daemon, start the service, and enable it at boot.
cat <<EOF > /usr/lib/systemd/system/mcollective.service
[Unit]
Description=The Marionette Collective
After=network.target

[Service]
Type=simple
StandardOutput=syslog
StandardError=syslog
ExecStart=/usr/sbin/mcollectived --config=/etc/mcollective/server.cfg --pidfile=/var/run/mcollective.pid
ExecReload=/bin/kill -USR1 \$MAINPID
PIDFile=/var/run/mcollective.pid
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl --system daemon-reload
systemctl enable mcollective.service
systemctl start mcollective.service
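A quick status check confirms the daemon came up cleanly:

# The service should report "active (running)"
systemctl status mcollective.service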
Now MCollective should be working, and we can test it from the broker system by running the mco ping command, which returns all available nodes (we have only one node configured, so we’ll see only one).
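For reference, a successful run from the broker looks roughly like this (the timing is illustrative and the exact format may vary between MCollective versions):

mco ping

node1.dmartin.es                          time=85.21 ms

---- ping statistics ----
1 replies max: 85.21 min: 85.21 avg: 85.21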
Installing packages
We need to install some packages on the node system:
yum install -y rubygem-openshift-origin-node rubygem-passenger-native openshift-origin-port-proxy openshift-origin-node-util rubygem-openshift-origin-container-selinux httpd
Let’s configure the routing for HTTP/HTTPS requests and enable access to the cartridge information.
echo "ServerName node1.dmartin.es" > /etc/httpd/conf.d/000001_openshift_origin_node_servername.conf cat << EOF > /etc/httpd/conf.d/cartridge_files.confRequire all granted EOF
Now we’ll install the Apache Virtual Hosts and NodeJS WebSockets front-ends.
yum install -y rubygem-openshift-origin-frontend-apache-mod-rewrite
yum install -y openshift-origin-node-proxy rubygem-openshift-origin-frontend-nodejs-websocket
Finally, we’ll start the needed services and configure the firewall.
systemctl start openshift-node-web-proxy
systemctl start httpd
systemctl enable openshift-node-web-proxy
systemctl enable httpd

iptables -N rhc-app-comm
iptables -I INPUT 4 -m tcp -p tcp --dport 35531:65535 -m state --state NEW -j ACCEPT
iptables -I INPUT 5 -j rhc-app-comm
iptables -I OUTPUT 1 -j rhc-app-comm
/usr/libexec/iptables/iptables.init save
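To double-check that the new chain and the port-range rule are in place:

# List the INPUT rules in save format and filter for our additions
iptables -S INPUT | grep -E 'rhc-app-comm|35531'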
Installing the cartridges
One of the most awaited parts of the tutorial, no doubt: it’s time to define which cartridges we’ll have available on the node. We can list the cartridges available to install by running:
yum list openshift-origin-cartridge\*
We could install them all ;-P but of course we should choose only those we want.
yum install -y openshift-origin-cartridge\*
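If you’d rather not install everything, a selective install would look something like this (the package names here are illustrative; pick the ones you want from the yum list output above):

yum install -y openshift-origin-cartridge-php openshift-origin-cartridge-mysql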
Once installed, let’s make the cartridges available to node1.
/usr/sbin/oo-admin-cartridge --recursive -a install -s /usr/libexec/openshift/cartridges/
Firewall and services
The last step of this post will be the firewall configuration and enabling the needed services at boot.
lokkit --service=ssh
lokkit --service=https
lokkit --service=http
lokkit --port=8000:tcp
lokkit --port=8443:tcp

systemctl enable network.service
systemctl enable sshd.service
systemctl enable oddjobd.service
And that’s it for this post. In the next one we’ll configure the OS usage limits, Git over SSH, and more about SELinux.
You can access it here.
See you soon.