Build Your Own PaaS on RHEL 6
Published: 2019-05-10



Introduction

This document describes how to create a private PaaS service using OpenShift. It makes a number of simplifying assumptions about the environment of the service. In particular, we will assume that the underlying platform is Red Hat Enterprise Linux 6.3 with Ruby 1.8. One may have to adjust one's configuration for a different environment.

The document is organized into three parts. The first part will provide the reader with an overview of the components of an OpenShift installation, how those components are organized, and how they communicate. In particular, the reader will learn about the broker and node hosts and the software that runs on these hosts. The second part will take the reader, step-by-step, through the process of installing and configuring a broker and one or more nodes. Finally, the last part will explain and demonstrate the operation of the new installation using the rhc tool.

Part I: Background

There are several components that are involved in the OpenShift PaaS installation. This section will cover the primary components in addition to the various configurations that will be accomplished throughout this guide. All the diagrams you see in the subsequent sections will depict elements described in the following legend.

Organization and Components of an OpenShift PaaS

An OpenShift PaaS installation comprises two logical types of hosts: a broker and one or more nodes. The broker handles the creation and management of user applications, including authenticating users via an authentication service and communication with appropriate nodes. The nodes run the user applications in contained environments called gears. The broker queries and controls nodes using a messaging service.

Communication Mechanisms

Communication from any external clients (e.g. client tools or the OpenShift console) is done through the REST API which is hosted by the broker. The broker then communicates through the messaging-service component to the nodes. MCollective is utilized to facilitate querying a set of nodes as well as communicating with individual nodes in a secure manner.

State Management

The broker itself must manage various persistent data for the PaaS. To accomplish this, the broker utilizes three distinct interfaces that represent the complete state of the PaaS. The reason for the three interfaces is that each datastore is pluggable and each type of data is usually managed by a separate system. Application data is separated into the following sections:

  • State: General application state; this data is stored using MongoDB by default.
  • DNS: Dynamic DNS state; this data is handled by BIND by default.
  • Auth: User state for authentication and authorization; this state is stored using LDAP or a Kerberos KDC by default.

Redundancy

OpenShift has been designed with redundancy in mind, and each architectural component can be set up in a redundant manner. The broker applications themselves are stateless and can be set up behind a simple HTTP load balancer. The messaging tier is also stateless, and MCollective can be configured to use multiple ActiveMQ endpoints. Multiple MongoDB instances can be combined into a replica set for fault tolerance and high availability.

For simplicity the basic installation demonstrated below will not implement redundancy.

Installation Topology

This guide will focus on providing a functional installation but will not set up all components to provide redundancy. Adding redundancy at the appropriate layers will be a supplemental guide since it varies by use case. This guide will install the broker, datastores, and messaging components on the same machine instance. The node will be setup on a separate machine instance. The resulting system topology is shown in the following diagram.

Further documentation on the architecture of OpenShift is available in a separate article.

Part II: Installation and Configuration

The instructions in this section describe how to install and configure a basic OpenShift PaaS environment with a broker and one or more nodes. These instructions are intended for Linux administrators and developers with intermediate level experience. They are extremely detailed in order to demonstrate the variety of settings you may configure and where to do so.

In the following steps, it is recommended that you back up any files that you change by running (for example) cp foo foo.orig before editing the file foo.

Preliminary Information

Before proceeding with the installation and configuration, this section provides some basic information and requirements for installing OpenShift PaaS.

Supported Operating Systems

This installation relies on a current RHEL 6.x installation as its base. We recommend installing the "Basic Server" configuration for a base install, though others should work too.

These directions likely work on Enterprise Linux rebuilds such as Scientific Linux or CentOS as well; we invite users of these rebuilds to comment on any relevant differences specific to their OSes.

Hardware Requirements

Although the instructions in this document have been primarily tested on KVM virtual machines, the instructions are applicable to other environments.

Below are the hardware requirements for all hosts, whether configured as a broker or as a node. The hardware requirements are applicable for both physical and virtual environments.

  • Minimum 1 GB of memory
  • Minimum 8 GB of hard disk space
  • x86_64 architecture
  • Network connectivity

Service Parameters

In this example of a basic OpenShift installation, the broker and node are configured with the following parameters:

  • Service Domain: example.com
  • Broker IP address: dynamic (from DHCP)
  • Broker host name: broker.example.com
  • Node 0 IP address: dynamic (from DHCP)
  • Node 0 host name: node.example.com
  • Data Store Service: MongoDB
  • Authentication Service: Basic Authentication via httpd mod_auth_basic
  • DNS Service: BIND
    • IP address: dynamic (from DHCP)
    • Zone: example.com (same as Service Domain)
    • Domain Suffix: example.com (same as Service Domain)
  • Messaging Service: MCollective using ActiveMQ

All of these parameters can be customized as necessary. As detailed in the instructions, the domain name and host names can be easily modified by editing appropriate configuration files. The selection of data-store service, authentication service, and DNS server are implemented as plug-ins to the broker.

Note that while DHCP is supported and assumed in this document, dynamic re-assignment of IP addresses is not supported and may cause problems.

DNS Information

The OpenShift PaaS service publishes the host names of new applications to DNS. The DNS update service negotiates with the owner of a domain so that a sub domain can be allocated. It also establishes authentication credentials to allow automatic updates. The sample configuration uses a private DNS service to allow the OpenShift PaaS service to publish new host names without requiring access to an official DNS service. The application host names will only be visible on the OpenShift PaaS hosts and any workstation configured to use the configured DNS service, unless properly delegated by a public DNS service.

The creation of a private DNS service and establishing a delegation agreement with your IT department are outside the scope of this document. Each organization has its own policies and procedures for managing DNS services. If you want to make the OpenShift PaaS service available in any way, you will have to discuss the delegation requirements at your site with the appropriate personnel.

Kickstart / Install Script

For your convenience, a sample kickstart script for configuring a host as a broker or as a node (or as both) is available at <>. Note that you will need to alter it to at least enable your RHEL 6 subscription or yum repository during the %post script, and likely other parameters (explained in the script header) as well.

You may also extract the %post section of the kickstart script as a bash script in order to apply the steps against a pre-installed RHEL 6 image. A reboot will be required after running the script in this fashion (which kickstart would automatically do).

The steps in this document explain the actions of the script. The steps and the script are independent in the sense that you can obtain a complete broker or node host just by following the steps manually or just by running the kickstart script. For your convenience, we will point to the corresponding part of the kickstart script for each section in the steps below.

Base Operating System Configuration

The installation and configuration of the base operating system for an OpenShift PaaS deployment is quite straightforward. The sample installation detailed in this document assumes the operating system to be Red Hat Enterprise Linux 6.3 Server on Host 1 and Host 2.

It is assumed that the base operating system has been configured with RHEL Server entitlement and JBoss EAP6 for the JBoss EAP6/EWS 1.0 cartridges.

The following steps are common to both Host 1 and Host 2 and should be performed on both hosts.

Warning: disable non-RHEL repositories

Numerous OpenShift test installations have been confounded by packages being installed from third-party repositories and products like EPEL or Puppet. The differences in behavior can cause subtle and perplexing problems with the operation of OpenShift that waste a lot of troubleshooting time. Therefore, please ensure that your base system image and repositories include only packages from RHEL 6. Disable any third-party repositories during installation. Even the unsupported RHEL Optional channel should not be enabled (although this hasn't caused any known problems yet).
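One quick way to review which repositories are currently enabled before proceeding (standard yum usage, not a step from the original guide) is:

yum repolist enabled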

Setting up Time Synchronization

OpenShift requires NTP to synchronize the system and hardware clocks. This synchronization is necessary for communication between the broker and node hosts; if the clocks are too far out of synchronization, MCollective will drop messages. It is also helpful to have accurate timestamps on files and in log file entries.

On the host, use the ntpdate command to set the system clock (use whatever NTP servers are appropriate for your environment):

ntpdate clock.redhat.com

You will also want to configure ntpd via /etc/ntp.conf to keep the clock synchronized during operation.
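For example, a minimal sketch (clock.redhat.com here is just the example server used above; adjust for your environment): append a server line to /etc/ntp.conf, then enable and start ntpd:

echo "server clock.redhat.com iburst" >> /etc/ntp.conf
chkconfig ntpd on
service ntpd start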

If you get the error message "the NTP socket is in use, exiting," then ntpd is already running; however, ntpd will not correct the clock if it starts too far out of synchronization. In that case, stop the service, set the clock, and then restart the service:

service ntpd stop
ntpdate clock.redhat.com
service ntpd start

If you are installing on physical hardware, use the hwclock command to synchronize the hardware clock to the system clock. If you are running on a virtual machine, such as an Amazon EC2 instance, skip this step. Otherwise, enter the following command:

hwclock --systohc

The above steps are performed by the synchronize_clock function in the kickstart script.

Enabling Remote Administration

It may be desirable to install SSH keys for the root user so that you can interact with the hosts remotely from your personal workstation. First, ensure that root's ssh configuration directory exists and has the correct permissions on the host:

mkdir /root/.ssh
chmod 700 /root/.ssh

On your workstation, you can either use the ssh-keygen command to generate a new keypair, or use an existing public key. In either case, edit the /root/.ssh/authorized_keys file on the host and append the public key, or use the ssh-copy-id command to do the same. For example, on your local workstation, you can issue the following command:

ssh-copy-id root@10.0.0.1

Replace "10.0.0.1" with the actual IP address of the broker in the above command.

The above steps are performed by the install_ssh_keys function in the kickstart script.

Setting up Host 1 as a Broker with Related Components

This section describes how to install and configure the first OpenShift host, which will be running the Broker, MongoDB, ActiveMQ, and BIND. Each logical component is broken out into an individual section.

You should perform all of the procedures in this section after you have installed and configured the base operating system and before you start installing and configuring any node hosts.

Setting up the Required Repositories

OpenShift Origin currently relies on many packages that are not in Red Hat Enterprise Linux and must be retrieved from OpenShift repositories.

Setting up the OpenShift Infrastructure Repository

Host 1 requires packages from the OpenShift Infrastructure repository for the broker and related packages. To set up the repository:

1. Create the following file:

/etc/yum.repos.d/openshift-infrastructure.repo

2. Add the following content:

[openshift_infrastructure]
name=OpenShift Infrastructure
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15//Infrastructure/x86_64/os/
enabled=1
gpgcheck=0

3. Save and close the file.

The above steps are performed by the configure_broker_repo function in the kickstart script.

RUNNING YUM UPDATE

To update all of the base packages needed for these instructions, run the following command.

yum update

It is important to do this to ensure at least the selinux-policy package is updated, as OpenShift relies on a recent update to this package.
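If you want to confirm which selinux-policy version you ended up with after the update, you can query the RPM database:

rpm -q selinux-policy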

In the kickstart script, this step is performed after configuring the repositories.

Setting up BIND / DNS

In this section, we will configure BIND on the broker. This is really only for the purpose of getting going easily, and is probably not the configuration you will want in production. Skip this section if you have alternative arrangements for handling DNS updates from OpenShift.

If you wish to have OpenShift update an existing BIND server in your infrastructure, it should be fairly apparent from the ensuing setup how to enable that. If you are using something different, the DNS update plugin can be swapped out; Red Hat does not currently distribute any alternative plugins, but supported customers can engage our professional services, or an experienced administrator can just use the BIND plugin code as a model for writing an alternative plugin.

INSTALLING

To install all of the packages needed for these instructions, run the following command.

yum install bind bind-utils
CONFIGURING

We will be referring frequently to the domain name with which we are configuring this OpenShift installation, so let us set the $domain environment variable for easy reference:

domain=example.com

Note: You may replace "example.com" with the domain name you have chosen for this installation of OpenShift.

Next, set the $keyfile environment variable to contain the filename for a new DNSSEC key for our domain (we will create this key shortly):

keyfile=/var/named/${domain}.key

We will use the dnssec-keygen tool to generate the new DNSSEC key for the domain. Run the following commands to delete any old keys and generate a new key:

rm -vf /var/named/K${domain}*
pushd /var/named
dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${domain}
KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
popd

Notice that we have set the $KEY environment variable to hold the newly generated key. We will use this key in a later step.

Next, we must ensure we have a key for the broker to communicate with BIND. We use the rndc-confgen command to generate the appropriate configuration files for rndc, which is the tool that the broker will use to perform this communication.

rndc-confgen -a -r /dev/urandom

We must ensure that the ownership, permissions, and SELinux context are set appropriately for this new key:

restorecon -v /etc/rndc.* /etc/named.*
chown -v root:named /etc/rndc.key
chmod -v 640 /etc/rndc.key

We are configuring the local BIND instance so that the broker and nodes will be able to resolve internal hostnames. However, the broker and node will still need to be able to handle requests to resolve hostnames on the broader Internet, so we configure BIND to forward such requests to regular DNS servers. To this end, create the file /var/named/forwarders.conf with the following content:

forwarders { 8.8.8.8; 8.8.4.4; } ;

Note: Change the above list of forwarders as appropriate to comply with your local network's requirements.

Again, we must ensure that the permissions and SELinux context are set appropriately for the new forwarders.conf file:

restorecon -v /var/named/forwarders.conf
chmod -v 755 /var/named/forwarders.conf

We need to configure BIND to perform resolution for hostnames under the domain we are using for our OpenShift installation. To that end, we must create a database for the domain. The dns-bind plug-in includes an example database, which we will use as a template. Delete and create the /var/named/dynamic directory:

rm -rvf /var/named/dynamic
mkdir -vp /var/named/dynamic

Now, create an initial named database in a new file named /var/named/dynamic/${domain}.db (where ${domain} is your chosen domain) using the following command:

cat <<EOF > /var/named/dynamic/${domain}.db
\$ORIGIN .
\$TTL 1 ; 1 seconds (for testing only)
${domain}    IN SOA ns1.${domain}. hostmaster.${domain}. (
                2011112904 ; serial
                60         ; refresh (1 minute)
                15         ; retry (15 seconds)
                1800       ; expire (30 minutes)
                10         ; minimum (10 seconds)
                )
             NS ns1.${domain}.
\$ORIGIN ${domain}.
ns1          A 127.0.0.1
EOF

Next, we install the DNSSEC key for our domain. Create the file /var/named/${domain}.key (where ${domain} is your chosen domain) using the following command:

cat <<EOF > /var/named/${domain}.key
key ${domain} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF

We need to set the permissions and SELinux contexts appropriately:

chown -Rv named:named /var/named
restorecon -rv /var/named

We must also create a new /etc/named.conf file, as follows:

cat <<EOF > /etc/named.conf
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
    listen-on port 53 { any; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };
    recursion yes;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    // set forwarding to the next nearest server (from DHCP response)
    forward only;
    include "forwarders.conf";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

// use the default rndc key
include "/etc/rndc.key";

controls {
    inet 127.0.0.1 port 953
    allow { 127.0.0.1; } keys { "rndc-key"; };
};

include "/etc/named.rfc1912.zones";

include "${domain}.key";

zone "${domain}" IN {
    type master;
    file "dynamic/${domain}.db";
    allow-update { key ${domain} ; } ;
};
EOF

Set permissions and SELinux contexts appropriately:

chown -v root:named /etc/named.conf
restorecon /etc/named.conf

Configuring Host 1 Name Resolution

To use the local named service to resolve host names in your domain, you now need to update the host's /etc/resolv.conf file. You also need to configure the firewall and start the named service in order to serve local and remote DNS requests for the domain.

To that end, edit /etc/resolv.conf and put the following at the top of the file, changing "10.0.0.1" to the IP address of Host 1:

nameserver 10.0.0.1

Open the firewall rules and make the service restart on reboot with:

lokkit --service=dns
chkconfig named on

Use the service command to start BIND ("named") so we can perform some updates immediately:

service named start

Tell BIND about the broker using the nsupdate command to open an interactive session. "server," "update," and "send" are commands to the nsupdate command. CTRL+D closes the interactive session.

Note: Replace "broker.example.com" with the actual FQDN of the broker, and replace "10.0.0.1" with the actual IP address of the broker.

nsupdate -k ${keyfile}
server 127.0.0.1
update delete broker.example.com A
update add broker.example.com 180 A 10.0.0.1
send
quit

The above steps are performed by the configure_named and update_resolv_conf functions in the kickstart script.

VERIFYING

Verify that BIND is configured correctly to resolve the broker's hostname:

dig @127.0.0.1 broker.example.com

Verify that BIND properly forwards requests for other hostnames:

dig @127.0.0.1 icann.org a

Verify that the broker is using the local BIND instance by running the following command on the broker:

dig broker.example.com

Setting up DHCP and Hostname

In this section, we will perform some system-wide network configuration on the broker. No new packages need to be installed for this step, so we will go right to configuration.

NB: We will assume in this section that the broker is using the eth0 network interface. Substitute the appropriate interface in the filenames in the instructions below.

CONFIGURING

First, we will configure the DHCP client on the broker. Modify /etc/dhcp/dhclient-eth0.conf to use the local BIND instance and assume the appropriate hostname and domain name. Edit dhclient-eth0.conf and append the following lines to the end of the file:

prepend domain-name-servers 10.0.0.1;
supersede host-name "broker";
supersede domain-name "example.com";

NB: Replace "10.0.0.1" with the actual IP address of the broker, and replace "broker" and "example.com" with the appropriate hostname and domain name.

Second, we need to set the hostname in /etc/sysconfig/network and set it immediately using the hostname command. Edit the network file. If the file contains a line beginning with "HOSTNAME=", delete the line. Add the following line to the file:

HOSTNAME=broker.example.com

Run the hostname command:

hostname broker.example.com

NB: Replace "broker.example.com" with the actual FQDN of the broker.

The above steps are performed by the configure_network function in the kickstart script.

VERIFYING

Run the hostname command to verify the hostname of Host 1.

hostname

Setting up MongoDB

MongoDB requires several minor configuration changes to prepare it for use with OpenShift. These include setting up authentication, specifying the default database size, and creating an administrative user.

INSTALLING

To install all of the packages needed for MongoDB, run the following command:

yum install mongodb-server
CONFIGURING

To configure MongoDB to require authentication:

  1. Open the /etc/mongodb.conf file.
  2. Add the following line anywhere in the file:
    auth = true
  3. If there are any other lines beginning with "auth =", delete those lines.
  4. Save and close the file.

To configure the MongoDB default database size:

  1. Open the /etc/mongodb.conf file.
  2. Add the following line anywhere in the file:
    smallfiles = true
  3. If there are any other lines beginning with "smallfiles =", delete those lines.
  4. Save and close the file.
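As a scripted alternative to the manual edits above (a sketch using sed in the spirit of other edits in this guide; it is not taken from the kickstart script):

sed -i -e '/^auth =/d' -e '/^smallfiles =/d' /etc/mongodb.conf
echo "auth = true" >> /etc/mongodb.conf
echo "smallfiles = true" >> /etc/mongodb.conf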

Set the mongod service to start on boot with:

chkconfig mongod on

Now start the mongo daemon:

service mongod start

The above steps are performed by the configure_datastore function in the kickstart script.

VERIFYING

Run the mongo command to ensure that you can connect to the MongoDB database:

mongo

The command starts an interactive session with the database. Press CTRL+D (the Control key with the "d" key) to leave this session and return to the command shell.

NOTE: The init script in version 2.0.2-1.el6_3 of MongoDB does not function correctly. The start and restart actions return before the daemon is ready to accept connections, and MongoDB may take time to initialize the journal. This initialization may take several minutes. If you receive "Error: couldn't connect to server 127.0.0.1" when you run the mongo command, wait and try again. You can also check the /var/log/mongodb/mongodb.log file. When MongoDB is ready, it will write "waiting for connections" in the log file. The following steps require that a database connection be established.
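One simple way to watch for that message is standard tail usage (press CTRL+C to stop watching):

tail -f /var/log/mongodb/mongodb.log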

Setting up ActiveMQ

You need to install and configure ActiveMQ which will be used as the messaging platform to aid in communication between the broker and node hosts.

INSTALLING

To install the packages needed for ActiveMQ, run the following command:

yum install activemq
CONFIGURING

You can configure ActiveMQ by editing the /etc/activemq/activemq.xml file. Create the file using the following command:

cat <<EOF > /etc/activemq/activemq.xml
[... the full activemq.xml content, which sets the broker name to broker.example.com and defines the mcollective user and passwords, is not reproduced in this copy of the article ...]
file:\${activemq.conf}/credentials.properties
EOF

Note: Replace "broker.example.com" with the actual FQDN of the broker. You are also encouraged to substitute your own passwords (and use the same in the MCollective configuration that follows).

Open the firewall rules and make the service restart on reboot with:

lokkit --port=61613:tcp
chkconfig activemq on

Now start the activemq service with:

service activemq start

The above steps are performed by the configure_activemq function in the kickstart script.

As installed, the ActiveMQ monitoring console web service does not require authentication and will answer on any IP interface. It is important to limit access to the ActiveMQ console for security.

Two changes to the /etc/activemq/jetty.xml file enable authentication and restrict the console to the localhost interface:

sed -i -e '/name="authenticate"/s/false/true/' /etc/activemq/jetty.xml
sed -i -e '/name="port"/a\ <property name="host" value="127.0.0.1" />' /etc/activemq/jetty.xml

The admin user definition is set in the /etc/activemq/jetty-realm.properties file. The last line contains the default account for the admin user. It has the form:

# username: password [,rolename ...]

You need to change the password field from the default 'admin' to a password you choose.

sed -i -e '/admin:/s/admin,/badpassword,/' /etc/activemq/jetty-realm.properties

In later test examples we'll use badpassword. You need to substitute your password.

VERIFYING

Once ActiveMQ is started, you should be able to verify that it is listening for messages on the OpenShift topics. It can take 60 seconds or more for the activemq daemon to finish initializing and start answering queries. First verify that authentication is working:

curl --head --user admin:badpassword http://localhost:8161/admin/xml/topics.jsp

You should see a 200 OK message followed by the remaining header lines. If you see a "401 Unauthorized" message, then your username/password pair is incorrect.

Next check that the service is returning a topic list.

curl --user admin:badpassword --silent http://localhost:8161/admin/xml/topics.jsp | grep -A 4 topic

You should see the open and close tags for the topic list.

Right now there are no topics to see.

It can take 60 seconds or more for the activemq daemon to finish initializing and start answering queries. If you don't get any results, try the curl command again without the --silent argument and the grep filter:

curl --user admin:badpassword http://localhost:8161/admin/xml/topics.jsp

The message below means either that the ActiveMQ service is not running or has not finished initializing.

curl: (7) couldn't connect to host

If this persists longer than 60 seconds and the ActiveMQ daemon is running you can check the ActiveMQ log file:

more /var/log/activemq/activemq.log

Setting up MCollective

The broker application on Host 1 will use MCollective to communicate with the node hosts. MCollective, in turn, relies on Apache ActiveMQ.

INSTALLING

To install all of the packages needed for MCollective, run the following command:

yum install mcollective-client
CONFIGURING

To configure the MCollective client, delete the contents of the /etc/mcollective/client.cfg file and replace them with the following:

topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective-client.log
loglevel = debug

# Plugins
securityprovider = psk
plugin.psk = unset
connector = stomp
plugin.stomp.host = localhost
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

Note: Use the same password for the mcollective user that you specified in activemq.xml.

It's also important to change the group ownership so that the Broker's Apache can write to the log file for MCollective:

chown root:apache /var/log/mcollective-client.log

The above steps are performed by the configure_mcollective_for_activemq_on_broker function in the kickstart script.

Because we are running the ActiveMQ service on Host 1, we can configure mcollective to connect to localhost. The plugin.stomp.host setting must be modified if you are configuring the ActiveMQ service to run on another host (or pool of hosts).

Setting up the Broker Application

In this section, we will configure the broker Rails application that provides the REST API to the client tools.

INSTALLING

To install all of the packages needed for these instructions, run the following command:

yum install openshift-origin-broker openshift-origin-broker-util rubygem-openshift-origin-auth-remote-user rubygem-openshift-origin-msg-broker-mcollective rubygem-openshift-origin-dns-bind

This step is performed by the install_broker_pkgs function in the kickstart script.

Configure Broker Proxy ServerName

The default value of ServerName is localhost and should be changed to accurately reflect your broker hostname.

sed -i -e "s/ServerName .*$/ServerName `hostname`/" /etc/httpd/conf.d/000000_openshift_origin_broker_proxy.conf

Setting up Required Services

You also need to configure all of the required system services to start when you reboot Host 1. Run the following commands:

chkconfig httpd on
chkconfig network on
chkconfig ntpd on
chkconfig sshd on

The following commands configure the firewall to allow access to all of these services.

lokkit --service=ssh
lokkit --service=https
lokkit --service=http

The above steps are performed by the enable_services_on_broker function in the kickstart script.

CONFIGURING SELINUX

OpenShift relies heavily on SELinux to maintain isolation between applications and to protect OpenShift from malicious applications, and also from applications that contain bugs. To configure SELinux, you need to install the appropriate policy, enable the required permissions (for example, for httpd and named), and label various files appropriately.

Setting Standard SELinux Boolean Variables

Use the setsebool command to set a number of Boolean variables that are provided by the standard SELinux policy:

setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_run_stickshift=on named_write_master_zones=on allow_ypbind=on

The following table explains these Boolean variables:

Boolean Variable            Purpose
httpd_unified               Allow the broker to write files in the "http" file context.
httpd_can_network_connect   Allow the broker application to access the network.
httpd_can_network_relay     Allow the broker application to access the network.
httpd_run_stickshift        Enable passenger-related permissions.
named_write_master_zones    Allow the broker application to configure DNS.
allow_ypbind                Allow the broker application to use ypbind to communicate directly with the name server.

You now need to use the fixfiles and restorecon commands to relabel a number of files and directories with the correct SELinux contexts:

fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/rubygems/gems/passenger-*

The above steps are performed by the configure_selinux_policy_on_broker function in the kickstart script.

CONFIGURING DOMAIN

We must ensure that the configuration for the OpenShift broker is modified to reflect your choice of domain name for this OpenShift installation. You can hand-edit /etc/openshift/broker.conf and modify the CLOUD_DOMAIN setting, or:

sed -i -e "s/^CLOUD_DOMAIN=.*$/CLOUD_DOMAIN=${domain}/" /etc/openshift/broker.conf

This step is performed by the configure_controller function in the kickstart script.

CONFIGURING PLUGINS

We must configure OpenShift to enable required plug-ins for authentication, DNS, and messaging. These plug-ins are configured by editing files under /etc/openshift/plugins.d. The presence of a file foo.conf in this directory enables the plug-in named foo, and the contents of foo.conf contain configuration settings in the form of lines containing key=value pairs. In some cases, we need only copy an example configuration in place. In the case of the DNS plug-in, we need to perform some configuration.

All of the following steps will involve only files in /etc/openshift/plugins.d, so change to that directory:

cd /etc/openshift/plugins.d

Enable the remote-user auth plug-in by copying the example configuration file as follows:

cp openshift-origin-auth-remote-user.conf.example openshift-origin-auth-remote-user.conf

Enable the mcollective messaging plug-in by copying the example configuration file as follows:

cp openshift-origin-msg-broker-mcollective.conf.example openshift-origin-msg-broker-mcollective.conf

Configure the dns-bind plug-in as follows:

cat <<EOF > openshift-origin-dns-bind.conf
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="${domain}"
BIND_KEYVALUE="${KEY}"
BIND_ZONE="${domain}"
EOF

Note: Make sure that ${domain} and ${KEY} are set appropriately (see the section on configuring BIND).

The dns-bind plug-in requires that an additional SELinux policy be compiled and installed using the make and semodule commands:

pushd /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/ && make -f /usr/share/selinux/devel/Makefile ; popd
semodule -i /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/dhcpnamedforward.pp

The above steps are performed by the configure_auth_plugin, configure_messaging_plugin, and configure_dns_plugin functions in the kickstart script.

CONFIGURING AUTHENTICATION

With the remote-user authentication plug-in, the OpenShift broker service relies on the httpd to handle authentication and pass on the authenticated user (the "remote user"). Thus it is necessary to configure authentication in httpd. In a production environment, you may configure httpd to use LDAP, Kerberos, or other industrial-strength technology. For this tutorial, we will take a more modest approach and configure authentication using Basic Auth and an htaccess file.

First, we copy the example httpd configuration file into place:

cp /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user-basic.conf.sample /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf

The above configuration file configures httpd to use /etc/openshift/htpasswd for its password file. Use the following command to create this file with a single authorized user, "username":

htpasswd -c /etc/openshift/htpasswd username

The above steps are performed by the configure_httpd_auth function in the kickstart script.
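To add further accounts later, run htpasswd again without the -c flag (which would otherwise recreate the file); the user name below is just a placeholder:

htpasswd /etc/openshift/htpasswd anotheruser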

CONFIGURING INTER-HOST ACCESS KEYS

We must generate a broker access key to be used by Jenkins and other optional services:

openssl genrsa -out /etc/openshift/server_priv.pem 2048
openssl rsa -in /etc/openshift/server_priv.pem -pubout > /etc/openshift/server_pub.pem

We also need to generate a key pair for the broker to use to move gears between nodes:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/rsync_id_rsa
cp ~/.ssh/rsync_id_rsa* /etc/openshift/

The above steps are performed by the configure_access_keys_on_broker function in the kickstart script.

CONFIGURING INITIAL USER ACCOUNTS

You need to create an account in Mongo for the broker to use. From the broker's command shell, use the MongoDB addUser command to create this user (but choose a secure password):

mongo openshift_broker_dev --eval 'db.addUser("openshift", "password")'

You should use a secure password; ensure that you edit the /etc/openshift/broker.conf file and change MONGO_PASSWORD to your password accordingly (rather than the shipped default "mooo").

This step is performed (with default password) by the configure_mongo_password function in the kickstart script.
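If you prefer to script the broker.conf change in the same style as the other sed edits in this guide (a sketch; replace "yourpassword" with the password you chose above):

sed -i -e "s/^MONGO_PASSWORD=.*$/MONGO_PASSWORD=yourpassword/" /etc/openshift/broker.conf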

Verify that the "openshift" account has been created:

echo 'db.system.users.find()' | mongo openshift_broker_dev

You should see an entry for the "openshift" user.

CONFIGURING BUNDLER

At this point, it is a good idea to verify that Bundler can find the necessary Ruby modules (or "gems") to run the broker Rails application:

cd /var/www/openshift/broker
bundle --local

You should see the following output:

Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.

You also need to configure the broker to start when you reboot Host 1. Run the following command:

chkconfig openshift-broker on

Now you should be able to start the broker:

service httpd start
service openshift-broker start

The chkconfig step above is performed by the configure_controller function in the kickstart script.

VERIFYING

Once started, one quick test is to retrieve the REST API base using curl on the broker:

curl -Ik https://localhost/broker/rest/api

Check that you get a 200 OK response. If you do not, try the command again without the "-I" option and look for an error message or Ruby backtrace:

curl -k https://localhost/broker/rest/api

Setting up Host 2 as a Node

This section describes how to install and configure the second OpenShift host, which will function as a node hosting gears that contain applications. Many of the steps to configure a node are the same as the steps involved in configuring a broker, so some explanatory details are omitted from the following process.

The host running the broker can also be used for the node, or you can use a distinct host that connects to the broker over a network. The instructions here make no assumptions either way. Ordinarily, combining a node and broker on the same host is only done for demonstrations; for a variety of reasons, we recommend that node hosts not include any broker components in a production setting.

You should perform all of the procedures in this section after you have installed and configured the base operating system as described earlier in the Base Operating System Configuration section.

Setting up the Required Repositories

OpenShift Origin currently relies on many packages that are not in Red Hat Enterprise Linux and must be retrieved from OpenShift repositories.

Setting up the OpenShift Node Repository

Host 2 requires packages from the OpenShift Node repository and the OpenShift JBoss repository. To set up these repositories:

1. Create the following file:

/etc/yum.repos.d/openshift-node.repo

2. Add the following content:

[openshift_node]
name=OpenShift Node
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/Node/x86_64/os/
enabled=1
gpgcheck=0

3. Save and close the file.

4. Create the following file:

/etc/yum.repos.d/openshift-jboss.repo

5. Add the following content:

[openshift_jbosseap]
name=OpenShift JBossEAP
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/JBoss_EAP6_Cartridge/x86_64/os/
enabled=1
gpgcheck=0

6. Save and close the file.

The above steps are performed by the configure_node_repo function in the kickstart script.

RUNNING YUM UPDATE

To update all of the base packages needed for these instructions, run the following command.

yum update

It is important to do this to ensure at least the selinux-policy package is updated, as OpenShift relies on a recent update to this package.

In the kickstart script, this step is performed after configuring the repositories.

Create a DNS Record for the Node

In order for communication between the node (Host 2) and the broker (Host 1) to work properly, you must ensure that Host 2's hostname resolves properly. Earlier, we had instructions on setting up a BIND server and configuring the Host 1 to use this BIND server. You must now update DNS (in these instructions, the Host 1 BIND server) to resolve Host 2's hostname.

We must run the following commands on Host 1 because it has the ability to update DNS records. Set the $keyfile environment variable on Host 1 to contain the filename for a new DNSSEC key for our domain (replace "example.com" with the domain name you have chosen for this installation of OpenShift):

keyfile=/var/named/example.com.key

Run the following command on Host 1, replacing "example.com" with the domain name you have chosen, and "10.0.0.2" with the IP address of Host 2:

oo-register-dns -h node -d example.com -n 10.0.0.2 -k ${keyfile}

This is a convenience command equivalent to the nsupdate command demonstrated in Host 1 setup.

This step is not performed by the kickstart script, but could be added if you know your node IP addresses in advance.

Configuring Hostname Resolution

Point Host 2 to the named service running on Host 1 so that Host 2 can resolve the hostnames of the broker (Host 1) and any other broker or node hosts that you configure and so that Host 1 can resolve the hostname of Host 2.

On Host 2, edit /etc/resolv.conf and add the OpenShift nameserver (which in these directions is installed on Host 1) at the top of the file, changing "10.0.0.1" to the IP address of Host 1:

nameserver 10.0.0.1

This step is performed by the update_resolv_conf function in the kickstart script.

Enabling Broker Access to node

In order to be able to move gears between nodes, access keys need to be set up for the broker host to access the node host. A previous step created the key pair on the broker host; we now need to enable access to the node host via this key.

scp root@broker.example.com:/etc/openshift/rsync_id_rsa.pub /root/.ssh/
# the above step will ask for the root password of the broker machine
cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
rm /root/.ssh/rsync_id_rsa.pub

Setting up DHCP and Hostname

We need to perform some general system-wide network configuration on Host 2. Replace eth0 in the filenames below with the appropriate network interface for your system.

CONFIGURING

Perform the following steps on Host 2:

1. To send DNS requests to Host 1 and to assume the appropriate hostname and domain name, configure the DHCP client by appending the following lines to /etc/dhcp/dhclient-eth0.conf. Replace "10.0.0.1" with the actual IP address of Host 1, and replace "node" and "example.com" with the actual hostname and domain name of Host 2:

prepend domain-name-servers 10.0.0.1;
supersede host-name "node";
supersede domain-name "example.com";

2. Edit /etc/sysconfig/network and set the "HOSTNAME=" parameter to the following, replacing "node.example.com" with the Fully Qualified Domain Name (FQDN) of the Host 2:

HOSTNAME=node.example.com

3. Run the hostname command:

hostname node.example.com

These steps are performed by the configure_network function in the kickstart script.

VERIFYING

Run the hostname command to verify the hostname of Host 2.

hostname

Setting up MCollective

Host 1, our broker, will use MCollective to communicate with Host 2.

INSTALLING

To install all of the packages needed for MCollective, run the following command:

yum install mcollective openshift-origin-msg-node-mcollective
CONFIGURING

We now configure MCollective so that Host 2 can communicate with the broker service on Host 1.

Replace the contents of /etc/mcollective/server.cfg with the following configuration, changing the setting for "plugin.stomp.host" from "broker.example.com" to the hostname of Host 1, and using the same password for the mcollective user that you specified in activemq.xml:

topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = debug
daemonize = 1
direct_addressing = n
registerinterval = 30

# Plugins
securityprovider = psk
plugin.psk = unset
connector = stomp
plugin.stomp.host = broker.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml

Make the service restart on reboot with the following command:

chkconfig mcollective on

Now start the mcollective service with the following command:

service mcollective start

The above steps are performed by the configure_mcollective_for_activemq_on_node function in the kickstart script.

VERIFYING

On Host 1, use the mco ping command to verify that Host 1 recognizes Host 2:

mco ping

Setting up the Node

In this section, we will install and configure the packages that specifically provide the node functionality.

INSTALLING CORE PACKAGES

Install the required packages by running the following command:

yum install rubygem-openshift-origin-node rubygem-passenger-native openshift-origin-port-proxy openshift-origin-node-util

This step is performed by the install_node_pkgs function in the kickstart script.

INSTALLING CARTRIDGES

You can also install any desired cartridge packages at this point. A cartridge can be either a web cartridge or a regular cartridge.

A web cartridge provides support for a specific type of application to run on OpenShift. For example, a web cartridge exists that supports PHP development, and another exists for Ruby development.

Regular cartridges exist to support additional functionality on which an application may rely. For example, cartridges exist for the MySQL and PostgreSQL database servers.

If you choose not to install a particular cartridge now, you can still do so later. However, a cartridge package must be installed before application developers can create applications that require that particular cartridge.

The following is a list of web cartridge packages that you may want to install:

Package name Description
openshift-origin-cartridge-diy-0.1 diy ("do it yourself") application type
openshift-origin-cartridge-haproxy-1.4 haproxy-1.4 support
openshift-origin-cartridge-jbossews-1.0.noarch JBoss EWS 1.0 support
openshift-origin-cartridge-jbosseap-6.0 JBossEAP 6.0 support
openshift-origin-cartridge-jenkins-1.4 Jenkins server for continuous integration
openshift-origin-cartridge-perl-5.10 mod_perl support
openshift-origin-cartridge-php-5.3 PHP 5.3 support
openshift-origin-cartridge-python-2.6 Python 2.6 support
openshift-origin-cartridge-ruby-1.8 Ruby Rack support running on Phusion Passenger (Ruby 1.8)
openshift-origin-cartridge-ruby-1.9-scl Ruby Rack support running on Phusion Passenger (Ruby 1.9)

The following regular cartridge packages are currently available for installation:

Package name Description
openshift-origin-cartridge-cron-1.4 Embedded crond support
openshift-origin-cartridge-jenkins-client-1.4 Embedded jenkins client
openshift-origin-cartridge-mysql-5.1 Embedded MySQL server
openshift-origin-cartridge-postgresql-8.4 Embedded PostgreSQL server

You can install all of the available cartridges with the yum install openshift-origin-cartridge-* command.

NB: You must install openshift-origin-cartridge-cron-1.4.

Due to a packaging issue, the openshift-origin-cartridge-cron-1.4.noarch package installs configuration files that are essential for updating configuration for communication between the broker and the nodes (the facter). To install this package, run the following command:

yum install openshift-origin-cartridge-cron-1.4

NB: Each node host must have the same list of cartridges installed. As currently implemented, gear placement does not take into account differences in the available cartridges per node host. All node hosts are assumed to have the same cartridges, and gear creation will fail on a node host that is missing cartridges required for the gear.

The above steps are performed by the install_cartridges function in the kickstart script.
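Because every node host must carry the same cartridge list, one quick way to compare hosts (standard rpm usage, not part of the kickstart script) is to list the installed cartridge packages on each node and diff the output:

rpm -qa | grep openshift-origin-cartridge | sort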

Setting up Required Services

The node will need to run the SSH daemon to provide application developers with GIT access. The node must also allow HTTP and HTTPS connections to the applications running within gears on the node.

Configure the firewall and set the required system services to start when the node boots by running the following lokkit and chkconfig commands:

lokkit --service=ssh
lokkit --service=https
lokkit --service=http
chkconfig httpd on
chkconfig network on
chkconfig sshd on

The above steps are performed by the enable_services_on_node function in the kickstart script.

CONFIGURING PAM

OpenShift uses custom PAM configuration to restrict users who connect to gears using ssh.

Run the following commands to configure PAM on Host 2:

sed -i -e 's|pam_selinux|pam_openshift|g' /etc/pam.d/sshd
for f in "runuser" "runuser-l" "sshd" "su" "system-auth-ac"
do
    t="/etc/pam.d/$f"
    if ! grep -q "pam_namespace.so" "$t"
    then
        echo -e "session\t\trequired\tpam_namespace.so no_unmount_on_close" >> "$t"
    fi
done

The above step is performed by the configure_pam_on_node function in the kickstart script.

CONFIGURING CGROUPS

Linux kernel cgroups are used on OpenShift node hosts to contain application processes and to fairly allocate resources.

Cgroups use two services which must both be running for cgroups containment to be in effect.

  • cgconfig - the service that provides the LVFS interface to the cgroup subsystems; configured via /etc/cgconfig.conf
  • cgred - a "rules" daemon that assigns new processes to a cgroup based on matching rules; configured via /etc/cgrules.conf

In addition there is a "pseudo-service" called openshift-cgroups which creates the OpenShift cgroups for gear containment within the /cgroup/all subtree.

Configure cgroups by running the following commands:

cp -f /usr/share/doc/*/cgconfig.conf /etc/cgconfig.conf
restorecon -v /etc/cgconfig.conf
mkdir /cgroup
restorecon -v /cgroup
chkconfig cgconfig on
chkconfig cgred on
chkconfig openshift-cgroups on
service cgconfig restart
service cgred restart
service openshift-cgroups start

Note: For OpenShift to function properly the cgroups-related services must always be started in this order:

  1. service cgconfig start
  2. service cgred start
  3. service openshift-cgroups start

The above step is performed by the configure_cgroups_on_node function in the kickstart script.

Verifying

When the cgroup service is running correctly you should see the following:

  • /etc/cgconfig.conf file exists with SELinux label system_u:object_r:cgconfig_etc_t:s0
  • /etc/cgconfig.conf file joins cpu, cpuacct, memory, freezer and net_cls in /cgroup/all
  • /cgroup directory exists, with SELinux label system_u:object_r:cgroup_t:s0
  • service cgconfig status - Running
  • /cgroup/all directory exists and contains subsystem files for cpu, cpuacct, memory, freezer and net_cls

When the cgred service is running correctly you should see the following:

  • /etc/cgrules.conf exists with SELinux label system_u:object_r:cgrules_etc_t:s0
  • service cgred status - running

When the openshift-cgroups pseudo-service has been run successfully you should see:

  • /cgroup/all/openshift directory exists and contains subsystem files for cpu, cpuacct, memory, freezer and net_cls
CONFIGURING DISK QUOTAS

Disk quotas per gear are enforced by setting user quotas (as each gear corresponds to a system user). The quota values are set in /etc/openshift/resource_limits.conf, where you can change these values to suit your needs:

quota_files   Number of files the gear is allowed to own.
quota_blocks  Amount of space the gear is allowed to consume, in blocks (1 block = 1024 bytes).

Enforcement of these quotas must be enabled at the filesystem level.

  1. Consult /etc/fstab to determine which device is mounted as /var/lib/openshift (will be the root partition in a simple setup, but more likely a RAID or NAS mount at /var/lib/openshift in production).
  2. Add the "usrquota" option to that mount point's entry in /etc/fstab (an example entry follows this list)
  3. Reboot the node host or remount the mount point (e.g.: mount -o remount /)
  4. Generate user quota info for the mount point:
quotacheck -cmug /
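For example, an /etc/fstab entry with user quotas enabled might look like the following (the device and filesystem type are assumptions; match them to your own entry):

/dev/mapper/VolGroup-lv_root   /   ext4   defaults,usrquota   1 1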

Now create an application and check that your quota took effect with the following command:

repquota -a | grep <gear UUID>
CONFIGURING SELINUX

Configure SELinux policy for the node and fix SELinux contexts by running the following commands.

1. Set Boolean values:

setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_read_user_content=on httpd_enable_homedirs=on httpd_run_stickshift=on allow_polyinstantiation=on

The following table explains these Boolean values:

Boolean Value              Purpose
httpd_unified              Allow the node to write files in the "http" file context.
httpd_can_network_connect  Allow the node to access the network.
httpd_can_network_relay    Allow the node to access the network.
httpd_read_user_content    Allow the node to read applications' data.
httpd_enable_homedirs      Allow the node to read applications' data.
httpd_run_stickshift       Enable passenger-related permissions.
allow_polyinstantiation    Allow polyinstantiation for gear containment.

2. Relabel files with the proper SELinux contexts:

fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/rubygems/gems/passenger-*
restorecon -rv /usr/sbin/mcollectived /var/log/mcollective.log /var/run/mcollectived.pid
restorecon -rv /var/lib/openshift /etc/openshift/node.conf /etc/httpd/conf.d/openshift

The above steps are performed by the configure_selinux_policy_on_node function in the kickstart script.

CONFIGURING SYSCTL SETTINGS

To accommodate OpenShift's intensive use of semaphores, ports, and connection tracking, certain sysctl knobs must be turned by editing /etc/sysctl.conf.

1. Open /etc/sysctl.conf and increase kernel semaphores to accommodate many httpds by appending the following line to the file:

kernel.sem = 250  32000 32  4096

2. Move ephemeral port range to accommodate application proxies by appending the following line to sysctl.conf:

net.ipv4.ip_local_port_range = 15000 35530

3. Increase the connection-tracking table size by appending the following line to sysctl.conf:

net.netfilter.nf_conntrack_max = 1048576

4. Reload sysctl.conf and activate the new settings by running the following command:

sysctl -p /etc/sysctl.conf

The above steps are performed by the configure_sysctl_on_node function in the kickstart script.

CONFIGURING SSHD

Prepare sshd for use on the node.

1. Open /etc/ssh/sshd_config and configure the server to pass the GIT_SSH environment variable through by appending the following line:

AcceptEnv GIT_SSH

2. The SSH server handles a high number of SSH connections from developers connecting to the node to push their changes. To accommodate this volume, increase the limits on the number of connections to the node by running the following commands:

perl -p -i -e "s/^#MaxSessions .*$/MaxSessions 40/" /etc/ssh/sshd_config
perl -p -i -e "s/^#MaxStartups .*$/MaxStartups 40/" /etc/ssh/sshd_config

The above steps are performed by the configure_sshd_on_node function in the kickstart script.

CONFIGURING THE PORT PROXY

Recall that applications are contained within gears. These applications listen for connections on the loopback interface. The node runs a proxy that listens on external-facing ports and forwards incoming requests to the appropriate application. Configure that service proxy as follows. On Host 2, perform the following steps:

1. Open the range of external ports that are allocated for application use:

lokkit --port=35531-65535:tcp

2. Set the proxy service to start when Host 2 boots:

chkconfig openshift-port-proxy on

3. Start the service now:

service openshift-port-proxy start

The openshift-gears service script starts gears when a node host is rebooted. Enable this service with the following command:

chkconfig openshift-gears on

The above steps are performed by the configure_port_proxy function in the kickstart script.

CONFIGURING NODE SETTINGS

On Host 2, update the node settings for your chosen hostnames and domain name by making the following changes to /etc/openshift/node.conf (a combined sed sketch follows the list):

1. Set the value of "PUBLIC_IP" to the following, replacing "10.0.0.2" by the IP address of the node.

PUBLIC_IP=10.0.0.2

2. Set the value of "CLOUD_DOMAIN" to the following, replacing "example.com" with the domain you are using for your OpenShift installation:

CLOUD_DOMAIN=example.com

3. Set the value of "PUBLIC_HOSTNAME" to the following, replacing "node.example.com" with the hostname of Host 2:

PUBLIC_HOSTNAME=node.example.com

4. Set the value of "BROKER_HOST" to the following, replacing "10.0.0.1" with the IP address of Host 1:

BROKER_HOST=10.0.0.1
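Equivalently, the four settings above can be applied with sed (a sketch in the style of the other edits in this guide; substitute your own values):

sed -i -e "s/^PUBLIC_IP=.*$/PUBLIC_IP=10.0.0.2/" \
       -e "s/^CLOUD_DOMAIN=.*$/CLOUD_DOMAIN=example.com/" \
       -e "s/^PUBLIC_HOSTNAME=.*$/PUBLIC_HOSTNAME=node.example.com/" \
       -e "s/^BROKER_HOST=.*$/BROKER_HOST=10.0.0.1/" /etc/openshift/node.conf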

The above steps are performed by the configure_node function in the kickstart script.

UPDATING FACTER DATABASE

Facter generates metadata files for MCollective and is normally run by cron. Run facter now to make the initial database and ensure that it runs properly:

/etc/cron.minutely/openshift-facts

The above step is performed by the update_openshift_facts_on_node function in the kickstart script.

Reboot

Reboot the node to enable all changes.

Testing your New OpenShift Installation

In this section, we will describe some useful tips for testing that OpenShift is installed correctly and for diagnosing problems.

MCollective

First, verify that the mcollective daemon is running on the node hosts. Run the following command on each node host:

service mcollective status

If it is not running, start it:

service mcollective start

Perform the above step on all node hosts.

The command-line interface to MCollective is provided by the mco command, which can be used to perform some diagnostics concerning communication between the broker and node hosts. To get a list of available commands, enter the following command on a broker host:

mco help

In particular, enter the following command on a broker host to see which node hosts the current broker host is aware of:

mco ping

You should see output similar to the following:

node.example.com                         time=100.02 ms

---- ping statistics ----
1 replies max: 100.02 min: 100.02 avg: 100.02

The output should list all node hosts. If any hosts are missing, verify that they are running and configured properly.

Note that we do not generally want the mcollective daemon running on the broker. The broker uses the mcollective client to contact the nodes, which will be running the daemon. If the broker runs the mcollective daemon, then it will respond to mco ping and effectively behave as both a broker and a node. Unless you have also run the node configuration on the broker host, this will result in problems with creating applications.
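If you find that the mcollective daemon was inadvertently enabled on the broker host, you can stop and disable it there without affecting the MCollective client that the broker uses:

service mcollective stop
chkconfig mcollective off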

CLOCK SKEW

One frequently seen problem is clock skew. Every mcollective request includes a timestamp, which comes from the sending host's clock. If a sender's clock is substantially behind a recipient's clock, the recipient drops the message. Consequently, a host will not appear in the mco ping output if its clock is too far behind. You can check for this problem by looking in /var/log/mcollective.log:

W, [2012-09-28T11:32:26.249636 #11711]  WARN -- : runner.rb:62:in `run' Message 236aed5ad9e9967eb1447d49392e76d8 from uid=0@broker.example.com created at 1348845978 is 368 seconds old, TTL is 60

The above message indicates that the current host received a message that was 368 seconds old, and it was discarded because its TTL ("time to live," the duration for which it should be considered relevant) was only 60 seconds. You can also run the date command on the different hosts and compare the output across those hosts to check for skew.

The recommended solution is to configure NTP, as described in the earlier instructions. Alternatively, see the documentation for the date command to set the time manually.
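As a quick way to compare clocks from a single host, you can run date on each machine. The loop below is a sketch that assumes root SSH access to the example hostnames used in this guide.

for h in broker.example.com node.example.com; do
    echo -n "$h: "
    ssh $h date -u
done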

BIND and DNS Configuration

Verify that the broker and nodes have network connectivity with one another using the host or ping command. For example, on the node:

host broker.example.com

Verify that broker.example.com resolves correctly on the nodes, and that the hostnames of the nodes resolve correctly on the broker. If they do not, check your DNS configuration in /etc/resolv.conf and the named configuration files described in the section on configuring BIND. Check /var/named/dynamic/${domain}.db to see whether the domain names of nodes and applications have been added to BIND's database. Note that BIND may maintain a journal under /var/named/dynamic/${domain}.db.jnl. If the ${domain}.db file is out of date, check the ${domain}.db.jnl file for recent changes.
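To rule out resolver configuration on the client side, you can also query the broker's nameserver directly. The example below assumes the nameserver runs on the broker at 10.0.0.1 and uses the example hostnames from this guide.

dig @10.0.0.1 broker.example.com +short
dig @10.0.0.1 node.example.com +short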

MongoDB

If MongoDB is not configured properly, the rhc tool fails with unhelpful error messages, so the MongoDB configuration is a good place to start when you see such errors from rhc.

On the broker, verify that MongoDB is running:

service mongod status

If it is not running, check /var/log/mongodb/mongodb.log for clues. One error to watch out for is the "multiple_occurences" error. If you see this error, check /etc/mongodb.conf for duplicate configuration lines—any duplicates will cause the startup to fail.

If mongod is running, try to connect to the database:

mongo openshift_broker_dev

You should get a command prompt from MongoDB.
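If you prefer a non-interactive check, you can ask the mongo shell to evaluate an expression and exit. A simple sketch that prints statistics for the broker's database is:

mongo openshift_broker_dev --eval 'printjson(db.stats())'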

Log Files

Following are some log files to look for problems with the broker:

  • /var/www/openshift/broker/log/development.log
  • /var/www/openshift/broker/httpd/logs/access_log
  • /var/www/openshift/broker/httpd/logs/error_log

If there are problems with broker-node communication, check the following file on the broker and on nodes:

  • /var/log/mcollective.log

For general problems on the broker or node, the following log files and configuration files may be helpful:

  • /var/log/messages
  • /var/log/audit/audit.log (for SELinux issues)
  • /var/log/secure (for user/ssh interactions)
  • /etc/openshift/*

On nodes, gear information is stored in the following directory:

  • /var/lib/openshift/
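When debugging a failed operation, it often helps to watch the broker and MCollective logs while reproducing the problem, for example:

tail -f /var/www/openshift/broker/log/development.log /var/log/mcollective.log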

SELinux issues

SELinux denials may cause errors. One way to check whether SELinux is behind some mysterious errors is to temporarily disable policy enforcement using the setenforce command:

setenforce 0

This command puts SELinux into permissive mode so that it no longer blocks access to resources. If the mysterious errors go away, then the SELinux configuration or permissions need to be fixed. Even in permissive mode, SELinux still logs attempts to access resources that would be denied if enforcement were enabled. Check the /var/log/audit/audit.log file to see what SELinux is configured to deny, and see the relevant instructions on configuring SELinux (using the setsebool and semodule commands) and setting contexts (using the chcon, fixfiles, and restorecon commands).

Remember to enable SELinux again after diagnosing the problem:

setenforce 1
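To see what SELinux would have denied, you can scan the audit log for AVC records, for example:

grep AVC /var/log/audit/audit.log | tail -n 20
# Or, using the audit tools, show recent AVC denials:
ausearch -m avc -ts recent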

rhc Configuration

If you are seeing problems when you try to use the rhc tool with your OpenShift installation, ensure that rhc is configured properly per the instructions in Part III. Use the "-d" option for rhc to get additional diagnostic output. Standard tools such as netstat or tcpdump may be useful in diagnosing problems with rhc.
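As a concrete example, the following commands run a simple query with debugging enabled and capture traffic between the workstation and the broker, assuming the broker's REST API is served over HTTPS on the default port:

rhc domain show -d
tcpdump -nn -i any host broker.example.com and port 443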

Part III: Developer Workstation

This part of the document explains the operation of your newly installed OpenShift PaaS. It contains instructions on how to prepare a developer workstation to create applications and domains on your OpenShift PaaS.

But before a developer can begin creating applications, a developer account must be created on the broker.

Creating a Developer Account

Create a developer account on the broker using the htpasswd command:

htpasswd -c /etc/openshift/htpasswd bob

The htpasswd command prompts you for a password, then creates the /etc/openshift/htpasswd file and adds the user to it.

Omit the -c option when adding subsequent users:

htpasswd /etc/openshift/htpasswd alice

Verify the account has been created by listing the /etc/openshift/htpasswd file:

cat /etc/openshift/htpasswd

You should see a one-line entry for each user.
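You can also confirm that the new credentials authenticate against the broker. A minimal check, assuming the standard REST API paths and the example broker hostname, is to request the user resource and verify that the response describes the account rather than returning an authentication error:

# Prompts for bob's password; -k skips certificate verification for self-signed certs.
curl -k -u bob https://broker.example.com/broker/rest/user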

OpenShift Client Tools

An application developer uses the OpenShift client tools to create domains and applications on the OpenShift PaaS. Instructions for installing the client tools are beyond the scope of this document; refer to the OpenShift client tools documentation for instructions on how to install them on supported operating systems.

DNS Configuration

Your workstation must be configured to resolve the host names used in your OpenShift PaaS installation. You have three options to do this, which are listed below.

Option 1: Edit the /etc/resolv.conf file to use a DNS server that will resolve the addresses used for the broker and any applications on your OpenShift PaaS.

Option 2: Add the required addresses to the /etc/hosts file on the workstation.

Option 3: Use the OpenShift client tools directly on the broker.

Setting up the Required Repository

The OpenShift client tools are available in the OpenShift Client repository. To set up the repository:

1. Create the following file:

/etc/yum.repos.d/openshift-client.repo

2. Add the following content:

[openshift_client]
name=OpenShift Client
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/Client/x86_64/os/
enabled=1
gpgcheck=0

3. Save and close the file.

The above steps are performed by the configure_client_repo function in the kickstart script. The script installs the client tools on both nodes and brokers so that they are available for diagnostics.
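With the repository in place, the client tools can be installed from it with yum; the client tools are packaged as rhc:

yum install -y rhc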

Configuring OpenShift Client Tools on Workstation

The client tools by default connect to Red Hat's OpenShift hosted service. To use the client tools with your OpenShift PaaS installation, you can override the default server using an environment variable:

export LIBRA_SERVER=broker.example.com

Now use the command below to run the OpenShift interactive setup wizard and create a configuration file.

Note: This will overwrite any existing configuration contained in the ~/.openshift/express.conf file.

rhc setup
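If you would rather not export LIBRA_SERVER in every shell, the same setting can be made persistent in the configuration file that rhc setup writes. A minimal sketch, assuming the default ~/.openshift/express.conf location, is to add the following line to that file:

libra_server=broker.example.com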

Using Multiple OpenShift Configuration Files

The OpenShift interactive setup wizard creates a new express.conf configuration file, with the specified user and server settings, in the ~/.openshift directory. You can have multiple configuration files, and use the --config option with the OpenShift client tools to select which configuration file is used, as shown in the example below.

mv ~/.openshift/express.conf ~/.openshift/express.conf.old
rhc setup
# Answer the questions when prompted by rhc setup.
mv ~/.openshift/express.conf ~/.openshift/bob.conf
mv ~/.openshift/express.conf.old ~/.openshift/express.conf
rhc domain show
# You should see the domain for your account configured in express.conf.
rhc domain show --config ~/.openshift/bob.conf
# You should see the domain for your account configured in bob.conf.

Creating a Domain and Application

Test the creation of a new domain and a new application by running the following commands on the developer's workstation:

rhc domain create testdom
rhc app create testapp php

If the commands succeed, congratulations! If you receive an error message, you can use the "-d" command-line option to get additional debugging output, and you can check the logs on the broker for hints. Feel free to visit #openshift-dev on Freenode for help debugging any issues.
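To confirm that the new application is actually serving requests, you can list the domain and fetch the application's URL. The URL below assumes the example names used above with a CLOUD_DOMAIN of example.com, and it requires that the workstation can resolve application hostnames as described in the DNS Configuration section.

rhc domain show
curl -s http://testapp-testdom.example.com/ | head -n 5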
