This document describes how to create a private PaaS service using OpenShift. It makes a number of simplifying assumptions about the environment of the service. In particular, we will assume that the underlying platform is Red Hat Enterprise Linux 6.3 with Ruby 1.8. One may have to adjust one's configuration for a different environment.
The document is organized into three parts. The first part will provide the reader with an overview of the components of an OpenShift installation, how those components are organized, and how they communicate. In particular, the reader will learn about the broker and node hosts and the software that runs on these hosts. The second part will take the reader, step-by-step, through the process of installing and configuring a broker and one or more nodes. Finally, the last part will explain and demonstrate the operation of the new installation using the rhc tool.
There are several components that are involved in the OpenShift PaaS installation. This section will cover the primary components in addition to the various configurations that will be accomplished throughout this guide. All the diagrams you see in the subsequent sections will depict elements described in the following legend.
An OpenShift PaaS installation comprises two logical types of hosts: a broker and one or more nodes. The broker handles the creation and management of user applications, including authenticating users via an authentication service and communication with appropriate nodes. The nodes run the user applications in contained environments called gears. The broker queries and controls nodes using a messaging service.
Communication from any external clients (e.g. client tools or the OpenShift console) is done through the REST API which is hosted by the broker. The broker then communicates through the messaging-service component to the nodes. MCollective is utilized to facilitate querying a set of nodes as well as communicating with individual nodes in a secure manner.
The broker itself must manage various persistent data for the PaaS. To accomplish this, the broker utilizes three distinct interfaces that represent the complete state of the PaaS. The reason for the three interfaces is that each datastore is pluggable and each type of data is usually managed by a separate system. Application data is separated into the following sections:
OpenShift has been designed with redundancy in mind, and each architectural component can be set up in a redundant manner. The broker applications themselves are stateless and can be set up behind a simple HTTP load balancer. The messaging tier is also stateless, and MCollective can be configured to use multiple ActiveMQ endpoints. Multiple MongoDB instances can be combined into a replica set for fault tolerance and high availability.
For simplicity the basic installation demonstrated below will not implement redundancy.
This guide will focus on providing a functional installation but will not set up all components to provide redundancy. Adding redundancy at the appropriate layers will be covered in a supplemental guide, since it varies by use case. This guide will install the broker, datastores, and messaging components on the same machine instance. The node will be set up on a separate machine instance. The resulting system topology is shown in the following diagram.
Further documentation on the architecture of OpenShift is available in a separate article.
The instructions in this section describe how to install and configure a basic OpenShift PaaS environment with a broker and one or more nodes. These instructions are intended for Linux administrators and developers with intermediate level experience. They are extremely detailed in order to demonstrate the variety of settings you may configure and where to do so.
In the following steps, it is recommended that you back up any files that you change by running (for example) cp foo foo.orig before editing the file foo.
Before proceeding with the installation and configuration, this section provides some basic information and requirements for installing OpenShift PaaS.
This installation relies on a current RHEL 6.x installation as its base. We recommend installing the "Basic Server" configuration for a base install, though others should work too.
These directions likely work on Enterprise Linux rebuilds such as Scientific Linux or CentOS as well; we invite users of these rebuilds to comment on any relevant differences specific to their OSes.
Although the instructions in this document have been primarily tested on KVM virtual machines, the instructions are applicable to other environments.
Below are the hardware requirements for all hosts, whether configured as a broker or as a node. The hardware requirements are applicable for both physical and virtual environments.
In this example of a basic OpenShift installation, the broker and node are configured with the following parameters:
All of these parameters can be customized as necessary. As detailed in the instructions, the domain name and host names can be easily modified by editing appropriate configuration files. The selection of data-store service, authentication service, and DNS server are implemented as plug-ins to the broker.
Note that while DHCP is supported and assumed in this document, dynamic re-assignment of IP addresses is not supported and may cause problems.
The OpenShift PaaS service publishes the host names of new applications to DNS. The DNS update service negotiates with the owner of a domain so that a sub domain can be allocated. It also establishes authentication credentials to allow automatic updates. The sample configuration uses a private DNS service to allow the OpenShift PaaS service to publish new host names without requiring access to an official DNS service. The application host names will only be visible on the OpenShift PaaS hosts and any workstation configured to use the configured DNS service, unless properly delegated by a public DNS service.
The creation of a private DNS service and establishing a delegation agreement with your IT department are outside the scope of this document. Each organization has its own policies and procedures for managing DNS services. If you want to make the OpenShift PaaS service available in any way, you will have to discuss the delegation requirements at your site with the appropriate personnel.
For your convenience, a sample kickstart script for configuring a host as a broker or as a node (or as both) is available at <>. Note that you will need to alter it to at least enable your RHEL 6 subscription or yum repository during the %post script, and likely other parameters (explained in the script header) as well.
You may also extract the %post section of the kickstart script as a bash script in order to apply the steps against a pre-installed RHEL 6 image. A reboot will be required after running the script in this fashion (which kickstart would automatically do).
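As a sketch of that extraction (the stand-in kickstart file /tmp/demo.ks, its contents, and the output path are hypothetical; this assumes the %post section is terminated by %end, as in current kickstart syntax), one way to pull the %post body out with sed:

```shell
# Create a tiny stand-in kickstart file for demonstration purposes.
cat > /tmp/demo.ks <<'KS'
%packages
bash
%end
%post
echo "post-install step"
%end
KS

# Print the lines between %post and the next %end, then drop the
# marker lines themselves, leaving just the script body.
sed -n '/^%post/,/^%end/p' /tmp/demo.ks | sed '1d;$d' > /tmp/demo-post.sh
chmod +x /tmp/demo-post.sh
cat /tmp/demo-post.sh
```

The same two sed invocations apply unchanged to a real kickstart file; remember that a script extracted this way still expects a reboot afterward, as noted above.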
The steps in this document explain the actions of the script. The steps and the script are independent in the sense that you can obtain a complete broker or node host just by following the steps manually or just by running the kickstart script. For your convenience, we will point to the corresponding part of the kickstart script for each section in the steps below.
The installation and configuration of the base operating system for an OpenShift PaaS deployment is quite straightforward. The sample installation detailed in this document assumes the operating system to be Red Hat Enterprise Linux 6.3 Server on Host 1 and Host 2.
It is assumed that the base operating system has been configured with RHEL Server entitlement and JBoss EAP6 entitlement for the JBoss EAP6/EWS 1.0 cartridges.
The following steps are common to both Host 1 and Host 2 and should be performed on both hosts.
Numerous OpenShift test installations have been confounded by packages being installed from third-party repositories and products like EPEL or Puppet. The differences in behavior can cause subtle and perplexing problems with the operation of OpenShift that waste a lot of troubleshooting time. Therefore, please ensure that your base system image and repositories include only packages from RHEL 6. Disable any third-party repositories during installation. Even the unsupported RHEL Optional channel should not be enabled (although this hasn't caused any known problems yet).
OpenShift requires NTP to synchronize the system and hardware clocks. This synchronization is necessary for communication between the broker and node hosts; if the clocks are too far out of synchronization, MCollective will drop messages. It is also helpful to have accurate timestamps on files and in log file entries.
On the host, use the ntpdate command to set the system clock (use whatever NTP servers are appropriate for your environment):
ntpdate clock.redhat.com
You will also want to configure ntpd via /etc/ntp.conf to keep the clock synchronized during operation.
If you get the error message "the NTP socket is in use, exiting," then ntpd is already running. However, ntpd may fail to synchronize a clock that starts out too far off, so stop the service, set the clock, and start the service again:
service ntpd stop
ntpdate clock.redhat.com
service ntpd start
If you are installing on physical hardware, use the hwclock command to synchronize the hardware clock to the system clock. If you are running on a virtual machine, such as an Amazon EC2 instance, skip this step. Otherwise, enter the following command:
hwclock --systohc
The above steps are performed by the synchronize_clock function in the kickstart script.
It may be desirable to install SSH keys for the root user so that you can interact with the hosts remotely from your personal workstation. First, ensure that root's ssh configuration directory exists and has the correct permissions on the host:
mkdir /root/.ssh
chmod 700 /root/.ssh
On your workstation, you can either use the ssh-keygen command to generate a new keypair, or use an existing public key. In either case, edit the /root/.ssh/authorized_keys file on the host and append the public key, or use the ssh-copy-id command to do the same. For example, on your local workstation, you can issue the following command:
ssh-copy-id root@10.0.0.1
Replace "10.0.0.1" with the actual IP address of the broker in the above command.
The above steps are performed by the install_ssh_keys function in the kickstart script.
This section describes how to install and configure the first OpenShift host, which will be running the Broker, MongoDB, ActiveMQ, and BIND. Each logical component is broken out into an individual section.
You should perform all of the procedures in this section after you have installed and configured the base operating system and before you start installing and configuring any node hosts.
OpenShift Origin currently relies on many packages that are not in Red Hat Enterprise Linux and must be retrieved from OpenShift repositories.
Setting up the OpenShift Infrastructure Repository
Host 1 requires packages from the OpenShift Infrastructure repository for the broker and related packages. To set up the repository:
1. Create the following file:
/etc/yum.repos.d/openshift-infrastructure.repo
2. Add the following content:
[openshift_infrastructure]
name=OpenShift Infrastructure
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15//Infrastructure/x86_64/os/
enabled=1
gpgcheck=0
3. Save and close the file.
The above steps are performed by the configure_broker_repo function in the kickstart script.
To update all of the base packages needed for these instructions, run the following command.
yum update
It is important to do this to ensure at least the selinux-policy package is updated, as OpenShift relies on a recent update to this package.
In the kickstart script, this step is performed after configuring the repositories.
In this section, we will configure BIND on the broker. This is really only for the purpose of getting going easily, and is probably not the configuration you will want in production. Skip this section if you have alternative arrangements for handling DNS updates from OpenShift.
If you wish to have OpenShift update an existing BIND server in your infrastructure, it should be fairly apparent from the ensuing setup how to enable that. If you are using something different, the DNS update plugin can be swapped out; Red Hat does not currently distribute any alternative plugins, but supported customers can engage our professional services, or an experienced administrator can just use the BIND plugin code as a model for writing an alternative plugin.
To install all of the packages needed for these instructions, run the following command.
yum install bind bind-utils
We will be referring frequently to the domain name with which we are configuring this OpenShift installation, so let us set the $domain environment variable for easy reference:
domain=example.com
Note: You may replace "example.com" with the domain name you have chosen for this installation of OpenShift.
Next, set the $keyfile environment variable to contain the filename for a new DNSSEC key for our domain (we will create this key shortly):
keyfile=/var/named/${domain}.key
We will use the dnssec-keygen tool to generate the new DNSSEC key for the domain. Run the following commands to delete any old keys and generate a new key:
rm -vf /var/named/K${domain}*
pushd /var/named
dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${domain}
KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
popd
Notice that we have set the $KEY environment variable to hold the newly generated key. We will use this key in a later step.
Next, we must ensure we have a key for the broker to communicate with BIND. We use the rndc-confgen command to generate the appropriate configuration files for rndc, which is the tool that the broker will use to perform this communication.
rndc-confgen -a -r /dev/urandom
We must ensure that the ownership, permissions, and SELinux context are set appropriately for this new key:
restorecon -v /etc/rndc.* /etc/named.*
chown -v root:named /etc/rndc.key
chmod -v 640 /etc/rndc.key
We are configuring the local BIND instance so that the broker and nodes can resolve internal hostnames. However, they will still need to resolve hostnames on the broader Internet, so we configure BIND to forward such requests to regular DNS servers. Create the file /var/named/forwarders.conf with the following content:
forwarders { 8.8.8.8; 8.8.4.4; } ;
Note: Change the above list of forwarders as appropriate to comply with your local network's requirements.
Again, we must ensure that the permissions and SELinux context are set appropriately for the new forwarders.conf file:
restorecon -v /var/named/forwarders.conf
chmod -v 755 /var/named/forwarders.conf
We need to configure BIND to perform resolution for hostnames under the domain we are using for our OpenShift installation. To that end, we must create a database for the domain. The dns-bind plug-in includes an example database, which we will use as a template. Delete and recreate the /var/named/dynamic directory:
rm -rvf /var/named/dynamic
mkdir -vp /var/named/dynamic
Now, create an initial named database in a new file named /var/named/dynamic/${domain}.db (where ${domain} is your chosen domain) using the following command:
cat <<EOF > /var/named/dynamic/${domain}.db
\$ORIGIN .
\$TTL 1	; 1 seconds (for testing only)
${domain}	IN SOA	ns1.${domain}. hostmaster.${domain}. (
			2011112904 ; serial
			60         ; refresh (1 minute)
			15         ; retry (15 seconds)
			1800       ; expire (30 minutes)
			10         ; minimum (10 seconds)
			)
		NS	ns1.${domain}.
\$ORIGIN ${domain}.
ns1	A	127.0.0.1
EOF
Next, we install the DNSSEC key for our domain. Create the file /var/named/${domain}.key (where ${domain} is your chosen domain) using the following command:
cat <<EOF > /var/named/${domain}.key
key ${domain} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF
We need to set the permissions and SELinux contexts appropriately:
chown -Rv named:named /var/named
restorecon -rv /var/named
We must also create a new /etc/named.conf file, as follows:
cat <<EOF > /etc/named.conf
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
	listen-on port 53 { any; };
	directory "/var/named";
	dump-file "/var/named/data/cache_dump.db";
	statistics-file "/var/named/data/named_stats.txt";
	memstatistics-file "/var/named/data/named_mem_stats.txt";
	allow-query { any; };
	recursion yes;

	/* Path to ISC DLV key */
	bindkeys-file "/etc/named.iscdlv.key";

	// set forwarding to the next nearest server (from DHCP response)
	forward only;
	include "forwarders.conf";
};

logging {
	channel default_debug {
		file "data/named.run";
		severity dynamic;
	};
};

// use the default rndc key
include "/etc/rndc.key";

controls {
	inet 127.0.0.1 port 953
	allow { 127.0.0.1; } keys { "rndc-key"; };
};

include "/etc/named.rfc1912.zones";

include "${domain}.key";

zone "${domain}" IN {
	type master;
	file "dynamic/${domain}.db";
	allow-update { key ${domain} ; } ;
};
EOF
Set permissions and SELinux contexts appropriately:
chown -v root:named /etc/named.conf
restorecon /etc/named.conf
Configuring Host 1 Name Resolution
To use the local named service to resolve host names in your domain, you now need to update the host's /etc/resolv.conf file. You also need to configure the firewall and start the named service in order to serve local and remote DNS requests for the domain.
To that end, edit /etc/resolv.conf and put the following at the top of the file, changing "10.0.0.1" to the IP address of Host 1:
nameserver 10.0.0.1
Open the firewall rules and make the service restart on reboot with:
lokkit --service=dns
chkconfig named on
Use the service command to start BIND ("named") so we can perform some updates immediately:
service named start
Tell BIND about the broker by using the nsupdate command to open an interactive session. "server", "update", and "send" are commands within that session; "quit" (or CTRL+D) closes it.
Note: Replace "broker.example.com" with the actual FQDN of the broker, and replace "10.0.0.1" with the actual IP address of the broker.
nsupdate -k ${keyfile}
server 127.0.0.1
update delete broker.example.com A
update add broker.example.com 180 A 10.0.0.1
send
quit
The above steps are performed by the configure_named and update_resolv_conf functions in the kickstart script.
Verify that BIND is configured correctly to resolve the broker's hostname:
dig @127.0.0.1 broker.example.com
Verify that BIND properly forwards requests for other hostnames:
dig @127.0.0.1 icann.org a
Verify that the broker is using the local BIND instance by running the following command on the broker:
dig broker.example.com
In this section, we will perform some system-wide network configuration on the broker. No new packages need to be installed for this step, so we will go right to configuration.
NB: We will assume in this section that the broker is using the eth0 network interface. Substitute the appropriate interface in the filenames in the instructions below.
First, we will configure the DHCP client on the broker. Modify /etc/dhcp/dhclient-eth0.conf to use the local BIND instance and assume the appropriate hostname and domain name. Edit dhclient-eth0.conf and append the following lines to the end of the file:
prepend domain-name-servers 10.0.0.1;
supersede host-name "broker";
supersede domain-name "example.com";
NB: Replace "10.0.0.1" with the actual IP address of the broker, and replace "broker" and "example.com" with the appropriate hostname and domain name.
Second, we need to set the hostname in /etc/sysconfig/network and set it immediately using the hostname command. Edit the network file. If the file contains a line beginning with "HOSTNAME=", delete that line. Then add the following line to the file:
HOSTNAME=broker.example.com
Run the hostname command:
hostname broker.example.com
NB: Replace "broker.example.com" with the actual FQDN of the broker.
The above steps are performed by the configure_network function in the kickstart script.
Run the hostname command to verify the hostname of Host 1.
hostname
MongoDB requires several minor configuration changes to prepare it for use with OpenShift. These include setting up authentication, specifying the default database size, and creating an administrative user.
To install all of the packages needed for MongoDB, run the following command:
yum install mongodb-server
To configure MongoDB to require authentication:
To configure the MongoDB default database size:
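The concrete edits for these two items appear to have been dropped from this copy of the document. As a sketch only (assuming the stock /etc/mongodb.conf shipped with MongoDB 2.0 on RHEL 6; back up the file first), the settings involved are the following lines in /etc/mongodb.conf, to be uncommented or added:

```
auth = true
smallfiles = true
```

The auth setting makes mongod require authentication (which is why the administrative account below must be created), and smallfiles reduces the default size of newly created database files.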
Configure the mongod service to start on reboot with:
chkconfig mongod on
Now start the mongo daemon:
service mongod start
The above steps are performed by the configure_datastore function in the kickstart script.
Run the mongo command to ensure that you can connect to the MongoDB database:
mongo
The command starts an interactive session with the database. Press CTRL+D (the Control key with the "d" key) to leave this session and return to the command shell.
NOTE: The init script in version 2.0.2-1.el6_3 of MongoDB does not function correctly. The start and restart actions return before the daemon is ready to accept connections, and MongoDB may take time to initialize the journal. This initialization may take several minutes. If you receive "Error: couldn't connect to server 127.0.0.1" when you run the mongo command, wait and try again. You can also check the /var/log/mongodb/mongodb.log file. When MongoDB is ready, it will write "waiting for connections" in the log file. The following steps require that a database connection be established.
You need to install and configure ActiveMQ which will be used as the messaging platform to aid in communication between the broker and node hosts.
To install the packages needed for ActiveMQ, run the following command:
yum install activemq
You can configure ActiveMQ by editing the /etc/activemq/activemq.xml file. Create the file using the following command:
cat <<EOF > /etc/activemq/activemq.xml
...
    file:\${activemq.conf}/credentials.properties
...
EOF
Note: Replace "broker.example.com" with the actual FQDN of the broker. You are also encouraged to substitute your own passwords (and use the same in the MCollective configuration that follows).
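Most of the activemq.xml content above was lost from this copy of the document. As an illustrative sketch only (not the exact file from the original guide; element names follow the standard ActiveMQ 5.x schema, and the user names and passwords are the ones used by the MCollective configuration later in this guide), the essential pieces are a Stomp transport connector on port 61613 and authentication entries for the mcollective and admin users:

```xml
<beans xmlns="http://www.springframework.org/schema/beans">
  <broker xmlns="http://activemq.apache.org/schema/core"
          brokerName="broker.example.com" dataDirectory="${activemq.data}">
    <plugins>
      <!-- Require credentials; these must match client.cfg on broker and nodes. -->
      <simpleAuthenticationPlugin>
        <users>
          <authenticationUser username="mcollective" password="marionette"
                              groups="mcollective,everyone"/>
          <authenticationUser username="admin" password="badpassword"
                              groups="mcollective,admins,everyone"/>
        </users>
      </simpleAuthenticationPlugin>
    </plugins>
    <transportConnectors>
      <!-- MCollective talks to ActiveMQ over Stomp on port 61613. -->
      <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
    </transportConnectors>
  </broker>
</beans>
```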
Open the firewall rules and make the service restart on reboot with:
lokkit --port=61613:tcp
chkconfig activemq on
Now start the activemq service with:
service activemq start
The above steps are performed by the configure_activemq function in the kickstart script.
As installed, the ActiveMQ monitoring console web service does not require authentication and answers on every IP interface. It is important to limit access to the ActiveMQ console for security.
Two changes to the /etc/activemq/jetty.xml file enable authentication and restrict the console to the localhost interface:
sed -i -e '/name="authenticate"/s/false/true/' /etc/activemq/jetty.xml
sed -i -e '/name="port"/a' /etc/activemq/jetty.xml
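The text appended by the second sed command was lost from this copy of the document. For illustration only (assuming the standard ActiveMQ jetty.xml, in which the connector bean carries a "port" property), restricting the console to the loopback interface means appending a host property alongside the port property, along the lines of:

```xml
<property name="host" value="127.0.0.1" />
```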
The admin user definition is set in the /etc/activemq/jetty-realm.properties file. The last line contains the default account for the admin user. It has the form:
# username: password [,rolename ...]
You need to change the password field from the default 'admin' to a password you choose.
sed -i -e '/admin:/s/admin,/badpassword,/' /etc/activemq/jetty-realm.properties
In later test examples we'll use badpassword. You need to substitute your password.
Once ActiveMQ is started, you should be able to verify that it is listening for messages for the OpenShift topics. First, verify that authentication is working:
curl --head --user admin:badpassword http://localhost:8161/admin/xml/topics.jsp
You should see a 200 OK message followed by the remaining header lines. If you see a "401 Unauthorized" message, then your username/password pair is incorrect.
Next, check that the service is returning a topic list:
curl --user admin:badpassword --silent http://localhost:8161/admin/xml/topics.jsp | grep -A 4 topic
You should see the open and close tags for the topic list.
Right now there are no topics to see.
It can take 60 seconds or more for the activemq daemon to finish initializing and start answering queries. If you don't get any results, try the curl command again without the --silent argument and the grep filter:
curl --user admin:badpassword http://localhost:8161/admin/xml/topics.jsp
The message below means either that the ActiveMQ service is not running or has not finished initializing.
curl: (7) couldn't connect to host
If this persists longer than 60 seconds and the ActiveMQ daemon is running you can check the ActiveMQ log file:
more /var/log/activemq/activemq.log
The broker application on Host 1 will use MCollective to communicate with the node hosts. MCollective, in turn, relies on Apache ActiveMQ.
To install all of the packages needed for MCollective, run the following command:
yum install mcollective-client
To configure the MCollective client, delete the contents of the /etc/mcollective/client.cfg file and replace them with the following:
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective-client.log
loglevel = debug

# Plugins
securityprovider = psk
plugin.psk = unset
connector = stomp
plugin.stomp.host = localhost
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette
Note: If ActiveMQ is not running on this host, change the setting for "plugin.stomp.host" from "localhost" to the actual hostname of the ActiveMQ host, and use the same password for the mcollective user that you specified in activemq.xml.
It's also important to change the group ownership so that the Broker's Apache can write to the log file for MCollective:
chown root:apache /var/log/mcollective-client.log
The above steps are performed by the configure_mcollective_for_activemq_on_broker function in the kickstart script.
Because we are running the ActiveMQ service on Host 1, we can configure mcollective to connect to localhost. Theplugin.stomp.host setting must be modified if you are configuring the ActiveMQ service to run on another host (or pool of hosts).
In this section, we will configure the broker Rails application that provides the REST API to the client tools.
To install all of the packages needed for these instructions, run the following command:
yum install openshift-origin-broker openshift-origin-broker-util rubygem-openshift-origin-auth-remote-user rubygem-openshift-origin-msg-broker-mcollective rubygem-openshift-origin-dns-bind
This step is performed by the install_broker_pkgs function in the kickstart script.
The default value of ServerName is localhost and should be changed to accurately reflect your broker hostname.
sed -i -e "s/ServerName .*$/ServerName `hostname`/" /etc/httpd/conf.d/000000_openshift_origin_broker_proxy.conf
You also need to configure all of the required system services to start when you reboot Host 1. Run the following commands:
chkconfig httpd on
chkconfig network on
chkconfig ntpd on
chkconfig sshd on
The following commands configure the firewall to allow access to all of these services.
lokkit --service=ssh
lokkit --service=https
lokkit --service=http
The above steps are performed by the enable_services_on_broker function in the kickstart script.
OpenShift relies heavily on SELinux to maintain isolation between applications and to protect OpenShift from malicious applications, and also from applications that contain bugs. To configure SELinux, you need to install the appropriate policy, enable the required permissions (for example, for httpd and named), and label various files appropriately.
Setting Standard SELinux Boolean Variables
Use the setsebool command to set a number of Boolean variables that are provided by the standard SELinux policy:
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_run_stickshift=on named_write_master_zones=on allow_ypbind=on
The following table explains these Boolean variables:
Boolean Variable | Purpose |
httpd_unified | Allow the broker to write files in the "http" file context. |
httpd_can_network_connect | Allow the broker application to access the network. |
httpd_can_network_relay | Allow the broker application to access the |
httpd_run_stickshift | Enable passenger-related permissions. |
named_write_master_zones | Allow the broker application to configure DNS. |
allow_ypbind | Allow the broker application to use ypbind to communicate directly with the name server. |
Next, relabel the relevant files with the correct SELinux contexts:

fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/rubygems/gems/passenger-*
The above steps are performed by the configure_selinux_policy_on_broker function in the kickstart script.
We must ensure that the configuration for the OpenShift broker is modified to reflect your choice of domain name for this OpenShift installation. You can hand-edit /etc/openshift/broker.conf and modify the CLOUD_DOMAIN setting, or:
sed -i -e "s/^CLOUD_DOMAIN=.*$/CLOUD_DOMAIN=${domain}/" /etc/openshift/broker.conf
This step is performed by the configure_controller function in the kickstart script.
We must configure OpenShift to enable the required plug-ins for authentication, DNS, and messaging. These plug-ins are configured by editing files under /etc/openshift/plugins.d. The presence of a file foo.conf in this directory enables the plug-in named foo, and the contents of foo.conf contain configuration settings in the form of lines with key=value pairs. In some cases, we need only copy an example configuration into place. In the case of the DNS plug-in, we need to perform some configuration.
All of the following steps will involve only files in /etc/openshift/plugins.d, so change to that directory:
cd /etc/openshift/plugins.d
Enable the remote-user auth plug-in by copying the example configuration file as follows:
cp openshift-origin-auth-remote-user.conf.example openshift-origin-auth-remote-user.conf
Enable the mcollective messaging plug-in by copying the example configuration file as follows:
cp openshift-origin-msg-broker-mcollective.conf.example openshift-origin-msg-broker-mcollective.conf
Configure the dns-bind plug-in as follows:
cat <<EOF > openshift-origin-dns-bind.conf
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="${domain}"
BIND_KEYVALUE="${KEY}"
BIND_ZONE="${domain}"
EOF
Note: Make sure that ${domain} and ${KEY} are set appropriately (see the section on configuring BIND).
The dns-bind plug-in requires that an additional SELinux policy be compiled and installed using the make and semodule commands:
pushd /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/ && make -f /usr/share/selinux/devel/Makefile ; popd
semodule -i /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/dhcpnamedforward.pp
The above steps are performed by the configure_auth_plugin, configure_messaging_plugin, and configure_dns_plugin functions in the kickstart script.
With the remote-user authentication plug-in, the OpenShift broker service relies on httpd to handle authentication and pass on the authenticated user (the "remote user"). Thus it is necessary to configure authentication in httpd. In a production environment, you may configure httpd to use LDAP, Kerberos, or another industrial-strength technology. For this tutorial, we will take a more modest approach and configure authentication using Basic Auth and an htpasswd file.
First, we copy the example httpd configuration file into place:
cp /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user-basic.conf.sample /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
The above configuration file configures httpd to use /etc/openshift/htpasswd for its password file. Use the following command to create this file with a single authorized user, "username":
htpasswd -c /etc/openshift/htpasswd username
The above steps are performed by the configure_httpd_auth function in the kickstart script.
We must generate a broker access key to be used by Jenkins and other optional services:
openssl genrsa -out /etc/openshift/server_priv.pem 2048
openssl rsa -in /etc/openshift/server_priv.pem -pubout > /etc/openshift/server_pub.pem

We also need to generate a key pair for the broker to use to move gears between nodes:
ssh-keygen -t rsa -b 2048 -f ~/.ssh/rsync_id_rsa
cp ~/.ssh/rsync_id_rsa* /etc/openshift/
The above steps are performed by the configure_access_keys_on_broker function in the kickstart script.
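As a sketch, the broker access-key generation above can be rehearsed in a scratch directory before touching the real host; "$tmp" is a stand-in for /etc/openshift, and everything else is the same openssl invocation used above.

```shell
# Rehearsal of the broker access-key generation in a scratch directory;
# "$tmp" stands in for /etc/openshift on the real broker host.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/server_priv.pem" 2048 2>/dev/null
openssl rsa -in "$tmp/server_priv.pem" -pubout > "$tmp/server_pub.pem" 2>/dev/null
grep -q 'BEGIN PUBLIC KEY' "$tmp/server_pub.pem" && echo "broker key pair OK"
```

Once the commands behave as expected, run them against the real /etc/openshift paths as shown above.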
You need to create an account in MongoDB for the broker to use. From the broker's command shell, use the MongoDB addUser command to create this user (but choose a secure password):
mongo openshift_broker_dev --eval 'db.addUser("openshift", "password")'
You should use a secure password; ensure that you edit the /etc/openshift/broker.conf file and change MONGO_PASSWORD to your password accordingly (rather than leaving the shipped default "mooo").
This step is performed (with default password) by the configure_mongo_password function in the kickstart script.
Verify that the "openshift" and "admin" accounts have been created:
echo 'db.system.users.find()' | mongo openshift_broker_dev
You should see an entry for the "openshift" user.
At this point, it is a good idea to verify that Bundler can find the necessary Ruby modules (or "gems") to run the broker Rails application:
cd /var/www/openshift/broker
bundle --local
You should see the following output:
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
You also need to configure the broker to start when you reboot Host 1. Run the following command:
chkconfig openshift-broker on
Now you should be able to start the broker:
service httpd start
service openshift-broker start
The chkconfig step above is performed by the configure_controller function in the kickstart script.
Once started, one quick test is to retrieve the REST API base using curl on the broker:
curl -Ik https://localhost/broker/rest/api
Check that you get a 200 OK response. If you do not, try the command again without the "-I" option and look for an error message or Ruby backtrace:
curl -k https://localhost/broker/rest/api
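If you want to script this health check (for example, in a post-install verification script), the status code can be extracted from the response header; status_of below is a hypothetical helper, not part of OpenShift.

```shell
# Hypothetical helper: pull the status code out of an HTTP response header,
# e.g. the first line returned by `curl -Ik https://localhost/broker/rest/api`.
status_of() { printf '%s\n' "$1" | head -n1 | awk '{print $2}'; }

hdr='HTTP/1.1 200 OK'      # sample header; in practice: hdr=$(curl -Iks https://localhost/broker/rest/api)
code=$(status_of "$hdr")
echo "$code"               # 200
```

A wrapper script could then fail loudly (exit nonzero) whenever the code is not 200.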
This section describes how to install and configure the second OpenShift host, which will function as a node hosting gears that contain applications. Many of the steps to configure a node are the same as the steps involved in configuring a broker, so some explanatory details are omitted from the following process.
The host running the broker can also be used for the node, or you can use a distinct host that connects to the broker over a network. The instructions here make no assumptions either way. Ordinarily, combining a node and broker on the same host is only done for demonstrations; for a variety of reasons, we recommend that node hosts not include any broker components in a production setting.
You should perform all of the procedures in this section after you have installed and configured the base operating system as described earlier in the Base Operating System Configuration section.
OpenShift Origin currently relies on many packages that are not in Red Hat Enterprise Linux and must be retrieved from OpenShift repositories.
Setting up the OpenShift Node Repository
Host 2 requires packages from the OpenShift Node repository and the OpenShift JBoss repository. To set up this repository:
1. Create the following file:
/etc/yum.repos.d/openshift-node.repo
2. Add the following content:
[openshift_node]
name=OpenShift Node
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/Node/x86_64/os/
enabled=1
gpgcheck=0
3. Save and close the file.
4. Create the following file:
/etc/yum.repos.d/openshift-jboss.repo
5. Add the following content:
[openshift_jbosseap]
name=OpenShift JBossEAP
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/JBoss_EAP6_Cartridge/x86_64/os/
enabled=1
gpgcheck=0
6. Save and close the file.
The above steps are performed by the configure_node_repo function in the kickstart script.
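Both repository files can also be written in one shot with heredocs. This sketch writes them into a scratch directory standing in for /etc/yum.repos.d; the base URL and build date are the ones used in this guide and may need updating for your build.

```shell
repodir=$(mktemp -d)       # stand-in for /etc/yum.repos.d
base=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15

cat > "$repodir/openshift-node.repo" <<EOF
[openshift_node]
name=OpenShift Node
baseurl=$base/Node/x86_64/os/
enabled=1
gpgcheck=0
EOF

cat > "$repodir/openshift-jboss.repo" <<EOF
[openshift_jbosseap]
name=OpenShift JBossEAP
baseurl=$base/JBoss_EAP6_Cartridge/x86_64/os/
enabled=1
gpgcheck=0
EOF

ls "$repodir"
```

After verifying the contents, copy the files to the real /etc/yum.repos.d directory.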
To update all of the base packages needed for these instructions, run the following command.
yum update
It is important to do this to ensure at least the selinux-policy package is updated, as OpenShift relies on a recent update to this package.
In the kickstart script, this step is performed after configuring the repositories.
In order for communication between the node (Host 2) and the broker (Host 1) to work properly, you must ensure that Host 2's hostname resolves properly. Earlier, we had instructions on setting up a BIND server and configuring the Host 1 to use this BIND server. You must now update DNS (in these instructions, the Host 1 BIND server) to resolve Host 2's hostname.
We must run the following commands on Host 1 because it has the ability to update DNS records. Set the $keyfile environment variable on Host 1 to contain the filename of the DNSSEC key for our domain (replace "example.com" with the domain name you have chosen for this installation of OpenShift):
keyfile=/var/named/example.com.key
Run the following command on Host 1, replacing "example.com" with the domain name you have chosen, and "10.0.0.2" with the IP address of Host 2:
oo-register-dns -h node -d example.com -n 10.0.0.2 -k ${keyfile}
This is a convenience command equivalent to the nsupdate command demonstrated in Host 1 setup.
This step is not performed by the kickstart script, but could be added if you know your node IP addresses in advance.
Point Host 2 to the named service running on Host 1 so that Host 2 can resolve the hostnames of the broker (Host 1) and of any other broker or node hosts that you configure, and so that Host 1 can resolve the hostname of Host 2.
On Host 2, edit /etc/resolv.conf and add the OpenShift nameserver (which in these directions is installed on Host 1) at the top of the file, changing "10.0.0.1" to the IP address of Host 1:
nameserver 10.0.0.1
This step is performed by the update_resolv_conf function in the kickstart script.
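The edit can be rehearsed on a scratch copy of resolv.conf first; in this sketch, 10.0.0.1 is this guide's Host 1 address and must match your broker, and the "existing" file contents are made up for illustration.

```shell
# Sketch: prepend the OpenShift nameserver to a scratch copy of resolv.conf.
rc=$(mktemp)
printf 'search example.com\nnameserver 8.8.8.8\n' > "$rc"   # sample existing file
sed -i '1i nameserver 10.0.0.1' "$rc"
head -n1 "$rc"             # nameserver 10.0.0.1
```

The sed '1i' command inserts the new nameserver line before line 1, which is what makes Host 1 the first resolver consulted.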
In order to be able to move gears between nodes, access keys need to be set up for the broker host to access the node host. A previous step created the key pair on the broker host; we now need to enable access to the node host via this key.
scp root@broker.example.com:/etc/openshift/rsync_id_rsa.pub /root/.ssh/
# the above step will ask for the root password of the broker machine
cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
rm /root/.ssh/rsync_id_rsa.pub
We need to perform some general system-wide network configuration on Host 2. Replace eth0 in the filenames below with the appropriate network interface for your system.
Perform the following steps on Host 2:
1. To send DNS requests to Host 1 and to assume the appropriate hostname and domain name, configure the DHCP client by appending the following lines to /etc/dhcp/dhclient-eth0.conf. Replace "10.0.0.1" with the actual IP address of Host 1, and replace "node" and "example.com" with the actual hostname and domain name of Host 2:
prepend domain-name-servers 10.0.0.1;
supersede host-name "node";
supersede domain-name "example.com";
2. Edit /etc/sysconfig/network and set the "HOSTNAME=" parameter to the following, replacing "node.example.com" with the Fully Qualified Domain Name (FQDN) of Host 2:
HOSTNAME=node.example.com
3. Run the hostname command:
hostname node.example.com
These steps are performed by the configure_network function in the kickstart script.
Run the hostname command to verify the hostname of Host 2.
hostname
Host 1, our broker, will use MCollective to communicate with Host 2.
To install all of the packages needed for MCollective, run the following command:
yum install mcollective openshift-origin-msg-node-mcollective
We now configure MCollective so that Host 2 can communicate with the broker service on Host 1.
Replace the contents of /etc/mcollective/server.cfg with the following configuration, changing the setting for "plugin.stomp.host" from "broker.example.com" to the hostname of Host 1, and using the same password for the mcollective user that you specified in activemq.xml:
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = debug
daemonize = 1
direct_addressing = n
registerinterval = 30

# Plugins
securityprovider = psk
plugin.psk = unset

connector = stomp
plugin.stomp.host = broker.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
Make the service restart on reboot with the following command:
chkconfig mcollective on
Now start the mcollective service with the following command:
service mcollective start
The above steps are performed by the configure_mcollective_for_activemq_on_node function in the kickstart script.
On Host 1, use the mco ping command to verify that Host 1 recognizes Host 2:
mco ping
In this section, we will install and configure the packages that specifically provide the node functionality.
Install the required packages by running the following command:
yum install rubygem-openshift-origin-node rubygem-passenger-native openshift-origin-port-proxy openshift-origin-node-util
This step is performed by the install_node_pkgs function in the kickstart script.
You can also install any desired cartridge packages at this point. A cartridge can be either a web cartridge or a regular cartridge.
A web cartridge provides support for a specific type of application to run on OpenShift. For example, a web cartridge exists that supports PHP development, and another exists for Ruby development.
Regular cartridges exist to support additional functionality on which an application may rely. For example, cartridges exist for the MySQL and PostgreSQL database servers.
If you choose not to install a particular cartridge now, you can still do so later. However, a cartridge package must be installed before application developers can create applications that require that particular cartridge.
The following is a list of web cartridge packages that you may want to install:
Package name | Description |
---|---|
openshift-origin-cartridge-diy-0.1 | diy ("do it yourself") application type |
openshift-origin-cartridge-haproxy-1.4 | haproxy-1.4 support |
openshift-origin-cartridge-jbossews-1.0.noarch | JBoss EWS 1.0 support |
openshift-origin-cartridge-jbosseap-6.0 | JBossEAP 6.0 support |
openshift-origin-cartridge-jenkins-1.4 | Jenkins server for continuous integration |
openshift-origin-cartridge-perl-5.10 | mod_perl support |
openshift-origin-cartridge-php-5.3 | PHP 5.3 support |
openshift-origin-cartridge-python-2.6 | Python 2.6 support |
openshift-origin-cartridge-ruby-1.8 | Ruby Rack support running on Phusion Passenger (Ruby 1.8) |
openshift-origin-cartridge-ruby-1.9-scl | Ruby Rack support running on Phusion Passenger (Ruby 1.9) |
Package name | Description |
---|---|
openshift-origin-cartridge-cron-1.4 | Embedded crond support |
openshift-origin-cartridge-jenkins-client-1.4 | Embedded jenkins client |
openshift-origin-cartridge-mysql-5.1 | Embedded MySQL server |
openshift-origin-cartridge-postgresql-8.4 | Embedded PostgreSQL server |
NB: You must install openshift-origin-cartridge-cron-1.4.
Due to a packaging issue, the openshift-origin-cartridge-cron-1.4.noarch package installs configuration files that are essential for updating configuration for communication between the broker and the nodes (the facter). To install this package, run the following command:
yum install openshift-origin-cartridge-cron-1.4
NB: Each node host must have the same list of cartridges installed. As currently implemented, gear placement does not take into account differences in the available cartridges per node host. All node hosts are assumed to have the same cartridges, and gear creation will fail on a node host that is missing cartridges required for the gear.
The above steps are performed by the install_cartridges function in the kickstart script.
The node will need to run the SSH daemon to provide application developers with GIT access. The node must also allow HTTP and HTTPS connections to the applications running within gears on the node.
Configure the firewall and set the required system services to start when the node boots by running the following lokkit and chkconfig commands:
lokkit --service=ssh
lokkit --service=https
lokkit --service=http
chkconfig httpd on
chkconfig network on
chkconfig sshd on
The above steps are performed by the enable_services_on_node function in the kickstart script.
OpenShift uses custom PAM configuration to restrict users who connect to gears using ssh.
Perform the following command to configure PAM on Host 2:
sed -i -e 's|pam_selinux|pam_openshift|g' /etc/pam.d/sshd
for f in "runuser" "runuser-l" "sshd" "su" "system-auth-ac"
do
  t="/etc/pam.d/$f"
  if ! grep -q "pam_namespace.so" "$t"
  then
    echo -e "session\t\trequired\tpam_namespace.so no_unmount_on_close" >> "$t"
  fi
done
The above step is performed by the configure_pam_on_node function in the kickstart script.
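Note that the loop above is deliberately idempotent: it appends the pam_namespace line only if it is not already present, so re-running the configuration is safe. The guard pattern can be exercised on a temporary file:

```shell
# Demonstrate the guard used above: the append happens at most once, no
# matter how many times it runs ("$t" stands in for a file in /etc/pam.d).
t=$(mktemp)
add_line() {
  grep -q "pam_namespace.so" "$1" || \
    printf 'session\t\trequired\tpam_namespace.so no_unmount_on_close\n' >> "$1"
}
add_line "$t"
add_line "$t"                      # second run is a no-op
grep -c "pam_namespace.so" "$t"    # prints 1
```

Without the grep guard, each re-run would add a duplicate session line to the PAM files.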
Linux kernel cgroups are used on OpenShift node hosts to contain application processes and to fairly allocate resources.
The cgroups implementation relies on two services, cgconfig and cgred, which must both be running for cgroups containment to be in effect. In addition, there is a "pseudo-service" called openshift-cgroups which creates the OpenShift cgroups for gear containment within the /cgroup/all subtree.
Configure cgroups by running the following commands:

cp -f /usr/share/doc/*/cgconfig.conf /etc/cgconfig.conf
restorecon -v /etc/cgconfig.conf
mkdir /cgroup
restorecon -v /cgroup
chkconfig cgconfig on
chkconfig cgred on
chkconfig openshift-cgroups on
service cgconfig restart
service cgred restart
service openshift-cgroups start
Note: For OpenShift to function properly the cgroups-related services must always be started in this order:
service cgconfig start
service cgred start
service openshift-cgroups start
The above step is performed by the configure_cgroups_on_node function in the kickstart script.
Verifying

When the cgconfig service is running correctly, you should see the following:

- /etc/cgconfig.conf has the SELinux context system_u:object_r:cgconfig_etc_t:s0
- /cgroup has the SELinux context system_u:object_r:cgroup_t:s0
- service cgconfig status reports Running

When the cgred service is running correctly, you should see the following:

- /etc/cgrules.conf has the SELinux context system_u:object_r:cgrules_etc_t:s0
- service cgred status reports running

When the openshift-cgroups pseudo-service has been run successfully, you should see the OpenShift gear cgroups under the /cgroup/all tree.
Disk quotas per gear are enforced by setting user quotas (as each gear corresponds to a system user). The quota values are set in /etc/openshift/resource_limits.conf, where you can change these values to suit your needs:
quota_files | number of files the gear is allowed to own. |
quota_blocks | amount of space the gear is allowed to consume in blocks (1 block = 1024 bytes) |
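Since one quota block is 1024 bytes, translating a desired per-gear disk limit into a quota_blocks value is simple arithmetic; for example, a 1 GB limit:

```shell
# Convert a per-gear disk limit in GB to quota blocks (1 block = 1024 bytes).
gb=1
quota_blocks=$(( gb * 1024 * 1024 ))
echo "$quota_blocks"       # 1048576
```

Adjust gb to whatever limit suits your deployment and put the result into resource_limits.conf.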
Initialize the quota database by running quotacheck (user quotas must be enabled on the filesystem holding the gears, typically via the usrquota mount option):

quotacheck -cmug /
Now create an application and check that your quota took effect with the following command, replacing <gear UUID> with the UUID (Unix user name) of the new gear:

repquota -a | grep <gear UUID>
Configure SELinux policy for the node and fix SELinux contexts by running the following commands.
1. Set Boolean values:
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_read_user_content=on httpd_enable_homedirs=on httpd_run_stickshift=on allow_polyinstantiation=on
The following table explains these Boolean values:
Boolean Value | Purpose |
httpd_unified | Allow the node to write files in the "http" file context. |
httpd_can_network_connect | Allow the node to access the network. |
httpd_can_network_relay | Allow the node to relay (proxy) network connections. |
httpd_read_user_content | Allow the node to read applications' data. |
httpd_enable_homedirs | Allow the node to access content in gear home directories. |
httpd_run_stickshift | Allow the node's httpd to run OpenShift (StickShift) application code. |
allow_polyinstantiation | Allow polyinstantiation for gear containment. |
2. Relabel files with the proper SELinux contexts:
fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/rubygems/gems/passenger-*
restorecon -rv /usr/sbin/mcollectived /var/log/mcollective.log /var/run/mcollectived.pid
restorecon -rv /var/lib/openshift /etc/openshift/node.conf /etc/httpd/conf.d/openshift
The above steps are performed by the configure_selinux_policy_on_node function in the kickstart script.
To accommodate OpenShift's intensive use of semaphores, ports, and connection tracking, certain sysctl knobs must be turned by editing /etc/sysctl.conf.
1. Open /etc/sysctl.conf and increase kernel semaphores to accommodate many httpds by appending the following line to the file:
kernel.sem = 250 32000 32 4096
2. Move ephemeral port range to accommodate application proxies by appending the following line to sysctl.conf:
net.ipv4.ip_local_port_range = 15000 35530
3. Increase the connection-tracking table size by appending the following line to sysctl.conf:
net.netfilter.nf_conntrack_max = 1048576
4. Reload sysctl.conf and activate the new settings by running the following command:
sysctl -p /etc/sysctl.conf
The above steps are performed by the configure_sysctl_on_node function in the kickstart script.
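The three sysctl additions can be applied idempotently with a small loop; this sketch edits a scratch file standing in for /etc/sysctl.conf so the pattern can be verified before editing the real file.

```shell
sc=$(mktemp)               # stand-in for /etc/sysctl.conf
for line in \
  'kernel.sem = 250 32000 32 4096' \
  'net.ipv4.ip_local_port_range = 15000 35530' \
  'net.netfilter.nf_conntrack_max = 1048576'
do
  # Append each setting only if an identical line is not already present.
  grep -qxF "$line" "$sc" || echo "$line" >> "$sc"
done
wc -l < "$sc"              # 3
```

On the real host, point the loop at /etc/sysctl.conf and finish with `sysctl -p /etc/sysctl.conf` as above.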
Prepare sshd for use on the node.
1. Open /etc/ssh/sshd_config and configure the server to pass the GIT_SSH environment variable through by appending the following line:
AcceptEnv GIT_SSH
2. The SSH server handles a high number of SSH connections from developers connecting to the node to push their changes. To accommodate this volume, increase the limits on the number of connections to the node by running the following commands:
perl -p -i -e "s/^#MaxSessions .*$/MaxSessions 40/" /etc/ssh/sshd_config
perl -p -i -e "s/^#MaxStartups .*$/MaxStartups 40/" /etc/ssh/sshd_config
The above steps are performed by the configure_sshd_on_node function in the kickstart script.
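The same substitutions can be rehearsed with sed on a scratch copy before touching the real sshd_config; the sample file contents below are illustrative.

```shell
sc=$(mktemp)               # stand-in for /etc/ssh/sshd_config
printf '#MaxSessions 10\n#MaxStartups 10\n' > "$sc"
sed -i -e 's/^#MaxSessions .*$/MaxSessions 40/' \
       -e 's/^#MaxStartups .*$/MaxStartups 40/' "$sc"
cat "$sc"
```

Both patterns anchor on the leading "#", so already-uncommented settings are left untouched, matching the behavior of the perl commands above.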
Recall that applications are contained within gears. These applications listen for connections on the loopback interface. The node runs a proxy that listens on external-facing ports and forwards incoming requests to the appropriate application. Configure that service proxy as follows. On Host 2, perform the following steps:
1. Open the range of external ports that are allocated for application use:
lokkit --port=35531-65535:tcp
2. Set the proxy service to start when Host 2 boots:
chkconfig openshift-port-proxy on
3. Start the service now:
service openshift-port-proxy start
The openshift-gears service script starts gears when a node host is rebooted. Enable this service with the following command:
chkconfig openshift-gears on
The above steps are performed by the configure_port_proxy function in the kickstart script.
On Host 2, update the node settings for your chosen hostnames and domain name by making the following changes to /etc/openshift/node.conf:
1. Set the value of "PUBLIC_IP" to the following, replacing "10.0.0.2" with the IP address of the node:
PUBLIC_IP=10.0.0.2
2. Set the value of "CLOUD_DOMAIN" to the following, replacing "example.com" with the domain you are using for your OpenShift installation:
CLOUD_DOMAIN=example.com
3. Set the value of "PUBLIC_HOSTNAME" to the following, replacing "node.example.com" with the hostname of Host 2:
PUBLIC_HOSTNAME=node.example.com
4. Set the value of "BROKER_HOST" to the following, replacing "10.0.0.1" with the IP address of Host 1:
BROKER_HOST=10.0.0.1
The above steps are performed by the configure_node function in the kickstart script.
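Taken together, the four node.conf settings look like this; the values are the example addresses and names used throughout this guide and must match your own hosts. The sketch writes them to a scratch file rather than the real /etc/openshift/node.conf.

```shell
nc=$(mktemp)               # scratch stand-in for /etc/openshift/node.conf
cat > "$nc" <<'EOF'
PUBLIC_IP=10.0.0.2
CLOUD_DOMAIN=example.com
PUBLIC_HOSTNAME=node.example.com
BROKER_HOST=10.0.0.1
EOF
grep -c '=' "$nc"          # 4
```

A quick sanity check like the grep above catches a missed setting before you restart node services.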
Facter generates metadata files for MCollective and is normally run by cron. Run facter now to make the initial database and ensure that it runs properly:
/etc/cron.minutely/openshift-facts
The above step is performed by the update_openshift_facts_on_node function in the kickstart script.
Reboot the node to enable all changes.
In this section, we will describe some useful tips for testing that OpenShift is installed correctly and for diagnosing problems.
First, verify that the mcollective daemon is running on the node hosts. Run the following command on each node host:
service mcollective status
If it is not running, start it:
service mcollective start
Perform the above step on all node hosts.
The command-line interface to MCollective is provided by the mco command. This command can be used to perform some diagnostics concerning communication between the broker and node hosts. To get a list of available commands, enter the following command on a broker host:
mco help
In particular, enter the following command on a broker host to see which node hosts the current broker host is aware of:
mco ping
You should see output similar to the following:
node.example.com                         time=100.02 ms

---- ping statistics ----
1 replies max: 100.02 min: 100.02 avg: 100.02
The output should list all node hosts. If any hosts are missing, verify that they are running and configured properly.
Note that we do not generally want the mcollective daemon running on the broker. The broker uses the mcollective client to contact the nodes, which will be running the daemon. If the broker runs the mcollective daemon, then it will respond to mco ping and effectively behave as both a broker and a node. Unless you have also run the node configuration on the broker host, this will result in problems with creating applications.
In particular, one frequently seen problem is clock skew. Every mcollective request includes a timestamp, which comes from the sending host's clock. If a sender's clock is substantially behind a recipient's clock, the recipient drops the message. Consequently, a host will not appear in the mco ping output if its clock is too far behind. You can check for this problem by looking in /var/log/mcollective.log:
W, [2012-09-28T11:32:26.249636 #11711] WARN -- : runner.rb:62:in `run' Message 236aed5ad9e9967eb1447d49392e76d8 from uid=0@broker.example.com created at 1348845978 is 368 seconds old, TTL is 60
The above message indicates that the current host received a message that was 368 seconds old, and it was discarded because its TTL ("time to live," the duration for which it should be considered relevant) was only 60 seconds. You can also run the date command on the different hosts and compare the output across those hosts to check for skew.
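The age-versus-TTL comparison mcollective performs can be mimicked with shell arithmetic; the numbers below come from the log line above.

```shell
# Reproduce the age-vs-TTL check from the log message above.
created=1348845978         # sender's timestamp from the log line
now=$(( created + 368 ))   # receiver's clock, 368 seconds ahead of the sender
ttl=60
age=$(( now - created ))
if [ "$age" -gt "$ttl" ]; then
  echo "message dropped: ${age}s old, TTL ${ttl}s"
fi
```

Any skew larger than the TTL (60 seconds here) causes silent message drops, which is why the host vanishes from mco ping output.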
The recommended solution is to configure NTP, as described in the earlier instructions. Alternatively, see the documentation for the date command to set the time manually.
Verify that the broker and nodes have network connectivity with one another using the host or ping command. For example, on the node:
host broker.example.com
Verify that broker.example.com resolves correctly on the nodes, and that the hostnames of the nodes resolve correctly on the broker. If they do not, check your DNS configuration in /etc/resolv.conf and the named configuration files described in the section on configuring BIND. Check /var/named/dynamic/${domain}.db to see whether the domain names of nodes and applications have been added to BIND's database. Note that BIND may maintain a journal under /var/named/dynamic/${domain}.db.jnl. If the ${domain}.db file is out of date, check the ${domain}.db.jnl file for recent changes.
If MongoDB is not properly configured, the rhc tool will fail with unhelpful error messages. Thus if you are receiving unhelpful error messages from rhc, a good place to start is to check the MongoDB configuration.
On the broker, verify that MongoDB is running:
service mongod status
If it is not running, check /var/log/mongodb/mongodb.log for clues. One error to watch out for is the "multiple_occurences" error. If you see this error, check /etc/mongodb.conf for duplicate configuration lines—any duplicates will cause the startup to fail.
If mongod is running, try to connect to the database:
mongo openshift_broker_dev
You should get a command prompt from MongoDB.
Following are some log files to check for problems with the broker (the broker is a Rails application under /var/www/openshift/broker): the broker httpd logs under /var/www/openshift/broker/httpd/logs/ and the Rails application log /var/www/openshift/broker/log/production.log.
If there are problems with broker-node communication, check /var/log/mcollective.log on both the broker and the nodes.
For general problems on the broker or node, the system httpd logs under /var/log/httpd/ and the configuration files /etc/openshift/broker.conf (on the broker) and /etc/openshift/node.conf (on nodes) may be helpful.
On nodes, gear information is stored in the /var/lib/openshift directory.
SELinux denials may cause errors. One way to check whether SELinux is behind some mysterious errors is to temporarily disable policy enforcement using the setenforce command:
setenforce 0
This command puts SELinux into permissive mode, so it stops blocking access to resources. If the mysterious errors go away, then the SELinux configuration or permissions need to be fixed. Even in permissive mode, SELinux still logs attempts to access resources that would be denied if it were enforcing. Check the /var/log/audit/audit.log file to see what SELinux is configured to deny, and see the relevant instructions on configuring SELinux (using the setsebool and semodule commands) and setting contexts (using the chcon, fixfiles, and restorecon commands).
Remember to enable SELinux again after diagnosing the problem:
setenforce 1
If you are seeing problems when you try to use the rhc tool with your OpenShift installation, ensure that rhc is configured properly per the instructions in Part III. Use the "-d" option for rhc to get additional diagnostic output. Standard tools such as netstat or tcpdump may be useful in diagnosing problems with rhc.
This part of the document explains the operation of your newly installed OpenShift PaaS. It contains instructions on how to prepare a developer workstation to create applications and domains on your OpenShift PaaS.
But before a developer can begin creating applications, a developer account must be created on the broker.
Create a developer account on the broker using the htpasswd command:
htpasswd -c /etc/openshift/htpasswd bob
The htpasswd command prompts you for a password; the -c option makes it create a new /etc/openshift/htpasswd file, to which the user is added.
Omit the -c option when adding subsequent users:
htpasswd /etc/openshift/htpasswd alice
Verify the account has been created by listing the /etc/openshift/htpasswd file:
cat /etc/openshift/htpasswd
You should see a one-line entry for each user.
An application developer uses the OpenShift client tools to create domains and applications on the OpenShift PaaS. Instructions for installing the client tools are beyond the scope of this document; refer to the OpenShift client tools documentation for instructions on how to install them on supported operating systems.
Your workstation must be configured to resolve the host names used in your OpenShift PaaS installation. You have three options to do this, which are listed below.
Option 1: Edit the /etc/resolv.conf file to use a DNS server that will resolve the addresses used for the broker and any applications on your OpenShift PaaS.
Option 2: Add the required addresses to the /etc/hosts file on the workstation.
Option 3: Use the OpenShift client tools directly on the broker.
The OpenShift client tools are available in the OpenShift Client repository. To set up the repository:
1. Create the following file:
/etc/yum.repos.d/openshift-client.repo
2. Add the following content:
[openshift_client]
name=OpenShift Client
baseurl=https://mirror.openshift.com/pub/origin-server/nightly/enterprise/2012-11-15/Client/x86_64/os/
enabled=1
gpgcheck=0
3. Save and close the file.
The above steps are performed by the configure_client_repo function in the kickstart script. The script installs the client tools on both nodes and brokers so that they are available for diagnostics.
The client tools by default connect to Red Hat's OpenShift hosted service. To use the client tools with your OpenShift PaaS installation, you can override the default server using an environment variable:
export LIBRA_SERVER=broker.example.com
Now use the command below to run the OpenShift interactive setup wizard and create a configuration file.
Note: This will overwrite any existing configuration contained in the ~/.openshift/express.conf file.
rhc setup
The OpenShift interactive setup wizard creates a new express.conf configuration file, with the specified user and server settings, in the ~/.openshift directory. You can have multiple configuration files, and use the --config option with the OpenShift client tools to select which configuration file is used, as shown in the example below.
mv ~/.openshift/express.conf ~/.openshift/express.conf.old
rhc setup
# Answer the questions when prompted by rhc setup.
mv ~/.openshift/express.conf ~/.openshift/bob.conf
mv ~/.openshift/express.conf.old ~/.openshift/express.conf
rhc domain show
# You should see the domain for your account configured in express.conf.
rhc domain show --config ~/.openshift/bob.conf
# You should see the domain for your account configured in bob.conf.
Test the creation of a new domain and a new application by running the following commands on the developer's workstation:
rhc domain create testdom
rhc app create testapp php
If the commands succeed, congratulations! If you receive an error message, you can use the "-d" command-line option to get additional debugging output, and you can check the logs on the broker for hints. Always feel free to post comments below or to visit #openshift-dev on Freenode for help debugging any issues.