This chapter provides an overview of cluster installation and
administration.
9.1 Installation
Previous versions of TruCluster supported three installation types: full installation, rolling upgrade, and simultaneous upgrade. The rolling upgrade and simultaneous upgrade procedures provided a way to preserve the existing cluster or ASE configuration information while installing a later version of the product.
TruCluster Server Version 5.0 and Version 5.0A support two installation types: a full installation and an upgrade procedure. Rolling upgrade is not supported for these initial releases because there are significant changes to both the base operating system and the cluster architectures. For this reason, the recommended installation path is a full installation of the base operating system followed by a full installation of the TruCluster Server.
The TruCluster Server Software Installation manual describes how to upgrade a Version 5.0 cluster to a Version 5.0A cluster. That manual also provides three options for customers upgrading to Version 5.0A from TruCluster Software Version 1.5 or Version 1.6 products. Two of these upgrade options use scripts specifically designed to facilitate the migration of storage from the old cluster (rz*-style device names) to the new cluster (dsk*-style device names).
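For illustration, the old-style and new-style names for the same disk might look like the following. The unit numbers shown are hypothetical, and unit numbers do not necessarily map one-to-one between the two naming schemes:

    Old-style (rz) names:    /dev/rz8c          /dev/rrz8c
    New-style (dsk) names:   /dev/disk/dsk8c    /dev/rdisk/dsk8c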
TruCluster Server Version 5.0A incorporates the software infrastructure required to support future rolling upgrades. Customers who install TruCluster Server Version 5.0A will be able to perform a rolling upgrade to the next TruCluster Server release. As part of preparing for rolling upgrades, TruCluster Server Version 5.0A provides a new command, clu_upgrade, which will control the rolling of a cluster to the next release. Note that this command will be of use only when you are installing the release that follows Version 5.0A. See clu_upgrade(8) for a description of the clu_upgrade command.
One major difference in installing TruCluster Server Version 5.* is that you install Tru64 UNIX on only one system in the cluster. Because CFS creates shared clusterwide file systems, once a cluster is created, additional members boot into the cluster and have access to these files. (In previous releases, you had to install the base operating system on all cluster members, and there were no clusterwide file systems.)
For TruCluster Server, the initial creation of a cluster, the adding of members, and the removing of members are accomplished through three interactive installation scripts: clu_create, clu_add_member, and clu_delete_member. The scripts provide online help and write log files to the /cluster/admin directory.
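For example, after running clu_create you can examine its log in that directory (the log file names shown here are hypothetical; list the directory to see the actual names):

    # ls /cluster/admin
    clu_create.log    clu_add_member.log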
The following list outlines the steps needed to form a new TruCluster Server cluster; a command-level sketch of the sequence follows the list:
Using the information in the TruCluster Server Hardware Configuration manual, configure the system and storage hardware and firmware.
Install Tru64 UNIX on a private disk on the system that will become the first cluster member, selecting AdvFS file systems during the installation.
Configure the Tru64 UNIX system, including network and time services. Load and configure the applications you plan to use in the cluster.
Load the TruCluster Server license and software.
Note
Each cluster member must have both a Tru64 UNIX license and a TruCluster Server license.
Run the clu_create command to create the boot disk for the first cluster member, and to create and populate the clusterwide root (/), /usr, and /var AdvFS file systems.
Halt the Tru64 UNIX system and boot the disk containing the first member's cluster boot partition. As the system boots, it forms a single-member cluster and mounts the clusterwide root (/), /usr, and /var file systems.
Log in to the single-member cluster and run the clu_add_member command to add members to the cluster.
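The following is a minimal command-level sketch of this sequence. The license registration, kit location, and console boot device (dka100 here) are illustrative only and depend on your site; clu_create and clu_add_member prompt interactively for their configuration details:

    # lmf register                      (register the Tru64 UNIX and TruCluster Server license PAKs)
    # setld -l /mnt/TruCluster/kit      (load the TruCluster Server subsets; the kit path is illustrative)
    # clu_create                        (create the first member's boot disk and the clusterwide file systems)
    # shutdown -h now                   (halt the Tru64 UNIX system)
    >>> boot dka100                     (from the console, boot the first member's cluster boot disk)
    # clu_add_member                    (from the single-member cluster, add each additional member)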
See the TruCluster Server Software Installation manual for more information on installing TruCluster Server.
9.2 Administration
Having a clusterwide file namespace greatly simplifies cluster
management.
A cluster has just one copy of most system configuration
files.
For example, a cluster is managed as a single security domain through one /etc/group file and one /etc/passwd file.
User access to files is independent of which node a user is logged in on, and which node is serving the file. File permissions and access control lists (ACLs) are uniform across the cluster.
Audit logs are kept in a common location; each member's host name is appended to its log files to avoid confusion when tracking audit events.
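As a quick illustration of the shared namespace (the member names and user name here are hypothetical), a change made to /etc/passwd on one member is immediately visible from every other member:

    member1# vipw                        (edit the clusterwide password file on one member)
    member2# grep newuser /etc/passwd    (the change is visible from any other member)

Similarly, per-member audit logs in the common audit directory might carry names such as auditlog.member1 and auditlog.member2 (names hypothetical).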
In most cases, the fact that you are administering a cluster rather than a single system becomes apparent because of the occasional need to manage one of the following aspects of a TruCluster Server environment. Each item is followed by one or more of the cluster-specific commands used to manage or monitor it. With the exception of the installation scripts, you can use the SysMan Menu and SysMan Station GUIs to perform the related command-line functions.
Cluster creation and configuration, which supports creating the initial cluster member, adding and deleting members, and querying the cluster configuration (clu_create, clu_add_member, clu_delete_member, and clu_check_config).
Cluster application availability (CAA), which allows you to define and manage highly available applications (caa_profile, caa_register, caa_unregister, caa_start, caa_stop, caa_relocate, and caa_stat).
Cluster aliases, which provide a single system view from the network (cluamgr).
Cluster quorum and votes, which determine what constitutes a valid cluster and membership in that cluster, and thereby allow access to cluster resources (clu_quorum).
Optional load balancing of the device request dispatcher subsystem (drdmgr).
Optional load balancing of CFS servers (cfsmgr).
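A few representative query invocations follow. The exact options vary by command; treat the flags and the device name (dsk1) shown here as illustrative, and see the reference page for each command:

    # clu_check_config       (verify the cluster configuration)
    # clu_quorum             (display current quorum and vote settings)
    # caa_stat               (show the state of registered CAA resources)
    # cluamgr -s all         (show the status of cluster aliases; the -s all form is an assumption)
    # cfsmgr                 (show which member serves each CFS file system)
    # drdmgr dsk1            (show device request dispatcher information for a device)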
In addition to the previous items, there are some command-level exceptions to the Single System Image (SSI) model. SSI means that, when possible, the cluster appears to the user like a single computer system. For example, when you execute the wall command, the message is sent only to users logged in on the cluster member where the command executes. To send a message to all users logged in on all cluster members, use the wall -c form of the command. The same logic applies to the shutdown command; you can shut down an individual member or the entire cluster.
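For example (wall -c is as described above; the -c option to shutdown is an assumption here, so check shutdown(8) for the exact clusterwide syntax):

    # wall                   (message goes only to users on this member)
    # wall -c                (message goes to users on every cluster member)
    # shutdown -h now        (halt only this member)
    # shutdown -c -h +5      (halt the entire cluster in five minutes; -c assumed)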
See the TruCluster Server Cluster Administration manual for more information on configuring and managing a TruCluster Server cluster.